
Real Time Voice Transmission Using OFDM Modulation

Final Year Project Report Session 2007-08

Project Coordinator

Dr. K.M. Yahya


Professor, Department of Electrical Engineering
Chairman, Department of CISE

Project Team

Class # 03, Miss Rizwana Arshad Class # 10, Muhammad Usman Karim Khan Class # 45, Muhammad Suleman

Department of Electrical and Electronics Engineering NWFP UET, Peshawar, Pakistan

NWFP UET, Final Year Project Report, 2007

Page 1

Acknowledgements

First and foremost, we thank ALLAH for enabling us to finish our project. Thanks to our supervisor Dr. K.M. Yahya for his kind cooperation. Special thanks to Mr. Muhammad Asad, whose brilliance, guidance and devotion to the project were exemplary. Thanks to Mr. Imran Ashraf and Mr. Moeen, whose help and cooperation were a catalyst in bringing the project to its due end. Thanks to all the authors whose research and study material helped us understand the concepts and equipped us to finish our job.


ABSTRACT
Communication systems form a field of rapid advancement, with new technologies replacing previous ones quite often. Orthogonal Frequency Division Multiplexing (OFDM) is one such technique, and it is rapidly finding its way into many wired and wireless communication systems. Owing to its popularity in modern communication systems, OFDM is a worthwhile research topic. The project focuses on the transmission of digital data between two computer-based nodes using Orthogonal Frequency Division Multiplexing. Speech will be captured from the sound card and compressed using the open source SPEEX codec. The compressed audio bit stream will be modulated using QPSK modulation and combined using the IFFT algorithm to form the OFDM signal. The receiver at the other end will receive and demodulate the signal. The demodulated data will be decoded by the SPEEX decoder and played by the sound card. The prerequisites for this project are an in-depth knowledge of Digital Signal Processing, Visual C++, MATLAB (for simulation purposes) and the WIN32 API.


Part 1
Abstract  3
Contents  4
List of Figures  7

Chapter 1  Introduction  10
1.1 Purpose and Objective  11
1.2 Scope  11
1.3 OFDM Basics  11
1.4 Our Project  12
1.5 Tools and Skills  13
1.6 Problems and Challenges Encountered  13

Part 2
Chapter 2  The OFDM System  14
2.1 General Concept  15
2.2 The Importance of Orthogonality  15
2.3 Comparison of Various Multiplexing Techniques  15
2.4 The Block Diagram of an OFDM System  19

Chapter 3  The OFDM Transmitter  21
3.1 The Block Diagram of OFDM Transmitter System  22
3.2 Steps Involved in Generation of an OFDM Symbol  22
  3.2-1 Data Source  22
  3.2-2 Serial to Parallel  22
  3.2-3 Symbol Mapping (Modulation)  22
  3.2-4 IDFT  23
  3.2-5 Guard Interval (GI)  23
  3.2-6 DAC  23
  3.2-7 I/Q Modulation  23
3.3 The Working Principles of OFDM Transmitter  23
  3.3-1 The Importance of Orthogonality  27
  3.3-2 Mathematical Treatment  27
3.4 Fading  30
3.5 Intersymbol Interference  31
3.6 Intercarrier Interference  31
3.7 Delay Spread and Use of Cyclic Prefixing to Mitigate it  31
3.8 Modulation Scheme  34
3.9 Interleaving  34
3.10 Channel Estimation  35
3.11 Adaptive Transmission  36
3.12 Design of the OFDM System  36
3.13 Advantages of OFDM Transmitter Upon Other Schemes  37
3.14 Drawbacks of an OFDM System  37

Chapter 4  The OFDM Receiver  38
4.1 Block Diagram of an OFDM Receiver System  39
4.2 Steps Involved in Recovery of Data from the OFDM Symbol  39
  4.2-1 A/D Converter  39
  4.2-2 Signal Detection  39
  4.2-3 GI Removal  39
  4.2-4 DFT  40
  4.2-5 Symbol Demapping (Detection)  40
  4.2-6 Parallel to Serial  40
4.3 The Working Principles of an OFDM Receiver  40
4.4 Synchronization  48
4.5 Signal Detection  49
  4.5-1 Energy Based Detection Schemes  49
4.6 Symbol Timing  53
4.7 Time Synchronization  53
4.8 Sampling Clock Synchronization  54
  4.8-1 Estimation  54
  4.8-2 Correction  55
4.9 Frequency and Phase Synchronization  55
  4.9-1 Estimation  56
  4.9-2 Correction  56

Part 3
Chapter 5  The OFDM Transmitter in C++  57
5.1 Block Diagram of the Transmitter System  58
5.2 The First Sound Card  58
  5.2-1 Technical Data  58
  5.2-2 Programming Notes  59
  5.2-3 Working  60
5.3 Compressor  61
  5.3-1 Technical Data  61
  5.3-2 Programming Notes  61
  5.3-3 Working  62
5.4 Signal Mapping  62
  5.4-1 Technical Data  62
  5.4-2 Programming Notes  63
  5.4-3 Working  64
5.5 IFFT  64
  5.5-1 Technical Data  64
  5.5-2 Programming Notes  65
  5.5-3 Working  66
5.6 Guard Interval Insertion  66
  5.6-1 Technical Data  67
  5.6-2 Programming Notes  67
  5.6-3 Working  68
5.7 The Pilot Data Block  68
  5.7-1 Technical Data  68
  5.7-2 Programming Notes  69
  5.7-3 Working  69
5.8 The 2nd Sound Card  69
  5.8-1 Technical Data  69
  5.8-2 Programming Notes  69
  5.8-3 Working  71
5.9 Summary of the Procedure  72

Chapter 6  The OFDM Receiver in C++  73
6.1 Block Diagram of the Receiver System  74
6.2 The 1st Sound Card  75
  6.2-1 Technical Data  75
  6.2-2 Programming Notes  76
  6.2-3 Working  77
6.3 The Control Block  77
6.4 The Signal Detection Block  78
  6.4-1 Technical Data  78
  6.4-2 Programming Notes  78
  6.4-3 Working  79
6.5 The Correlation Block  79
  6.5-1 Technical Data  79
  6.5-2 Programming Notes  80
  6.5-3 Working  81
6.6 Hilbert Transform Block  81
  6.6-1 Technical Data  81
  6.6-2 Programming Notes  81
  6.6-3 Working  83
6.7 Synchronization Block  83
  6.7-1 Technical Data  83
  6.7-2 Programming Notes  83
  6.7-3 Working  83
6.8 The GI Removal Block  84
  6.8-1 Technical Data  84
  6.8-2 Programming Notes  84
  6.8-3 Working  84
6.9 The FFT Block  84
  6.9-1 Technical Data  85
  6.9-2 Programming Notes  85
  6.9-3 Working  85
6.10 The Signal Demapper Block  86
  6.10-1 Technical Data  86
  6.10-2 Programming Notes  86
  6.10-3 Working  87
6.11 Decompressor  87
  6.11-1 Technical Data  87
  6.11-2 Programming Notes  88
  6.11-3 Working  88
6.12 The 2nd Soundcard  88
  6.12-1 Technical Data  89
  6.12-2 Programming Notes  89
  6.12-3 Working  90
6.13 Summary of the Procedure  91

Part 4
Chapter 7  The Shortcomings of the Project  93
7.1 The Basic Parameters of Our Project  94
7.2 Parameters of Real OFDM  94
7.3 The Transmitter  95
7.4 The Receiver  95

Part 5
Chapter 8  Visual C++ Implementation of Transmitter  96
8.1 The Implementation of Sound RecordingDlg.h  97
8.2 The Implementation of Sound RecordingDlg.cpp  99

Chapter 9  Visual C++ Implementation of Receiver  106
9.1 The Implementation of Sound detect and playDlg.h  107
9.2 The Implementation of Sound detect and playDlg.cpp  109

Appendixes
A  FFTW  119
B  SPEEX Codec  123
C  The Secret Rabbit Code  126
D  Hilbert Transform  128

References  131

List of Figures
Fig 1.1, Single Carrier vs OFDM modulation technique
Fig 1.2, The Proposed Project
Fig 2.1, A sine wave with areas marked in different colors
Fig 2.2, Different modulation schemes and multiple access schemes
Fig 2.3, Frequency spectrum of conventional and orthogonal frequency division multiplexing
Fig 2.4, Blocks representing the various processing involved in an OFDM system
Fig 3.1, The method of generating an OFDM symbol
Fig 3.2, Serial to Parallel
Fig 3.3, BPSK Constellation
Fig 3.4, Proposed OFDM Transmitter, showing I and Q channels
Fig 3.5, The spectrum of an individual subcarrier of the OFDM signal
Fig 3.6, Symbolic picture of the individual subchannels for an OFDM system with N tones over a bandwidth W
Fig 3.7, Example of the power spectral density of a composite OFDM signal
Fig 3.8, The multipath demonstration
Fig 3.9, Channel response to a band of frequencies
Fig 3.10, Delay spread
Fig 3.11, Comparing the timings and delay of single carrier and parallel carrier schemes
Fig 3.12, A proposed scheme to render insignificant the effect of delay spread
Fig 3.13, The concept of cyclic prefixing
Fig 3.14, The guard interval representation on the frequency and time axes
Fig 3.15, Bit error rate marked against signal to noise strength
Fig 3.16, BER vs SNR for various modulation techniques
Fig 3.17, Bit write and read structure for an 8x6 interleaver
Fig 3.18, Plot of BER vs SNR with and without interleaving
Fig 3.19, Block type pilot arrangement
Fig 3.20, Comb type pilot arrangement
Fig 3.21, The effect of attenuation at the far-end frequencies due to the transfer function of the transmitter or receiver hardware
Fig 4.1, Blocks representing the processes involved in treating an OFDM symbol at the receiver
Fig 4.2, BPSK constellation diagram
Fig 4.3, Illustration of the parallel to serial converter
Fig 4.4, Single sliding window scheme
Fig 4.5, Double sliding window scheme
Fig 4.6, Demonstrating the effect of GI and correlation on detection of the symbol starting point
Fig 4.7, The loss of synchronization between transmitter and receiver sampling clocks
Fig 4.8, The synchronized sampling clock synchronization approach
Fig 4.9, The non-synchronized sampling clock synchronization approach
Fig 4.10, The carrier frequency offset
Fig 5.1, The implementation of the transmitter in C++
Fig 5.2, QPSK constellation diagram
Fig 5.3, The structure of the arrays inData and outData
Fig 5.4, Structure of the array giData
Fig 6.1, Implementation of the OFDM receiver in C++
Fig 6.2, Schematic of obtaining the Hilbert transform
Fig 6.3, Illustration of the FFT block
Fig 6.4, QPSK constellation diagram

Part 1

Chapter 1

INTRODUCTION:
The project focuses on the transmission of digital data between two nodes using the Orthogonal Frequency Division Multiplexing (OFDM) technique. OFDM is a modulation and multiple access technique that has been explored for over twenty years, and only recently has it found its way into commercial communication systems. OFDM is presently used in a number of wired and wireless communication systems. It is a special case of multicarrier data transmission, where a single data stream is transmitted over a number of subcarriers (SCs) to increase robustness against frequency-selective fading or narrowband interference. OFDM is leading engineers into a new era of digital transmission and is becoming the modulation technique of choice worldwide.


1.1 Purpose and Objective:


The basic aim of this project is to implement the OFDM technique for real time transmission of a speech signal. The objective is to efficiently transmit data between two computer-based nodes, in real time, using the OFDM technique. This will provide a test bed for future study of OFDM implementation.

1.2 Scope:
Multicarrier communication systems were first conceived and developed in the 1960s, but it was not until their all-digital implementation with the FFT that their attractive features were unraveled. The processing power of modern digital signal processors has increased to a point where OFDM has become feasible and economical. Examining the patents, journals and books on OFDM, it is clear that this technique will have an impact on the future of communication. Since many communication systems being developed use OFDM, it is a worthwhile research topic. Some examples of current applications using OFDM include the GSTN (General Switched Telephone Network), cellular radio, DSL and ADSL modems, DAB (Digital Audio Broadcast) radio, DVB-T (Terrestrial Digital Video Broadcast), HDTV broadcast, HIPERLAN/2 (the High Performance Local Area Network standard) and the IEEE wireless networking standards.

1.3 OFDM Basics:


This project focuses on OFDM research and simulation. OFDM is especially suitable for high-speed communication due to its resistance to ISI. ISI occurs when a transmission interferes with itself and the receiver cannot decode it correctly. Because the signal reflects from large objects such as mountains or buildings, the receiver sees more than one copy of the signal; in communication technology this is called multipath. Since the indirect paths take more time to reach the receiver, the delayed copies of the signal interfere with the direct signal, causing ISI. As communication systems increase their information transfer speed, the time for each transmission necessarily becomes shorter. Since the delay time caused by multipath remains constant, this places a limit on high data rate communication; OFDM avoids the problem by sending many low-speed transmissions simultaneously. For example, the following figure shows two ways to transmit the same four pieces of binary data.

Fig 1.1, Single Carrier vs OFDM modulation technique


Suppose that this transmission takes four seconds; then each piece of data in the left picture has a duration of one second. OFDM, on the other hand, would send the four pieces simultaneously, as shown on the right, so each piece of data has a duration of four seconds. This longer duration leads to fewer problems with ISI. Another reason to consider OFDM is its low-complexity implementation for high-speed systems compared to traditional single-carrier techniques.

1.4 Our Project:


In this project the OFDM technique will be implemented using two PCs. Each PC will be configured with two sound cards. One sound card will be used for capturing and playing back speech. The second sound card will act as an AD/DA converter: at the transmitter end it will generate the analog OFDM signal from the output of the IFFT, and at the receiver end it will receive the analog OFDM signal and digitize it. The digitized OFDM signal will be demodulated by an FFT routine. The block diagram of the overall system is shown in the figure.

(Fig 1.2 block diagram: the PC2 transmit chain - sound card audio capture, SPEEX codec, QPSK modulation, OFDM via IFFT - connects through the channel to the PC1 receive chain - OFDM via FFT, QPSK demodulation, SPEEX codec, sound card audio playback.)

Fig 1.2, The proposed project

As shown in the figure, PC2 is configured as the OFDM transmitter and PC1 as the receiver. Sound card 1 in PC2 will capture the audio data at a sampling rate of 8000 samples/sec. The SPEEX codec will compress the data and produce a bitstream of 8 kbps. The output of the SPEEX codec will be modulated using QPSK modulation, and the OFDM signal will be generated using the IFFT. The output of the IFFT is then sent to the second sound card, which produces the analog OFDM signal. The channel will be a wire connecting the two PCs: the speaker out of the second sound card in PC2 goes into the line in of the sound card in PC1. At PC1 the analog OFDM signal will be digitized and then demodulated.

1.5 Tools and Skills:
1. Two PCs, each with two sound cards
2. C/C++ programming
3. Usage of the SPEEX, FFTW and Secret Rabbit Code libraries
4. MATLAB and Simulink for simulation purposes
5. WIN32 API
6. Knowledge of Digital Signal Processing and Communication Systems

1.6 Problems and Issues Encountered:

The project is implemented in C++. A PC is obviously not a purpose-built communication transmitter or receiver, and it does not possess the refinement of real-world communication blocks. It is therefore very much required to optimize the whole process, and by optimization of the process we mean optimization of the code. One obvious difference between a PC and a real-world system is that the PC has only one processor. This means that at any one time only one process can be carried out (the processes follow each other in series), which further requires optimizing the code and the implementation to the best possible level. An observant reader will also notice that the sound cards are not perfect transmitting and receiving hardware, because of their sampling restrictions, frequency drift with time, etc.


Part 2

Chapter 2

The OFDM System


Before going into the details of the OFDM transmitter and receiver separately, it is useful to discuss some points that are relatively easy to grasp, and thus lay the foundation for understanding the more in-depth concepts that follow.

2.1 General Concept:


OFDM is a multicarrier, wideband modulation scheme: the data is not modulated onto just a single frequency; rather, it is sent on a number of frequencies, and these frequencies bear a special relationship to each other. OFDM can therefore be thought of as serving a dual purpose: it is both a multiplexing and a modulation technique. The general concept corresponds to assigning various frequencies to the user, with all the frequencies time-divided, to carry the user's digital data. But assigning a large number of frequencies to a single user may not be efficient in terms of the bandwidth of the system. To save bandwidth, we employ frequencies that are orthogonal to each other (details later), and thus the name ORTHOGONAL frequency division multiplexing.

2.2 The Importance of Orthogonality


A function ψ(t) is orthogonal to another function φ(t) over an interval if

    ∫ ψ(t) · φ*(t) dt = 0

All the subcarriers are sine waves. The area under one period of a sine or cosine wave, or any other sinusoid with some phase angle, is zero. This can be shown diagrammatically.

Fig 2.1, A sine wave with areas marked in different colors

If we take a sinusoid of integer frequency m and multiply it with a sinusoid of integer frequency n, the two form an orthogonal pair if m and n are not equal, i.e.

    ∫ sin(mt) · sin(nt) dt = 0,   if m ≠ n

(the integral being taken over a full period). This idea is the key to understanding OFDM. Orthogonality allows simultaneous transmission on a lot of subcarriers in a tight frequency space without interference from each other.

2.3 Comparison of Various Multiplexing Techniques:

We can compare the different multiplexing techniques with OFDM in a diagrammatic way. Shown below are the most basic multiplexing techniques, FDM and TDM, along with combinations of the two as variously applied. From these, the comparison with the OFDM technique follows naturally: OFDM uses parallel data streams, carried on many narrowband, overlapping digital signals in parallel.

Frequency Division Multiple Access (FDMA), users differentiated by different color schemes

Time Division Multiple Access (TDMA), users differentiated by different color schemes


FDMA and TDMA, users differentiated by different color schemes

FDMA/Time Division Duplexing


TDMA/Frequency Division Duplexing

Orthogonal Frequency Division Multiple Access (OFDMA)

Fig 2.2, Showing different modulation schemes and multiple access schemes

Let us now talk in terms of bandwidth. OFDM uses orthogonal carriers whose spectra overlap, but owing to the orthogonality of the carriers, the receiver is able to distinguish between two adjacent, overlapping carriers. The time to send each symbol is increased (the symbol length is increased). So OFDM is not only helpful in saving the bandwidth of the whole system, but also tackles the problems related to fading and sampling errors (details will be provided in the coming text).

Conventional Frequency Division Multiplexing multicarrier modulation technique

Orthogonal Frequency Division Multiplexing multicarrier modulation technique

Fig 2.3, Frequency spectrum of conventional and orthogonal frequency division multiplexing

2.4 The Block Diagram of an OFDM System:

Fig 2.4, Blocks representing the various processing involved in an OFDM system

The basic block diagram of a typical OFDM transceiver, excluding many of the sub-blocks, is shown above. There are some specifications particular to the OFDM system. One can see that the usual methods of serial-to-parallel conversion and signal mapping onto the constellation diagram are applied in the OFDM system, but the use of the other blocks may not yet be clear.


The main problem with the reception of radio signals is fading caused by multipath propagation. There are also intersymbol interference (ISI), frequency-selective fading and intercarrier interference (ICI). Further constraints are limited bandwidth, low power consumption, network management and multi-cellular operation. The OFDM system employs methods to tackle all these issues and problems.


Chapter 3

THE OFDM TRANSMITTER


The aim of this chapter is to provide some theoretical background to the OFDM transmitter. We give the block diagrams of the transmitter system along with some theory behind the concepts. Pilot and guard interval insertion are also discussed here.


3.1 The block diagram of OFDM Transmitter system:


The block diagram of a simplex point-to-point OFDM transmitter is shown below.

Fig 3.1, the method of generating an OFDM symbol

3.2 Steps Involved in Generation of an OFDM symbol:


Although each block is explained individually in the coming text, it is useful to grasp the fundamentals first.

Assumptions: In this system we employ the following assumptions:
1. A cyclic prefix is used.
2. The impulse response of the channel is shorter than the cyclic prefix.
3. Transmitter and receiver are perfectly synchronized (the effects of synchronization impairments are discussed in the next chapter).
4. Channel noise is additive, white and Gaussian.
5. The fading is slow enough for the channel to be considered constant during one OFDM symbol interval.

3.2-1 Data Source: The data source generates the data that is to be sent on the link. The data may be analog; hence it is necessary to convert it into digital form, so the data source can be regarded as including an ADC. Its components can be as follows.
Analog source: if the data is in analog form, this component is responsible for acquiring it. It can be a transducer system such as a microphone.
Sampler: the analog data is fed into the sampler, which quantizes the data and produces a binary-code representation of every sample of the input.

3.2-2 Serial to Parallel: The incoming data is divided into parallel streams and then fed to the symbol mapping block.

3.2-3 Symbol Mapping (Modulation): The digital data from the source is fed to the mapper, which maps the incoming data onto the constellation diagram. The constellation can be taken according to any phase shift keying (PSK) or QAM signaling set (symbol mapping). There are N constellation points present.

3.2-4 IDFT: The input to the IDFT is N data constellation points, where N is the number of DFT points. The N output samples of the IDFT, being in the time domain (TD), form the baseband signal carrying the data symbols on a set of N orthogonal subcarriers (SCs). In a real system, however, not all of these N possible SCs can be used for data. Usually N is taken as a power of two, enabling the application of the highly efficient (inverse) FFT algorithms for modulation and demodulation. Both the real and imaginary components are generated.

3.2-5 Guard Interval (GI): The GI is simply a copy of the trailing data of an OFDM symbol, added to the start of the symbol. The GI thus serves as a cyclic prefix, whose length should exceed the maximum excess delay of the multipath propagation channel. Due to the cyclic prefix the transmitted signal becomes periodic (the starting and end points are equivalent), and the effect of the time-dispersive multipath channel becomes equivalent to a cyclic convolution once the GI is discarded at the receiver. The GI is added to both the real and the imaginary part of the symbols.

3.2-6 DAC: After GI insertion the data is passed to the DAC, which converts the digital signal into an analog signal to be transmitted on the link.

3.2-7 I/Q Modulation: The real and imaginary parts are used to modulate the I and Q channels. These signals are then summed to give the transmission signal, which is sent on the link.

3.3 The Working Principles of OFDM Transmitter:


The best way to understand an OFDM system is through an example. Let us consider an OFDM system consisting of four subcarriers, each modulated using the incoming data. Note that every carrier frequency is an integer multiple of the first, cn = n · c1, i.e.

    c2 = 2 · c1
    c3 = 3 · c1
    c4 = 4 · c1

In this way all the frequencies are orthogonal to each other, and they do not cause interference when added together. Suppose we have some serially generated bits with the sequence 1, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1.


These bits are then converted from serial to parallel.

Fig 3.2, Serial to Parallel

Using this conversion we have in effect created a table containing the bits in specified rows and columns, as shown:

    c1  c2  c3  c4
     1   1   0   0
     1   1   1   0
     1   0   0   0
     0   1   0   1
     0   1   1   1

Each column represents the bits that are to be carried by the respective subcarrier. Let us pick BPSK as our modulation scheme; it is a simple matter to extend the discussion to other schemes such as QPSK and QAM. We could even use TCM, which provides coding in addition to modulation.

Fig 3.3, BPSK Constellation

Consider each of the carriers separately.


Carrier 1, c1: By looking at the table, carrier c1 needs to transmit the bits 1, 1, 1, 0, 0, which are superimposed on the BPSK carrier.

(plot: BPSK-modulated waveform of carrier c1)

Carrier 2, c2: Carrier 2 is modulated by the bits 1, 1, 0, 1, 1.

(plot: BPSK-modulated waveform of carrier c2)

Carrier 3, c3: Carrier 3 is modulated by the bits 0, 1, 0, 0, 1.

(plot: BPSK-modulated waveform of carrier c3)


Carrier 4, c4: The bits that modulate carrier c4 are 0, 0, 0, 1, 1.

(plot: BPSK-modulated waveform of carrier c4)

After the generation of these modulated carriers, all of them are added together. The four carriers are added using the IFFT block, which combines the carrier signals in the time domain: the IFFT converts all the sinusoidal frequencies presented at its input into a composite time-domain signal.

(plot: the sum of the four modulated carriers)

An OFDM symbol


The Real and Imaginary parts are transmitted on the I and Q channels respectively. The technique is shown below.

Fig 3.4, Proposed OFDM Transmitter, showing I and Q channels

3.3-1 The Importance of Orthogonality: The orthogonal part of the OFDM name indicates that there is a precise mathematical relationship between the frequencies of the carriers in the system. In a normal FDM system the carriers are spaced apart in such a way that the signals can be received using conventional filters and demodulators. In such receivers, guard bands have to be introduced between the different carriers, and the introduction of these guard bands in the frequency domain results in a lowering of the spectral efficiency. It is possible, however, to arrange the carriers in an OFDM signal so that the sidebands of the individual carriers overlap and the signals can still be received without adjacent carrier interference. In order to do this the carriers must be mathematically orthogonal. The receiver acts as a bank of demodulators, translating each carrier down to DC, the resulting signal then being integrated over a symbol period to recover the raw data. If the other carriers all beat down to frequencies which, in the time domain, have a whole number of cycles in the symbol period τ, then the integration process results in zero contribution from all these carriers. Thus the carriers are linearly independent (i.e. orthogonal) if the carrier spacing is a multiple of 1/τ.

3.3-2 Mathematical Treatment: After the qualitative description of the system, it is valuable to discuss the mathematical definition of the modulation system. This allows us to see how the signal is generated and how the receiver must operate, and it gives us a tool to understand the effects of imperfections in the transmission channel. As noted above, OFDM transmits a large number of narrowband carriers, closely spaced in the frequency domain. In order to avoid a large number of modulators and filters at the transmitter, and complementary filters and demodulators at the receiver, it is desirable to be able to use modern digital signal processing techniques such as the fast Fourier transform (FFT).


Fig 3.5, The spectrum of an individual subcarrier of the OFDM signal

Fig 3.6, Symbolic picture of the individual subchannels for an OFDM system with N tones over a bandwidth W. There is no crosstalk as the maximum of a subcarrier occur at minimums of all other subcarriers

Fig 3.7, Example of Power spectral Density of a composite OFDM signal


Mathematically, each carrier can be described as a complex wave:

    s(t) = A(t) · exp[ j( ωt + φ(t) ) ]

OFDM consists of many such carriers, so the composite complex signal s_s(t) is represented by:

    s_s(t) = (1/N) · Σ_{n=0}^{N-1} A_n(t) · exp[ j( ω_n t + φ_n(t) ) ],   where ω_n = ω_0 + n·Δω

If we consider the waveforms of each component of the signal over one symbol period, the variables A_n(t) and φ_n(t) take on fixed values A_n and φ_n, which depend on the frequency of that particular carrier. If the signal is sampled with a sampling frequency of 1/T, the resulting signal is represented by:

    s_s(kT) = (1/N) · Σ_{n=0}^{N-1} A_n · exp[ j( (ω_0 + n·Δω)kT + φ_n ) ]

At this point we have restricted the time over which we analyze the signal to N samples. It is convenient to sample over the period of one data symbol, so we have the relationship:

    τ = N·T

If we set ω_0 = 0, we see that

    s_s(kT) = (1/N) · Σ_{n=0}^{N-1} A_n · exp(jφ_n) · exp[ j·n·Δω·kT ]

Comparing this with the equation of the IFFT,

    g(kT) = (1/N) · Σ_{n=0}^{N-1} G(n/NT) · exp[ j2πnk/N ]

we see that the two are equal if

    Δf = Δω/2π = 1/(N·T) = 1/τ

This is the same condition that was required for orthogonality (see The Importance of Orthogonality). Thus, one consequence of maintaining orthogonality is that the OFDM signal can be defined using Fourier transform procedures.


3.4 Fading:
If the path from transmitter to receiver has reflections or obstructions, we get what are known as fading effects. Basically, copies of the signal reach the receiver with a delay, in addition to the original signal, because multiple paths are available for the signal to travel and rebound along. These multiple paths also change the gain of the signal, and there can be many copies. The signal that reaches the receiver is thus the combination of the original signal and the time-delayed copies of it, and it is degraded by the phase shifts caused by the multipath environment.

Fig 3.8, The multipath demonstration

Delay spread is defined as the maximum excess time delay that occurs in the hostile environment; this spread changes from time to time. In the figure below we show the channel response in the ideal case and when there are deep fades. Because of these fades, some frequencies are not allowed to pass while others get through with little attenuation. This form of channel response is known as frequency-selective fading, as there are sharp fades and the response of the channel is not uniform across frequencies. Note that this response changes with the environment; it isn't necessary that the fades always be at the same frequencies.

Fig 3.9, Channel Response to a Band of Frequencies

An OFDM system offers an advantage over ordinary transmission techniques when the environment produces deep fades. In other transmission techniques, if the frequency of


transmission lies within a deep fade, the information is lost. In an OFDM system, by contrast, only a subcarrier is affected, and the information can be recovered using proper coding techniques.

3.5 Intersymbol Interference:


In telecommunication, the term intersymbol interference (ISI) has the following meanings:
1. In a digital transmission system, distortion of the received signal, manifested in the temporal spreading and consequent overlap of individual pulses to the degree that the receiver cannot reliably distinguish between changes of state, i.e., between individual signal elements. At a certain threshold, intersymbol interference will compromise the integrity of the received data.
2. The disturbance caused by extraneous energy from the signal in one or more keying intervals that interferes with the reception of the signal in another keying interval.

3.6 Intercarrier Interference:


OFDM requires very accurate frequency synchronisation between the receiver and the transmitter; any deviation and the sub-carriers are no longer orthogonal, causing intercarrier interference (ICI), i.e. cross-talk between the sub-carriers. Frequency offsets are typically caused by mismatched transmitter and receiver oscillators, or by Doppler shift due to movement. Whilst Doppler shift alone may be compensated for by the receiver, the situation is worsened when combined with multipath, as reflections will appear at various frequency offsets, which is much harder to correct. This effect typically worsens as speed increases, and is an important factor limiting the use of OFDM in high-speed vehicles.

3.7 Delay Spread and use of Cyclic Prefixing to Mitigate it:


The orthogonality of subchannels in OFDM can be maintained, and individual subchannels can be completely separated by the FFT at the receiver, when there is no intersymbol interference (ISI) or intercarrier interference (ICI) introduced by transmission channel distortion. In practice these conditions cannot be met. The multipath delay causes the signal quality to degrade: the original signal is delayed by an amount such that it interferes with the symbol arriving right after it. This is called intersymbol interference (ISI).

Fig 3.10, Delay Spread


Thus, this is a sort of noise. The delay spread only contaminates the start of the next OFDM symbol. One key principle of OFDM is that since low-symbol-rate modulation schemes (i.e. where the symbols are relatively long compared to the channel time characteristics) suffer less from intersymbol interference caused by multipath, it is advantageous to transmit a number of low-rate streams in parallel instead of a single high-rate stream. The high-rate scheme has the disadvantage, as seen in the following figures, that because of multipath fading the symbols interfere with each other and the receiver may not be able to recognize the arriving signal correctly. On the other hand, low-rate techniques like OFDM employ longer symbol durations to combat the effects of interference caused by multipath. As seen, the symbols are now more immune to the delay spread.

Single Carrier High Rate Scheme

Parallel bit Low Rate Scheme

Fig 3.11, Comparing the timings and delay of Single Carrier and Parallel Carriers Schemes

One may think that a useful approach would be to delay the sending of the next OFDM symbol by an amount equal to the delay spread plus some tolerance. This is the guard interval (GI), shown by the colored area in the figure.


Fig 3.12, A Proposed Scheme to render Insignificant the effect of Delay Spread

It is extremely difficult, if not impossible, to design hardware that can neglect the discontinuities in the analog signal shown above. What we need is the following: the start of the OFDM symbol should lie outside the delay-spread zone so that it is not corrupted; that is, the signal should start at a new boundary whose edge falls outside this zone. We cannot avoid this area of delay spread, and we cannot run the symbol longer, so we need to fill the indicated area with something. The answer is the introduction of cyclic prefixing. In this method, we slide the start of the OFDM symbol to the edge of the delay spread, and the GI is filled with a copy of the tail end of the OFDM symbol. The reason that the guard interval consists of a copy of the end of the OFDM symbol is that the receiver will correlate the incoming data for the detection of symbol start and of frequency, phase and sampling-rate offsets.

Fig 3.13, The Concept of Cyclic Prefixing

Now the total time of the symbol becomes

Ttotal = T + Tg

where Tg is the time allotted for the GI.

We do not add this cyclic prefix to each subcarrier separately; the cyclic prefix is added to the composite OFDM symbol. This prefix is anything from one-tenth to one-fourth of the original OFDM symbol. It is the responsibility of the receiver to remove the GI from the incoming analog signal. The guard interval also reduces the sensitivity to time synchronization problems.
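The effect of the cyclic prefix can be illustrated numerically. In the following NumPy sketch (the three-tap channel and the sizes N = 16, CP = 4 are toy assumptions), the GI is a copy of the tail of the IFFT output; as long as the channel delay spread is shorter than the GI, each subcarrier sees only a flat complex gain after the receiver's FFT:

```python
import numpy as np

N, CP = 16, 4                       # subcarriers and guard-interval length (assumed)
rng = np.random.default_rng(1)
X = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=N)  # frequency-domain symbols

x = np.fft.ifft(X)                  # time-domain OFDM symbol
tx = np.concatenate([x[-CP:], x])   # prepend a copy of the tail as the guard interval

h = np.array([1.0, 0.5, 0.25])      # toy multipath channel, shorter than the CP
rx = np.convolve(tx, h)             # linear convolution with the channel

rx_sym = rx[CP:CP + N]              # receiver: drop the GI, keep N samples
Y = np.fft.fft(rx_sym)              # back to the frequency domain
H = np.fft.fft(h, N)                # channel frequency response

# With the CP, linear convolution looks circular, so each subcarrier
# sees a flat (scalar) channel: Y = H * X exactly.
assert np.allclose(Y, H * X)
```

Equalization then reduces to one complex division per subcarrier, which is why the cyclic prefix makes OFDM channel equalization so simple.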

3.8 Modulation Scheme


The modulation scheme in an OFDM system can be selected based on the requirement of power or spectrum efficiency. The type of modulation can be specified by the complex number dn = an + jbn, where dn is the mapped data symbol. The values an and bn are selected from (±1, ±3) for 16QAM and from (±1) for QPSK. In general, the selection of the modulation scheme applied to each subchannel depends solely on the compromise between the data-rate requirement and transmission robustness. Another advantage of OFDM is that different modulation schemes can be used on different subchannels for layered services.
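A minimal sketch of such a 16QAM mapper and hard-decision demapper follows, with an and bn drawn from (±1, ±3). The Gray-coded bit-to-level assignment here is one common choice assumed for illustration; a given standard may use a different assignment.

```python
import numpy as np

# Assumed Gray-coded level map: 2 bits -> amplitude in {-3, -1, +1, +3}
LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
INV = {v: k for k, v in LEVELS.items()}

def map_16qam(bits):
    """Group bits in fours: first pair -> a_n (real part), second pair -> b_n (imag part)."""
    out = []
    for i in range(0, len(bits), 4):
        a = LEVELS[tuple(bits[i:i + 2])]
        b = LEVELS[tuple(bits[i + 2:i + 4])]
        out.append(a + 1j * b)
    return np.array(out)

def demap_16qam(symbols):
    """Slice each axis to the nearest valid level, then invert the bit map."""
    bits = []
    for s in symbols:
        a = min((-3, -1, 1, 3), key=lambda v: abs(s.real - v))
        b = min((-3, -1, 1, 3), key=lambda v: abs(s.imag - v))
        bits.extend(INV[a]); bits.extend(INV[b])
    return bits

bits = [1, 0, 0, 1, 0, 0, 1, 1]
syms = map_16qam(bits)                    # two complex symbols: 3 - 1j and -3 + 1j
assert demap_16qam(syms + 0.1) == bits    # hard decisions survive small noise
```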

3.9 Interleaving:
Interleaving aims to distribute transmitted bits in time, in frequency, or in both, to achieve a desirable bit-error distribution after demodulation. What kind of interleaving pattern is needed depends on the channel characteristics. Communication channels are divided into fast and slow fading channels: a channel is fast fading if the impulse response changes approximately at the symbol rate of the communication system, whereas a slow fading channel stays unchanged for several symbols.

The reason interleaving is used in OFDM is to spread out errors in the bit-stream that is presented to the error-correction decoder; when such decoders are presented with a high concentration of errors, the decoder is unable to correct all the bit errors, and a burst of uncorrected errors occurs. Interleaving necessarily introduces delay into the system because bits are not received in the same order as the information source transmits them.

Frequency (subcarrier) interleaving increases resistance to frequency-selective channel conditions such as fading. For example, when a part of the channel bandwidth is faded, frequency interleaving ensures that the bit errors that would result from the subcarriers in the faded part of the bandwidth are spread out in the bit-stream rather than being concentrated. Similarly, time interleaving ensures that bits originally close together in the bit-stream are transmitted far apart in time, thus mitigating severe fading as would happen when travelling at high speed. However, time interleaving is of little benefit in slowly fading channels, such as for stationary reception, and frequency interleaving offers little to no benefit for narrowband channels that suffer from flat fading (where the whole channel bandwidth is faded at the same time).

NWFP UET, Final Year Project Report, 2007

Page 34

Block interleaving operates on one block of bits at a time. The number of bits in the block is called the interleaving depth. Deinterleaving is the opposite operation of interleaving; that is, the bits are put back into the original order.
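A block interleaver can be sketched as follows (plain Python; the row-write/column-read convention is one common choice assumed here, and the block length is taken to be a multiple of the depth):

```python
def block_interleave(bits, depth):
    """Write bits row-wise into a matrix with `depth` columns, read column-wise.
    Assumes len(bits) is a multiple of depth."""
    rows = [bits[i:i + depth] for i in range(0, len(bits), depth)]
    return [row[c] for c in range(depth) for row in rows]

def block_deinterleave(bits, depth):
    """Inverse operation: put the bits back into their original order."""
    rows = len(bits) // depth
    return [bits[r + c * rows] for r in range(rows) for c in range(depth)]

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
tangled = block_interleave(data, depth=4)
assert block_deinterleave(tangled, depth=4) == data

# A burst of consecutive channel errors in `tangled` lands on different rows
# after deinterleaving, i.e. the errors are spread out for the decoder.
```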

3.10 Channel Estimation:


Channel estimation is the task of estimating the frequency response of the radio channel through which the transmitted signal travels before reaching the receiver antenna. The channel can be estimated using pilots. Pilots are known symbols, or data, upon which the transmitter and receiver have both agreed. For pilot-symbol-aided channel estimation techniques, first the channel coefficients belonging to the pilot subcarriers are estimated; then these estimates are interpolated over the entire frequency-time grid. The pilots can be sent using two techniques:
1. Regular pilot subcarriers, similar to the WLAN pilot subcarriers; known data is constantly transmitted on them.
2. So-called scattered pilots; known data is transmitted only intermittently on a given subcarrier.
Channel estimation is then performed by interpolating the frequency response of the subcarriers that carry no training information from the known subcarriers. Pilot-symbol-assisted modulation (PSAM) obtains a channel estimate using pilots that are interspersed with the transmitted data symbols. However, it causes data-rate reduction or bandwidth expansion; therefore, spectrally efficient channel estimation techniques should be considered.
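The comb-type arrangement can be sketched numerically. In this NumPy illustration (the pilot spacing, pilot values and toy channel are all assumptions), the channel is estimated at the pilot subcarriers by least squares and then linearly interpolated across the remaining subcarriers:

```python
import numpy as np

N = 16
pilot_idx = np.arange(0, N, 4)           # comb-type pilots on every 4th subcarrier (assumed)
pilots = np.ones(len(pilot_idx))         # agreed pilot value: 1 + 0j (assumed)

h = np.array([1.0, 0.4, 0.2])            # toy frequency-selective channel
H = np.fft.fft(h, N)                     # its true frequency response

X = np.ones(N, dtype=complex)            # transmit all-ones for clarity
X[pilot_idx] = pilots
Y = H * X                                # received symbol (noiseless per-subcarrier model)

# Least-squares estimate at the pilots, then linear interpolation in between
H_pilot = Y[pilot_idx] / pilots
H_est = np.interp(np.arange(N), pilot_idx, H_pilot.real) + \
        1j * np.interp(np.arange(N), pilot_idx, H_pilot.imag)

# Exact at the pilot positions, approximate between them
assert np.allclose(H_est[pilot_idx], H[pilot_idx])
```

Denser pilot spacing improves the interpolation but increases overhead, which is exactly the data-rate trade-off noted above.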

Fig 3.19, Block type Pilot Arrangement

NWFP UET, Final Year Project Report, 2007

Page 35

Fig 3.20, Comb type Pilot Arrangement

3.11 Adaptive Transmission:


The resilience to severe channel conditions can be further enhanced if information about the channel is sent over a return-channel. Based on this feedback information, adaptive modulation, channel coding and power allocation may be applied across all sub-carriers, or individually to each sub-carrier. In the latter case, if a particular range of frequencies suffers from interference or attenuation, the carriers within that range can be disabled or made to run slower by applying more robust modulation or error coding to those subcarriers.
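A toy per-subcarrier bit-loading rule might look like the following; the SNR thresholds and the set of modulation orders are illustrative assumptions, not taken from any standard:

```python
# Illustrative only: threshold values below are assumptions, not from a standard.
def bits_per_subcarrier(snr_db):
    """Pick a modulation order for one subcarrier from its reported SNR."""
    if snr_db < 6:
        return 0          # disable the subcarrier entirely
    elif snr_db < 12:
        return 1          # BPSK
    elif snr_db < 18:
        return 2          # QPSK
    else:
        return 4          # 16QAM

feedback = [22.0, 15.5, 4.0, 9.1]        # per-subcarrier SNRs from the return channel
plan = [bits_per_subcarrier(s) for s in feedback]
assert plan == [4, 2, 0, 1]              # weakest subcarrier is switched off
```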

3.12 Design of the OFDM System


The main reason that the OFDM technique has taken a long time to come to prominence has been practical: it has been difficult to generate such a signal, and even harder to receive and demodulate it. The proposal of a realistic OFDM-based communications system was one of the goals of this project. Therefore, we elaborate here on some hardware-related design considerations, which are often neglected in theoretical studies. Elements of the transmission chain that have an impact on the design of the transmitted OFDM signal include the following:
1. The time-dispersive nature of the mobile channel. The transmission scheme must be able to cope with this.
2. The bandwidth limitation of the channel. The signal should occupy as little bandwidth as possible and introduce a minimum amount of interference to systems on adjacent channels.
3. The transfer function of the transmitter/receiver hardware. This transfer function reduces the usable bandwidth compared to the theoretical one given by the sampling theorem; that is, some oversampling is required.


Fig 3.21, Showing the Effect of attenuation at the far end frequencies due to the transfer function of the Transmitter or Receiver Hardware

4. Phase jitter and frequency offsets of the up- and down-converters, and Doppler spreading of the channel.

3.13 Advantages of an OFDM Transmitter over other Schemes:


The OFDM transmission scheme has the following key advantages:
- It makes efficient use of the spectrum by allowing the subcarrier spectra to overlap.
- By dividing the channel into narrowband flat-fading subchannels, OFDM is more resistant to frequency-selective fading than single-carrier systems are.
- It eliminates ISI and IFI through the use of a cyclic prefix.
- Using adequate channel coding and interleaving, one can recover symbols lost due to the frequency selectivity of the channel.
- Channel equalization becomes simpler than using adaptive equalization techniques with single-carrier systems.
- It is possible to use maximum-likelihood decoding with reasonable complexity.
- OFDM is computationally efficient, using FFT techniques to implement the modulation and demodulation functions.
- OFDM is a spectrally efficient modulation technique.
- It handles frequency-selective channels well when combined with error-correction coding.

3.14 Drawbacks of an OFDM system:


In terms of drawbacks, OFDM has the following characteristics:
- The OFDM signal has a noise-like amplitude with a very large dynamic range; therefore it requires RF power amplifiers able to handle a high peak-to-average power ratio.
- It is more sensitive to carrier frequency offset and drift than single-carrier systems are, due to leakage of the DFT.


Chapter 4

THE OFDM RECEIVER


In this chapter, we discuss the theory behind the working of an OFDM receiver. Various signal detection techniques are discussed along with their quantitative treatment. The effects of frequency shifts and synchronization impairments are also analyzed.


4.1 Block Diagram of an OFDM Receiver System:


The following figure shows the simplified block diagram of an OFDM receiver system.

Fig 4.1, Blocks representing the Processes involved in treating an OFDM symbol at the Receiver

4.2 Steps Involved in Recovery of Data from the OFDM Symbol:


Here, we discuss the basic principles encountered while recovery of information is carried out in the receiver system.

4.2-1 A/D Converter:
An A/D converter converts the analog input to its corresponding digital output. There are many techniques of analog-to-digital conversion, depending on the hardware employed. The output is on a single wire and is fed to the signal detection and decision block.

4.2-2 Signal Detection:
This block estimates the presence of an OFDM signal, which triggers the rest of the circuitry. Signal detection is an energy-based scheme that constantly computes the energy of its input from the A/D block. If this energy is beyond a certain threshold, the arrival of a signal is flagged and the receiver gets ready to perform the other steps necessary for the regeneration of data; otherwise, the receiver goes into standby or sleep mode. The other function performed by this block is the time synchronization of the incoming OFDM signal: it indicates the start of the OFDM symbol, so that the blocks following it capture the correct data.

4.2-3 GI Removal:
The GI is inserted at the transmitter, and this redundant data must be removed before processing the signal. Although the GI can be termed redundant, it is essential for an OFDM system to combat the errors arising from intersymbol interference and from the delay spread produced by the multipath environment.

4.2-4 DFT:
This block performs the discrete Fourier transform on the incoming time-domain signal. Digital, sampled data from the A/D is driven into the DFT block, which produces its frequency components at the output. There are N outputs, equal to the N number of DFT points. This block extracts the SCs, and each SC is given a separate line on the output of the DFT block. Specifically, the DFT block is used to demodulate the data constellation on the orthogonal SCs, just as the IDFT block is used to modulate the data. Usually, N is taken as an integer power of two, i.e. N = 2^m where m is an integer, enabling the application of the highly efficient FFT algorithms for modulation and demodulation. Both the real and imaginary components are generated. This block also separates the pilot carriers that are essential for error correction and for frequency synchronization of the OFDM SCs.

4.2-5 Symbol Demapping (detection):
The output of the FFT block is fed to the symbol demapping circuit. This block is the inverse of the symbol-mapping block in the transmitter. At the transmitter, the symbol mapper plots the incoming bits on the constellation diagram and generates the complex symbols at the output. Here at the receiver, the demapper takes the complex data from the FFT block and generates the output bits that represent the points on the constellation diagram corresponding to the complex input. The rotation of the symbols, arising from ISI, ICI and the AWGN channel, is corrected using the pilot-data estimates.

4.2-6 Parallel to Serial:
This block corresponds to the serial-to-parallel block in the transmitter. The input to this block is a parallel data set from the demapper, which is converted to serial form. This generates the output of the receiver, which is the data that was intended to be transmitted from the transmitter.

4.3 The Working Principles of an OFDM Receiver:


We discuss the basic working algorithm of an OFDM receiver with the help of a simple example. Here, the same assumptions apply as were discussed in Steps Involved in the Generation of the OFDM Symbol in the previous chapter. In a classical parallel data system, the total signal frequency band is divided into N non-overlapping frequency subchannels. Each subchannel is modulated with a separate symbol and, then, the N subchannels are frequency-multiplexed. Two schemes can be used to separate the subbands:


1. Use filters to completely separate the subbands. This method was borrowed from conventional FDM technology.
2. Use the discrete Fourier transform (DFT) to modulate and demodulate the parallel data. The individual spectra are now sinc functions and are not band-limited. The FDM is achieved not by bandpass filtering but by baseband processing. Using this method, both transmitter and receiver can be implemented with efficient FFT techniques that reduce the number of operations from N² in the DFT down to N·log N.

Let us consider an OFDM system with four SCs, using BPSK as the modulation scheme. Suppose an OFDM symbol arrives at the input of the A/D. After detection, time synchronization and GI removal, it appears as shown below.
(Figure: the received OFDM symbol waveform, amplitude against time over the interval 0 to 5.)
(The A/D converts this signal into its digital representation, and then the signal detector detects it.) This signal is then passed on to the FFT block.

As we have assumed, the transmitter and receiver are in perfect synchronization, so the sampling frequency of both is the same. The FFT block collects data for 4 samples and then processes that data. The FFT block generates the harmonic components present inside this digital signal, and these are sent to the output lines of the FFT. Let us separately discuss each collection interval of the FFT block and look at the outputs after each of these time intervals.


From Time 0 to 1:
(Figure: the OFDM signal during the 1st time unit, and the individual outputs of the FFT block.)


From Time 1 to 2:
(Figure: the OFDM signal during the 2nd time unit, and the individual outputs of the FFT block.)


From Time 2 to 3:
(Figure: the OFDM signal during the 3rd time unit, and the individual outputs of the FFT block.)


From Time 3 to 4:
(Figure: the OFDM signal during the 4th time unit, and the individual outputs of the FFT block.)


From Time 4 to 5:
(Figure: the OFDM signal during the 5th time unit, and the individual outputs of the FFT block.)

When all the individual outputs are joined together in the time domain, they give rise to the signals shown in the following figures.


(Figure: the 1st, 2nd, 3rd and 4th FFT outputs joined over the full interval from 0 to 5.)


These outputs are continuously fed to the demapper circuit, which demaps them and generates the respective output bits using the constellation diagram of BPSK.

Fig 4.2, BPSK Constellation Diagram

Fig 4.3, Illustration of Parallel To Serial Converter

The demapper's output can be tabulated as follows.

Time    Output 1    Output 2    Output 3    Output 4
0-1        1           1           0           0
1-2        1           1           1           0
2-3        1           0           0           0
3-4        0           1           0           1
4-5        0           1           1           1

The Parallel-to-Serial block converts this parallel stream of data into a serial stream, and the output is the serial data that was fed to the input of the transmitter.
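The whole worked example can be reproduced in a few lines. The NumPy sketch below pushes the same twenty bits as in the demapper table through a BPSK map, a 4-point IFFT per OFDM symbol, the receiver FFT and the demapper, assuming an ideal, noiseless channel:

```python
import numpy as np

N = 4  # four subcarriers, BPSK on each, as in the example
bits = [1, 1, 0, 0,   # time 0-1
        1, 1, 1, 0,   # time 1-2
        1, 0, 0, 0,   # time 2-3
        0, 1, 0, 1,   # time 3-4
        0, 1, 1, 1]   # time 4-5

rx_bits = []
for i in range(0, len(bits), N):
    X = np.array([1.0 if b else -1.0 for b in bits[i:i + N]])  # BPSK map: 0 -> -1, 1 -> +1
    x = np.fft.ifft(X)        # transmitter: one OFDM symbol (ideal channel assumed)
    Y = np.fft.fft(x)         # receiver: FFT recovers the subcarrier values
    rx_bits += [1 if y.real > 0 else 0 for y in Y]  # demap, then parallel-to-serial

assert rx_bits == bits        # the transmitted serial bit-stream is recovered
```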

4.4 Synchronization:
Synchronization is an essential task for any digital communication system. Without accurate synchronization algorithms, it is not possible to reliably receive the transmitted data. From the digital baseband algorithm design engineer's perspective, synchronization algorithms are the major design problem that has to be solved to build a successful product. OFDM is used both for broadcast-type systems and for packet-switched networks, like WLANs. These two systems require somewhat different approaches to the synchronization problem. Broadcast systems transmit data continuously, so a typical receiver, like


European Digital Audio Broadcasting (DAB) or Digital Video Broadcasting (DVB) system receivers, can initially spend a relatively long time acquiring the signal and then switch to tracking mode. On the other hand, WLAN systems typically have to use so-called "single-shot" synchronization; that is, the synchronization has to be acquired during a very short time after the start of the packet. This requirement comes from the packet-switched nature of WLAN systems and also from the high data rates used. To achieve good system throughput, it is mandatory to keep the receiver training-information overhead to a minimum. The OFDM signal waveform makes most of the synchronization algorithms designed for single-carrier systems unusable, so the algorithm design problem has to be approached from the OFDM perspective. This distinction is especially visible in the difference in sensitivity to various synchronization errors between single-carrier and OFDM systems. The frequency-domain nature of OFDM also allows the effect of several synchronization errors to be explained with the aid of the properties of the Discrete Fourier Transform (DFT). Another main distinction from single-carrier systems is that many of the OFDM synchronization functions can be performed either in the time domain or in the frequency domain; this flexibility is not available in single-carrier systems. The choice of how to perform the synchronization algorithms usually trades higher performance against reduced computational complexity.

4.5 Signal Detection:


By signal detection, we mean finding the approximate start of the signal; this is the first of the synchronization algorithms. Afterwards, it depends on the quality of the signal whether the receiver can detect the true values sent from the transmitter. The test may take the form that if a test parameter exceeds a certain threshold Th, the signal is declared present, and otherwise absent. That is, we can write the test as:

H < Th : No Data
H > Th : Data Present

4.5-1 Energy-based Detection Schemes:
One of the simplest algorithms for detecting a signal is the calculation of the energy present inside a band of symbols. When no signal is present, the energy at the receiver is just that of the noise, which is much smaller than the energy when a signal exists. The receiver can then decide whether a signal exists based on the amount of energy calculated: if the energy exceeds a threshold, the signal detector announces the arrival of a signal to the rest of the receiver circuitry.

Sliding Window:
When there is no signal, the receiver input is noise, given by

rn = wn

NWFP UET, Final Year Project Report, 2007

Page 49

When the signal reaches the receiver, the receiver input becomes

rn = wn + sn

Now the decision is based on a window of length L, over which we calculate the energy of the signal by

mn = SUM (k = 0 to L-1) | r(n-k) |²

where mn is the decision variable. Calculation of mn can be simplified by noting that mn is a moving sum; therefore, we can also calculate mn recursively.

Fig 4.4, Single Sliding-window Scheme

The L in the sliding window is application-dependent; the designer also keeps in mind the effect of the environment and the various attributes of the OFDM signal. This window size L can be made equal to the size of the GI. Shown below is a MATLAB simulation of arbitrary data and its energy calculation with a window of size 4.
(Figure: the input sequence x[n] and its sliding-window energy, plotted against the sample index.)

As seen, the start of the signal is indicated by a sharp increase in the energy of the incoming signal. The receiver can use this information to detect the start of an OFDM signal at its input.
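The single sliding-window test can be sketched as follows (NumPy; the constant-envelope toy "packet" and the fixed threshold of 1.0 are illustrative assumptions chosen so the toy noise floor stays well below it):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 4                                     # window length (e.g. the GI length)
noise = 0.1 * rng.standard_normal(25)     # noise-only region, low power
signal = np.full(25, 2.0)                 # constant-envelope toy packet, high power
r = np.concatenate([noise, signal])       # packet starts at sample 25

# m_n: moving sum of |r|^2 over L samples (computed recursively in practice)
m = np.convolve(np.abs(r) ** 2, np.ones(L), mode="valid")

threshold = 1.0                           # assumed threshold, far above the noise floor
detect = int(np.argmax(m > threshold))    # first window whose energy exceeds it

# The first window that contains any packet sample fires the detector.
assert detect == 25 - L + 1
```

As the surrounding text notes, a fixed threshold like this is exactly the weakness of the single-window scheme: it must be tuned to an unknown and varying noise level.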

NWFP UET, Final Year Project Report, 2007

Page 50

This simple method suffers from a significant drawback: the value of the threshold depends on the received signal energy. When the receiver is searching for an incoming packet, the received signal consists only of noise. The level of the noise power is generally unknown and can change when the receiver adjusts its Radio Frequency (RF) amplifier settings or if unwanted interferers go on and off in the same band as the desired system. When a wanted packet is incoming, its received signal strength depends on the power setting of the transmitter and on the total path loss from the transmitter to the receiver.

Double Sliding Window:
This method incorporates two adjacent sliding windows of received energy. The working principle of this type of algorithm is to make the decision variable mn the ratio of the total energies contained in the two sliding windows.

Fig 4.5, Double Sliding-window Scheme

Windows 1 and 2 are considered stationary relative to the incoming data, which slides over them to the right. The response is flat when only noise is received, because both windows then contain ideally the same amount of energy. Once the signal slides into window 1, the variable mn increases, as the energy contained in window 1 is now much greater than that of window 2. mn is greatest when all of window 1 contains signal and window 2 contains only noise. Once the signal slides past window 1 into window 2, the variable mn starts to decrease, and the response becomes flat again when signal is present in both sliding windows. The value of the energy in window 1 is given by

an = SUM (m = 0 to L-1) | r(n+m) |²

and the value of the energy in window 2 is given by

bn = SUM (l = 1 to L) | r(n-l) |²


Then the value of the parameter mn is given by

mn = an / bn

which shows that mn is an energy ratio. Shown in the following figures is the MATLAB simulation result of a sequence x[n] with the window size of L = 8.
(Figure: the input sequence x[n] and the energy ratio mn against the sample index; the ratio peaks at the packet boundary.)

The figure clearly shows that the value of mn does not depend on the total received power. After the peak, the response levels off to the same value as before the peak, although the received energy level is much higher. An additional benefit of this approach is that at the peak point of mn, the value of an contains the sum of the signal energy S and the noise energy N, while bn is equal to the noise energy N alone; the value of mn at the peak, mn = (S + N)/N = SNR + 1, can therefore be used to estimate the received SNR.
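The double-window ratio can likewise be sketched. In this deterministic toy example (the constant noise and signal levels are assumptions chosen so the result is exact), the ratio peaks precisely at the packet boundary and its height reflects the signal-to-noise ratio:

```python
import numpy as np

L = 8
r = np.concatenate([0.1 * np.ones(60),    # noise-only region  (power 0.01)
                    1.0 * np.ones(60)])   # packet region      (power 1.0)

energy = r ** 2
w = np.convolve(energy, np.ones(L), mode="valid")  # w[i] = energy in r[i .. i+L-1]

# m_n = (energy in the window just ahead) / (energy in the window just behind)
m = w[L:] / w[:-L]

peak = int(np.argmax(m))
assert peak + L == 60             # the peak marks the packet boundary exactly
assert abs(m[peak] - 100.0) < 1   # peak height ~ signal power / noise power
```

Unlike the single-window test, the decision variable here is a ratio, so the same threshold works regardless of the absolute received power.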


4.6 Symbol Timing:


Symbol timing refers to the task of finding the precise moment of when individual OFDM symbols start and end. The symbol timing result defines the DFT window; i.e., the set of samples used to calculate DFT of each received OFDM symbol. The DFT result is then used to demodulate the subcarriers of the symbol.

4.7 Time Synchronization:


An obvious way to obtain time synchronization is to introduce a kind of time stamp into the seemingly irregular and noise-like OFDM time signal. The EU147 DAB system, which can be regarded as the pioneer OFDM system, uses quite a simple method that even allows traditional analog techniques to be used for coarse time synchronization. At the beginning of each transmission frame, the signal is set to zero for the duration of (approximately) one OFDM symbol. This null symbol can be detected by a classical analog envelope detector (which may also be digitally realized) and tells the receiver where the frame and the first OFDM symbol begin. In the wireless LAN systems IEEE 802.11a and HIPERLAN/2, a reference OFDM symbol of length 2TS is used for time synchronization and for the estimation of the channel coefficients that are needed for coherent demodulation; the OFDM subcarriers are modulated with known data. Another smart method of finding time synchronization without any time stamp is based on the guard interval. We note that an OFDM signal with a guard interval has a regular structure, because the cyclically extended part of the signal occurs twice in every OFDM symbol of duration TS. This means that the OFDM signal s[n] has the property

s[n] = s[n + N]

We may thus correlate s[n] with s[n + N] using a sliding window of length equal to the length of the guard interval. This procedure is followed for one OFDM symbol length after the detection of the signal.

Fig 4.6, Demonstrating the effect of GI and Correlation on detection of the symbol starting point

The maxima generated using these correlation windows are stored and, at the end, the index of the sliding window for which the highest peak occurred denotes the start of the OFDM symbol.
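The GI-based correlation described above can be sketched as follows (NumPy; the symbol size N = 32, GI length G = 8 and the noise level are illustrative assumptions):

```python
import numpy as np

N, G = 32, 8                     # FFT size and guard-interval length (assumed)
rng = np.random.default_rng(5)

X = rng.choice(np.array([1+1j, -1+1j, 1-1j, -1-1j]), size=N)
x = np.fft.ifft(X)
symbol = np.concatenate([x[-G:], x])     # GI + body, so symbol[0:G] == symbol[N:N+G]

# Some leading noise, then the OFDM symbol starting at sample 20
r = np.concatenate([0.05 * (rng.standard_normal(20) + 1j * rng.standard_normal(20)),
                    symbol])

# Correlate each candidate window of length G with the window N samples later
corr = np.array([np.abs(np.sum(r[n:n + G] * np.conj(r[n + N:n + N + G])))
                 for n in range(len(r) - N - G + 1)])

start = int(np.argmax(corr))     # the GI copy makes the correlation peak here
assert abs(start - 20) <= 1      # within a sample of the true symbol start
```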


However, owing to the statistical nature of the OFDM symbol, we may not get a clear peak from the GI alone. In that case, to obtain time synchronization, we use the pilot signals. The pilots are sent such that the first pilot contains some specific frequencies, and the next pilot has those frequencies in 180° phase opposition to the first. Then the peak will only occur for the correlation window whose starting index coincides with the GI, and hence the start of the frame is at hand. This type of synchronization is useful for broadcast OFDM signals, or OFDM signals that are continuous.

4.8 Sampling Clock Synchronization:


The oscillators generating the clock pulses for the sampling instants of the A/D and D/A are never synchronized and never have exactly the same period. This means that the sampling instants at the receiver shift slowly relative to the transmitter. Two main impairments result from this error:
1. The slow shift of the symbol timing point causes the SCs to rotate. This effect occurs in every type of digital transmission.
2. The ICI generated by the slightly incorrect sampling instants causes the SCs to lose orthogonality, with a resulting loss of SNR. This effect is specific to OFDM systems.

Fig 4.7, The Loss of Synchronization between Transmitter and Receiver Sampling Clock

4.8-1 Estimation: The estimation of the sampling frequency error can be done using the pilot SCs. Pilot symbols are known data that are transmitted with the original signal to enable the receiver to perform synchronization operations and estimate the channel characteristics as well. The rotation of the pilot SCs can then be used to correct the rotation encountered by the other SCs of the system; the frequency offset is estimated using the known relationship between the phase rotation caused by the offset and the pilot SC index. If T is the sampling period of the transmitter and T' is the sampling period of the receiver, then the normalized sampling time offset is given by

Δt = (T' − T) / T


Because of this offset, the symbols are rotated and the information is corrupted. The rotation caused by the sampling frequency offset is given by

φ(l, k) = 2π · k · Δt · (l · Ts / Tu)

where k is the SC index, l is the OFDM symbol index, Ts is the symbol time and Tu is the useful time. Hence the rotation is largest for the outermost SCs and also increases with the OFDM symbol index. We use OFDM symbols to estimate the sampling frequency offset. If the symbol contains three frequencies f1, f2 and f3, we remove the GI, take the FFT and then compute the following parameters:

th1 = arg( pilot2(f1) · pilot1*(f1) )
th2 = arg( pilot2(f2) · pilot1*(f2) )
th3 = arg( pilot2(f3) · pilot1*(f3) )

and we have

Δt = (Tu / (4·Ts)) · ( (th3 − th1)/(f3 − f1) + (th2 − th1)/(f2 − f1) )

where thi is the phase angle measured at the frequency fi (the asterisk denotes complex conjugation).

4.8-2 Correction: To correct the sampling frequency offset, we need to resample the input signal. The ratio of the new sampling frequency to the old one should be such that

fnew / fs = (fs + Δt·fs) / fs

i.e. the new sampling frequency should be

fnew = fs + Δt·fs

4.10 Frequency and Phase Synchronization:


If we consider the drawbacks of an OFDM system, the main drawback is its sensitivity to carrier frequency offset. The degradation is caused either by a change in the carrier amplitude or by the ICI from neighboring SCs. If the desired carrier is no longer sampled at the peak of the sinc function of the DFT, the amplitude of the SC changes. Adjacent carriers cause interference because they may no longer be sampled at the zero crossings of the sinc function of the SC under discussion. There is always some residual error in the frequency estimation process. The main problem with a frequency error is the carrier rotation on the constellation plot: the constellation points may rotate over the decision boundaries, making it impossible to correctly demodulate the incoming data.


Like frequency error estimation, there are both data-aided and non-data-aided techniques for carrier phase tracking. The data-aided techniques utilize the pilot symbols sent with the data.

Fig 4.10, The Carrier Frequency Offset

Various approaches exist for the correction of the frequency synchronization.

4.10-1 Estimation: The estimation of the frequency and phase offset can be done using GI correlation. An OFDM data symbol has the GI at its start, and this GI is correlated with the trailing data of the symbol. The maximum value obtained by the correlation is saved, and since this value is complex, the angle is deduced from it; say it is Φ. This is the phase offset. The frequency offset Δf can be calculated by the following formula:

Δf = Φ / (2π)

4.10-2 Correction: For correction purposes, two phasors are made, called the phase_phasor and the freq_phasor. For every input complex value at index n of the OFDM data symbol of N total indexes,

phase_phasor[n] = exp(−j·2π·Φ·n/N)
freq_phasor[n] = exp(−j·2π·Δf·n/N)

The corrected data is obtained by the following execution:

Data_corrected[n] = ( ifft( fft(data[n]) · phase_phasor ) ) · freq_phasor


Part 3, Chapter 5

The OFDM Transmitter in C++


This chapter describes in detail the working principles behind the transmitter of an OFDM system implemented in C++. The theory was presented in the preceding chapters; here, that theory is applied in the context of C++.


5.1 Block Diagram of the Transmitter System:


This block diagram resembles that of an OFDM transmitter system as shown previously, but it is redrawn with small changes to match the steps involved in the implementation of the transmitter in our project.

Fig 5.1, The Implementation of the Transmitter in C++

A close comparison of this system with the actual OFDM transmitter is beneficial. The 1st sound card of the PC replaces the data source. This sound card is responsible for taking in the analog data and converting it into a digital waveform using PCM. The output of the sound card is a serial stream of data that is buffered and then forwarded. Although not shown previously in the OFDM transmitter system, the compressor is an integral part of our project; it compresses the incoming audio stream to a reasonable/tolerable limit. The compressed data is mapped to the constellation diagram. Here, we can use constellations such as QPSK, QAM, etc. The pilot data block inserts known symbols into the IFFT block so that they can be transmitted and used by the receiver for correction and detection purposes. Once the complex data from the constellation diagram is available, the IDFT of the data is taken using the IFFT. The output of the IFFT is in the time domain and is a serial stream of data. Cyclic prefixing is introduced in the Guard Interval Insertion block. The 2nd sound card of the PC converts the digital signal from the GI Insertion block into an analog signal; hence, it replaces the D/A in the previous scheme.

5.2 The 1st Sound Card:


This sound card is necessary to obtain the audio data from the user. It takes the audio data, converts it into digital form, and then buffers the output data.

5.2-1 Technical Data: The sound card will receive the input at the rate of 8000 samples per second. This data will then be pulse coded and each sample will be represented by 2 bytes, i.e. 16 bits. As the system is implemented in real time, we need to forward some data from the sound card after a specific time, say 20 msec; we cannot just capture the speech for 10 sec and then forward it. In other words, the sound card will capture speech for 20 msec and will forward it to the compressor, which will further process the speech. Again it will start


capturing speech, and the cycle will carry on. The gap due to the delay of sending the sound data and processing it after every 20 msec is tolerable and imperceptible to the human ear. Now, if we are sampling at 8000 samples per second with 2 bytes representing each sample, then the total number of bytes recorded after 20 msec will be:

8000 samples/sec * 2 bytes/sample * 0.02 sec = 320 bytes

This suggests a buffer size of 320 bytes for the 20 msec spurts of sound data.

5.2-2 Programming Notes: To open the sound card for recording, we need the following structures to be filled with appropriate data.

WAVEFORMATEX: This structure contains the information about the format of the output of the sound card, i.e. it should be filled so as to give us the amount of output data, the number of output channels, etc. The format tag of the output is WAVE_FORMAT_PCM, which indicates that the output is obtained via PCM. The sampling frequency, given by the nSamplesPerSec member, is chosen to be 8000. nBitsPerSample is taken as 16, i.e. we have taken 2 bytes per sample of the PCM data. So the average number of bytes per second becomes

8000 samples/sec * 2 bytes/sample = 16000 bytes/sec

HWAVEIN: This handle contains the information about the device (sound card) itself. Using this handle, we can control the sound card as we wish.

WAVEHDR: This structure defines the header used to identify a waveform audio buffer. It contains all the information needed by the program to know the whereabouts of the buffer, its length, and some other information such as various flags. Its member lpData is a long pointer to the data buffer where the pulse-coded data will be stored. dwBufferLength denotes the length of that buffer, and dwBytesRecorded gives the number of bytes recorded at a given time once the audio input device has started receiving input data. The various flags supply information about the buffer.
WAVEINCAPS: This structure stores information about the sound device installed in the computer, for example the device name.

Once all these structures are filled with appropriate data, we need to open the device for recording. To open the device, we use the function waveInOpen( ).

waveInOpen( ): This function opens the device for recording. Its first parameter is a pointer to the handle of the device, given by HWAVEIN. The second parameter is the ID of the device; there may be more than one sound card installed on the PC, so the DeviceID is used to select the intended device. The third parameter is a pointer to the structure that identifies the desired format for recording waveform-audio data, given by a pointer to a WAVEFORMATEX structure. The fourth argument is a pointer to a CALLBACK function, or an event handle, a window handle, or a thread identifier; we use a CALLBACK function in our project. The fifth argument is the user-instance data


passed to the callback mechanism. The sixth and final argument is the flag for opening the device. We used the flag CALLBACK_FUNCTION to indicate that the fourth parameter is a CALLBACK procedure address. The return value of the function is MMSYSERR_NOERROR if the device is successfully opened, or an error code otherwise.

Result = waveInOpen(&InHandle, InDeviceID, &InWaveFormat, (DWORD)waveInProc, (DWORD)this, CALLBACK_FUNCTION);
if(Result == MMSYSERR_NOERROR)
    // device opened, can start recording
else
    // add some error handling code here

After the device is opened, we can start recording. To start the recording, we call the waveInStart( ) function.

waveInStart( ): This function starts input on a given audio input device. It has a single parameter, the handle to the waveform-audio device, given by the HWAVEIN structure. It returns MMSYSERR_NOERROR if successful.

Result = waveInStart(InHandle);
if(Result == MMSYSERR_NOERROR)
    // do further processing
else
    // add some error handling code in here

waveInStop( ): Once all the recording is done, one can stop reading data from the sound card using the waveInStop( ) function.

Result = waveInStop(InHandle);
if(Result != MMSYSERR_NOERROR)
    // add some error handling code in here

waveInClose( ): The device is closed using this function. The syntax is as follows.

Result = waveInClose(InHandle);
if(Result != MMSYSERR_NOERROR)
    // add some error handling code in here

5.2-3 Working: The WAVEHDR structure tells us that the buffers are of size 320 bytes. Once a buffer is filled, the CALLBACK function, waveInProc( ), is called. In this function, further processing of this 320-byte block of data is done, as shown in figure 5.1: the data is compressed and mapped, the IFFT of the mapped data is carried out, the Guard Interval is added, and finally the result is played on the second sound card.

5.3 Compressor:
The compressor utilized in the project is the open source SPEEX codec (for the details of the SPEEX codec, please refer to the indexes). Its basic purpose is to compress the incoming stream of 320 bytes. The data is compressed using this codec every time a buffer is filled with sound data and returned to the application. We cannot map the whole 320 bytes of data because that would defeat the purpose of real-time communication in our project; the reasons will become visible later.

5.3-1 Technical Data: The SPEEX codec can compress data to variable limits, the limits being initialized by the programmer. The mapping of the compressed data is to be carried out afterwards. Keeping in mind the constellation diagram, we need to set the quality of the compressed data to such an extent that the compressed data is fully mapped on the constellation diagram. Compression of 320 bytes to 15 bytes is optimal. These 15 bytes can be fully mapped onto the constellation, the method and theory being presented in the next topic.

5.3-2 Programming Notes: To use SPEEX, first we need to create a new state of SPEEX in narrowband mode.

void *state;
state = speex_encoder_init(&speex_nb_mode);

After the creation of the state, which holds the relevant information about the compression, we need to tell SPEEX how much to compress the data. For compression of 320 bytes of data (this constitutes one frame) to 15 bytes, the variable tmp is set to 2 and utilized as shown.

int tmp=2;
speex_encoder_ctl(state, SPEEX_SET_QUALITY, &tmp);

There is a structure SpeexBits that holds the bits that are written to (and read from) the SPEEX routines. It is initialized as follows.

SpeexBits bits;
speex_bits_init(&bits);

When everything is ready, we come to the compression of the incoming data. This incoming data from the sound card is of size 320 bytes, and it must be converted to float for SPEEX to work on it. An ordinary for loop is utilized for this purpose.

for (int i=0;i<FRAME_SIZE;i++)
    input[i]=*(tmp_array+i);

Here, FRAME_SIZE is set to 160, input is an array of float type, and the pointer tmp_array is of type short* and points to the incoming data stream. After this, we flush all the data in the structure bits to encode a new frame. This is done by calling the function

speex_bits_reset(&bits);

Next, we encode the new frame.

speex_encode(state, input, &bits);

The structure bits now contains the compressed data when control is returned to the program. These bits are copied to a char array that can be processed further.

nbBytes = speex_bits_write(&bits, cbits, 200);

nbBytes is an integer that gives the number of bytes written by SPEEX into the char array cbits, using the structure bits. The size of cbits is 200; only the first nbBytes locations of the cbits array carry the compressed data.

5.3-3 Working: Incoming data from the sound card is compressed using the SPEEX encoder. The encoder compresses 320 bytes of data to 15 bytes and then writes it onto an array of char called cbits. The IFFT of the compressed data in this array is taken once this data has been mapped onto the constellation plot.

5.4 Signal Mapping:


Signal mapping is carried out next. Here the incoming data is mapped to the constellation plot, where the programmer selects the constellation; it can be QPSK or anything higher. The mapped signal is then filled into an array, or buffer, which will be utilized by the IFFT block to take the Inverse Discrete Fourier Transform. We discuss some of the features of mapping the data.

5.4-1 Technical Data: It takes 2 bits of the data to map to a single point on the QPSK constellation, and this point is then written to the IFFT buffer, the buffer that will be utilized by the IFFT block to carry on the IFFT process. So 1 byte (8 bits) will be written to 4 distinct indexes of the IFFT buffer. Therefore, the 15 bytes of data, after they are mapped, are written to

15 bytes * 4 indexes/byte = 60 indexes

Thus, from the compressed data, two bits will be taken sequentially at a time and mapped to the constellation diagram. The constellation of QPSK is given as under.


Fig 5.2, QPSK Constellation Diagram

Thus, every 15 bytes will occupy 60 indexes of the IFFT buffer, and then the IFFT block will take the IDFT of this data. This will happen after every 20 msec, i.e. after each 20 msec, we require the IFFT buffer to be ready to accept the incoming data. To map each 2-bit pair, the following conditions are applied: if the first bit of the two bits taken is 0, then the imaginary part at the index of the IFFT buffer is +j, else it is −j; if the second bit is 0, then the real part at the specific index of the IFFT buffer is +1, else it is −1.

5.4-2 Programming Notes: The mapping module takes in the 15 bytes (120 bits) and maps them accordingly. It is again mentioned here that the constellation utilized is QPSK. The module then stores the mapped data in an array known as inData. inData is basically a complex-valued array having two columns: the first column represents the real data and the second column represents the imaginary data. Details of inData are given in the coming topic on the IFFT. The code evaluates 4 indexes of inData from a single byte of incoming compressed data. The reader should not be confused by the earlier discussion, which said that 2 bits are taken at a time; here, the 2 bits of a single byte are handled at a time. Bit-wise operations are performed. The value of a bit in the byte is checked by ANDing the byte with a constant having a 1 at the specific bit position and 0s at all other positions. The result of the AND operation is non-zero (TRUE) if there is a 1 in the byte at the specific location; if there is a 0, then the result is zero (FALSE). Then the above two conditions for mapping are applied to fill the specific index of the buffer.


for(i=0;i<15;i++)
{
    inData[k][0]   = *(cbits+i) & 0x40 ? -1 : 1;
    inData[k][1]   = *(cbits+i) & 0x80 ? -1 : 1;
    inData[k+1][0] = *(cbits+i) & 0x10 ? -1 : 1;
    inData[k+1][1] = *(cbits+i) & 0x20 ? -1 : 1;
    inData[k+2][0] = *(cbits+i) & 0x04 ? -1 : 1;
    inData[k+2][1] = *(cbits+i) & 0x08 ? -1 : 1;
    inData[k+3][0] = *(cbits+i) & 0x01 ? -1 : 1;
    inData[k+3][1] = *(cbits+i) & 0x02 ? -1 : 1;
    k=k+4;
}

5.4-3 Working: When data is compressed to 15 bytes, it is mapped using QPSK modulation. This mapped data is then used to fill the buffer for the IFFT block. The buffer is filled by the mapped data in such a way that the output of the IFFT block will contain frequencies ranging from 187.5 Hz to 3937.5 Hz (details in the upcoming topic).

5.5 IFFT:
This block is responsible for the IFFT of the mapped data. Data comes from the signal mapper every 20 msec and is written to the IFFT buffer at different frequency indexes. The IFFT block takes the IDFT of the data present in the buffer and produces the output in discrete time.

5.5-1 Technical Data: The data from the compressor block is fed to the signal-mapping block. After mapping, this data is fed to the IFFT block, which works with the following specifications:

N = number of points/samples on the frequency axis = 128
fs = sampling frequency = 8000 Hz
fo = bin, the distance between two adjacent frequencies on the frequency axis = 8000 / 128 = 62.5 Hz

It is evident from the given figures that the size of the IFFT buffer must be 128. According to the Nyquist theorem, the maximum output frequency fm produced should follow the law

fm ≤ fs / 2

i.e. the output's maximum frequency should be less than or equal to half of the sampling frequency, 4000 Hz. The output of the signal mapper is used to fill the buffer for the IFFT. These 15 bytes should be mapped fully onto the constellation diagram, and then put into the IFFT buffer at such


places that the maximum frequency of the output is not greater than 4000 Hz. At 4000 Hz, the frequency index is

4000 / 62.5 = 64

We do not want the DC term (zero frequency) in the output, so we start from the 3rd index of the IFFT buffer. Thus we have the output frequency at the 3rd index:

3 * 62.5 = 187.5 Hz

This constitutes an allowable range of 62 frequency indexes, ranging from index 3 to 64. The signal mapper block fills 60 indexes. Therefore, the last index filled by the mapper will be index 63 of the IFFT buffer, if the starting index is 3. Thus the range of frequencies in the output would be

3 * 62.5 Hz ≤ f ≤ 63 * 62.5 Hz, or 187.5 Hz ≤ f ≤ 3937.5 Hz

5.5-2 Programming Notes: The processing of the frequency domain data, the output of the mapper, is all done using FFTW, the Fastest Fourier Transform in the West (for details of FFTW, please refer to the indexes given at the end). First of all, we allocate memory for the input and output arrays of the IFFT. Naturally, there must be 128 elements in each array. inData is the array that will be filled by the signal mapper, i.e. it is the IFFT buffer mentioned earlier. outData is the output array of the IFFT block; it contains the time domain composite signal of the input frequency domain data. The arrays are of type fftw_complex. Both of these arrays consist of 2 columns and 128 rows: the first column contains the real data, and the second column contains the imaginary data.

Fig 5.3, The Structure of Arrays inData and outData


Once the input and output arrays are allocated, we need a plan for taking the IFFT. The plan is a keyword that is discussed in the index at the end.

fftw_plan plan_backward;
plan_backward = fftw_plan_dft_1d(128, inData, outData, FFTW_BACKWARD, FFTW_MEASURE);

plan_backward is the plan that contains the information about the computational procedure that will be carried out when the IFFT is taken using FFTW. It is basically a structure of type fftw_plan. It now knows that the IFFT is of size 128, and that inData and outData are the input and output arrays respectively. Also, the FFTW_MEASURE flag indicates that the planner takes time to calculate the best and most efficient path for the IFFT. The signal mapper fills the array inData, and the outData array is filled by the following command.

fftw_execute(plan_backward);

So the outData array now contains the time domain data of the input array inData. A word about the size of inData and outData is beneficial. Both of these arrays are of size 128 rows * 2 columns and each element is of type double. So there are 128 * 2 = 256 elements and the total size is

256 doubles * 8 bytes/double = 2048 bytes

where a double value takes 8 bytes of memory.

5.5-3 Working: The output of the signal mapper, filled into a buffer, is forwarded to the IFFT block. The IFFT block then takes the IDFT using the FFT procedures, and the output of the IFFT block is ready to be forwarded to the GI insertion block.

5.6 Guard Interval Insertion:


The GI block is responsible for the insertion of the guard interval in an OFDM symbol. The reasons for introducing a guard interval were covered in the previous chapters and will not be discussed further here.


5.6-1 Technical Data: As was discussed previously, the cyclic prefix can be anything from one tenth to one fourth of the total OFDM symbol. We have taken the cyclic prefix equal to one fourth of the time domain OFDM symbol and inserted it at the start of each OFDM symbol. The data coming out of the IFFT block is defined for 128 points, and the data is complex. So, one fourth of the total 128 points is

128 / 4 = 32

It can be seen that we need to append the 32 trailing complex values of the IFFT output at the start of the OFDM symbol. These 32 values comprise

32 * 2 * 8 bytes = 512 bytes

of data, where the factor 2 denotes that each of the 32 values is a complex value comprising a real and an imaginary part, and each part is of size 8 bytes. After we append these 32 values at the start of the symbol, we have the resultant defined at

128 + 32 = 160 points

Therefore, the total size of the symbol is

160 * 2 * 8 bytes = 2560 bytes

5.6-2 Programming Notes: We define an array giData that is of size

160 rows * 2 columns = 320 elements

and of type double; hence, the size of the array is

320 * 8 bytes = 2560 bytes

where the first 32 rows of the array (32 * 2 * 8 = 512 bytes) must contain the cyclic prefix of the IFFT output. To copy the last 32 rows (512 bytes) of the OFDM symbol, we use the function memcpy (the last 32 rows of the 128-row outData array start at row 96).

memcpy(&giData[0][0],&outData[96][0],512);

Now the first 32 complex values of giData contain the last 32 complex values of outData. After copying the trailing data, we copy the original OFDM symbol into the same array.

memcpy(&giData[32][0],&outData[0][0],2048);


and now, the array giData is the output array including the cyclic prefix.

Fig 5.4, Structure of the Array giData

5.6-3 Working: The output of the IFFT block is fed to the GI Insertion block, which adds the guard interval to each OFDM symbol generated every 20 msec. The data at the output of the GI Insertion block is now the composite OFDM signal that we wish to transmit using the audio out of the second sound card (details in the coming text).

5.7 The Pilot Data Block:


The pilot data block is used to insert known symbols into the transmission for the purposes of detection and synchronization. This is redundant data that must be transmitted in order for the receiver to recognize the signal presence and to synchronize with the transmitter.

5.7-1 Technical Data: The pilot data is no different from communication data, at least in size and composition, but the information content of a pilot symbol is zero, which basically means that the receiver already knows what the pilot data contains. We still need these pilots so that the receiver knows the start of the OFDM transmission. The strength of the pilots is somewhat greater than that of the data that will be sent on the link. At some specific indexes of the IFFT buffer, we insert these pilots and then take the IFFT of the buffer to generate a pilot symbol. It is hereby mentioned that in our project, we utilized a comb pilot arrangement. Pilots are sent on the link for the first second before the transmission of voice data begins.


5.7-2 Programming Notes: Let us define the pilot data at the indexes 10, 20 and 29. These indexes are chosen on an experimental basis and do not contribute to the theory. To fill these indexes with data, we employ the following code.

m_iPilotPhase = 10;
inData[0][0]=0;
inData[10][0]=m_iPilotPhase;
inData[20][0]=m_iPilotPhase;
inData[29][0]=m_iPilotPhase;

After filling the inData array with pilot data, the procedure employed is the same, i.e. we take the IFFT and then add the GI. The following lines of code show this.

fftw_execute(plan_backward);
memcpy(&giData[0][0],&outData[96][0],512);
memcpy(&giData[32][0],&outData[0][0],2048);

The next pilot symbol transmitted has an amplitude opposite to that of the one preceding it. This is done to detect the start of the symbol using correlation at the receiver.

m_iPilotPhase=-m_iPilotPhase;

5.7-3 Working: Pilots are symbols sent at the start of a transmission. The receiver uses correlation to detect the start of the symbol, and this fact is utilized when the data for the pilots is prepared.

5.8 The 2nd Sound Card:


This sound card is used to transmit the symbol generated at the output of the GI insertion block. The analog speech data that was captured by the 1st sound card is now ready to be transmitted on the link via the 2nd sound card.

5.8-1 Technical Data: The sound card is sampling at the rate of 8000 Hz and each sample is represented by 2 bytes. The amount of data that this sound card can play (i.e. send on the link) in 1 sec is

8000 samples/sec * 1 sec * 2 bytes/sample = 16000 bytes

So, if it is intended to play the data for 1 sec continuously on the sound card, we need to accumulate 16000 bytes in a buffer and then call the respective API functions.

5.8-2 Programming Notes: To play the data on the sound card, we follow the procedure below. We fill the following structures accordingly.

WAVEFORMATEX: This structure contains the information about the format of the output of the sound card, i.e. it should be filled so as to give us the amount of output data, the number of output channels, etc. The format tag of the output is WAVE_FORMAT_PCM, which indicates that the output is obtained via PCM. The sampling frequency, given by the nSamplesPerSec member, is chosen to be 8000. nBitsPerSample is taken as 16, i.e. we have taken 2 bytes per sample of the PCM data. So the average number of bytes per second is

8000 samples/sec * 2 bytes/sample = 16000 bytes/sec

WAVEHDR: This structure defines the header used to identify a waveform audio buffer. It contains all the information needed by the program to know the whereabouts of the audio output buffer, its length, and some other information such as various flags. Its member lpData is a long pointer to the data buffer where the pulse-coded data is stored. dwBufferLength denotes the length of that buffer. The various flags supply information about the buffer.

Once these structures are filled with appropriate data, we need to open the device for playing the sound data. To open the device, we use the function waveOutOpen( ).

waveOutOpen( ): This function opens the given waveform-audio output device for playback. Its first parameter is a pointer to the handle identifying the waveform output device. The second parameter identifies the device that will be used to play the audio. The third parameter is a pointer to the structure that identifies the desired format for playing waveform-audio data, given by a pointer to a WAVEFORMATEX structure. The fourth and fifth arguments are used for the CALLBACK procedure; in our project, there was no need to implement a CALLBACK procedure for playback, so these two arguments are set to zero. The sixth argument is CALLBACK_NULL, indicating that no CALLBACK actions take place.
The return value of the function is MMSYSERR_NOERROR if the device is opened successfully for playback.

MMRESULT Result = waveOutOpen(&wOut, OUT_DEVICEID, &pwfxOut, 0, 0, CALLBACK_NULL);
if(Result == MMSYSERR_NOERROR)
    // device opened, can start intended actions
else
    // add some error handling code here

After all the headers are filled and the device is opened, the sound card is capable of playing the data. The output data is already prepared and placed in the double array giData by the GI insertion block. Let us denote by out_data an array that has the same data as giData but is of type char (only char data can be written to the sound card). Say the header that is active at the given time is current. The lpData member of this header must point to the sound data. This is accomplished by

current -> lpData = out_data;


Before playing this data, we need to prepare the respective header to be played on the sound card. This is done using the waveOutPrepareHeader( ) function.

waveOutPrepareHeader( ): This function prepares a waveform-audio data block for playing. The first parameter is the handle of the waveform output device, the second argument is a pointer to the header that we wish to prepare, and the third argument is the size of the WAVEHDR block.

waveOutPrepareHeader(wOut, current, sizeof(WAVEHDR));

When the header is prepared, the data is written to the sound card using the waveOutWrite( ) function.

waveOutWrite( ): This function sends a waveform-audio data block to the output device for playback. It has the same parameters as the waveOutPrepareHeader( ) function.

waveOutWrite(wOut, current, sizeof(WAVEHDR));

5.8-3 Working: The data from the GI insertion block is stored in an array of 16000 bytes. These bytes are then written to the 2nd sound card, and the OFDM symbol is thus generated.


5.9 Summary of the Procedure:

1st Sound Card: Fill the structures WAVEINHDR, WAVEFORMATEX, WAVEINCAPS and HWAVEIN. Call the function waveInOpen( ) to open the device. Start recording using waveInStart( ). To stop, use the waveInStop( ) function, and to close the device, use the waveInClose( ) function.

Compressor: Use the open source SPEEX codec for compression. Create a new encoder state in the narrowband mode. Set the quality and initialize the bits structure. Using SPEEX, encode the input from the sound card and write the encoded data to an array of char that can be manipulated.

Signal Mapper: Get the input from the compressor. Map this data on the constellation of QPSK and prepare the IFFT buffer.

IFFT: Take the IFFT of the input data using FFTW. Define the input and output arrays and allocate them memory using the fftw_malloc( ) function. Define the structure fftw_plan for taking the IFFT of the data. Fill the input array with the output of the signal mapper. Take the IFFT of the data, i.e. fill the output array, using the fftw_execute( ) function.

GI Insertion: Use the function memcpy( ) to append the trailing 1/4th part of the OFDM symbol at the start of the symbol. The output symbol generated by the GI insertion block thus becomes the OFDM symbol intended for transmission.

2nd Sound Card: Fill the structures WAVEFORMATEX and WAVEHDR. Call the function waveOutOpen( ) to open the device for playing. Prepare the header for playback and then write the output to the sound card using the waveOutWrite( ) function.
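The GI insertion step summarized above can be condensed into a small self-contained sketch. The function name insertGI is our own; the symbol length of 128 doubles and the 32-sample guard interval are the values used in this project.

```cpp
#include <cstring>

// Sketch of GI insertion: the trailing one-fourth of the 128-sample
// OFDM symbol is copied to the front, giving a 160-sample output
// ready to be written to the 2nd sound card.
const int SYM_N = 128;        // IFFT output length
const int SYM_GI = SYM_N / 4; // guard interval length = 32

void insertGI(const double* symbol, double* out) {
    // cyclic prefix: last SYM_GI samples go first
    std::memcpy(out, symbol + SYM_N - SYM_GI, SYM_GI * sizeof(double));
    // then the original symbol follows
    std::memcpy(out + SYM_GI, symbol, SYM_N * sizeof(double));
}
```

Because the prefix is a copy of the symbol tail, the receiver can simply discard the first 32 samples to recover the original symbol.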


Chapter 6

The OFDM Receiver in C++

The focus of this chapter is to discuss in detail the implementation of the OFDM receiver in C++ and the issues encountered in doing so. The reader should keep in mind the theory of the real-world OFDM receiver given in the preceding chapters.

6.1 Block Diagram of the Receiver System:

Fig 6.1, Implementation of OFDM Receiver in C++

A comparison of the implemented receiver in C++ with the real-world OFDM receivers is described here. It will be seen that the input from the channel is fed to the mic of the 1st sound card. This sound card actually replaces the ADC of the real OFDM receiver. The purpose of the control block is to direct the signal to the appropriate destination block. When the signal is not detected, i.e. the transmission is not detected, the control block sends the output of the 1st sound card to the signal detector. When the signal is detected and the start of the frame is at hand (using the correlation block), the control block drives the GI removal block with the input of the 1st sound card. The signal detection block calculates the total energy in the channel and alarms the rest of the blocks if there is some activity in the channel. When a signal is detected, the next step is to obtain the start of the OFDM frame. The correlation block provides the time synchronization needed to correctly detect the start of the OFDM frame. It also triggers the control block to start driving the input of the GI removal block directly with the output of the 1st sound card. The generation of the imaginary part is carried out using the Hilbert transform block. The real data, after the GI is removed, is fed to this block, which generates the corresponding imaginary data for every index and


hence, at every index, we get a complex number, which is exactly the output of the IFFT block in the transmitter system. The synchronization block removes the sampling rate offset, frequency offset and phase offset. The GI correlation procedure is utilized to obtain the estimates of the offsets, and the correction procedures are then carried out. The GI removal is carried out next. It is pointed out here that the transmitter sends only the real data; it is the responsibility of the receiver to generate the imaginary data on its own. The complex outcome finds its way into the FFT block, which provides at its output the different frequencies that were combined into an OFDM symbol. Signal demapping is the opposite of signal mapping, and thus the original data is recovered using this block (by original data, it is meant the data which is not mapped to any constellation). The decompressor was not shown as part of a real-world OFDM receiver system, although it is an integral part of the OFDM system and is necessary in our project for the reasons that follow. The decompressed data is actually the digitized sound data input of the OFDM transmitter. So to get the analog output, we pass it through to the 2nd sound card, whose speaker system is utilized to generate the analog output. This 2nd sound card is in fact replacing the DAC of the real-world OFDM receiver. The coming topics will describe the working of each block separately, and we will discuss the details of the C++ code that makes each block work as required.

6.2 The 1st Sound Card:

The 1st sound card of the receiver system is exactly the same as the 1st sound card of the transmitter system. It captures the analog OFDM signal, converts it into digital form, and then feeds the data to the signal detection block, which determines whether the signal input is an OFDM symbol or merely noise. 6.2-1 Technical Data: The sound card will receive data at the rate of 8000 samples per second. Exactly like the 1st sound card of the transmitter, this data will be pulse coded and each pulse will be represented by a 16-bit value, or 2 bytes. Since the receiver system is implemented in real time, we need to forward some data from the sound card after a specific time, say 20 msec (or we may say that we need to forward bursts of data). We cannot just capture the input data for, e.g., 10 sec and then forward it. The signal detection block requires constant data input and constant calculations to detect the presence of the expected signal. In other words, the sound card will capture input from the communication channel for 20 msec and will forward it to the signal detector, which will further process the input data. It will then start capturing the input analog data again, and the cycle will carry on. The loss due to the delay of sending the input data and processing it after every 20 msec is tolerable. Now, if we are sampling at 8000 samples per second with 2 bytes representing each sample, then the total number of bytes recorded after 20 msec will be:


8000 samples/sec * 2 bytes/sample * 20 msec = 320 bytes

This suggests a buffer size of 320 bytes for the 20 msec spurts of input data. 6.2-2 Programming Notes: To open the sound card for recording, we need the following structures to be filled with appropriate data. WAVEFORMATEX: This structure contains the information about the format of the output of the sound card, i.e. it should be filled accordingly to give us the amount of output data, the number of output channels etc. The format tag of the output is WAVE_FORMAT_PCM, which indicates that the output is obtained via PCM. The sampling frequency, given by the nSamplesPerSec variable, is chosen to be 8000. The nBitsPerSample is taken as 16, i.e. we have taken 2 bytes per sample of the PCM data. So the average number of bytes per second becomes 8000 samples/sec * 2 bytes/sample = 16000 bytes/sec. HWAVEIN: This handle contains the information about the device (sound card) itself. Using this handle, we can control the sound card as we wish. WAVEINHDR: This structure defines the header used to identify a waveform audio buffer. It contains all the information needed by the program to know the whereabouts of the buffer, its length and some other information like different flags. Its member lpData is a long pointer to the data buffer where the pulse coded data will be stored. The dwBufferLength denotes the length of that buffer, and the dwBytesRecorded gives us the number of bytes recorded at a given time once the audio input device has started receiving the input data. The various flags supply the information about the buffer. WAVEINCAPS: This structure stores the information about the sound device that is installed in the computer, for example the device name. Once all these structures are filled with appropriate data, we need to open the device for recording. To open the device, we use the function waveInOpen( ). waveInOpen( ): This function opens the device for capturing analog input.
Its first parameter is the pointer to the handle of the device given by HWAVEIN. The second parameter is the ID of the device. There may be more than one sound card installed on the PC, so the DeviceID is used to select the intended device. The third parameter is the pointer to the structure that identifies the desired format for recording waveform-audio data. This is given by the WAVEFORMATEX structure. The fourth argument is the pointer to a CALLBACK function, or an event handler, or a window, or a thread identifier. We will not use the CALLBACK function in our project. The fifth argument is the user-instance data passed to the callback mechanism. The sixth and final argument is the flag for opening the device. We used the flag CALLBACK_NULL to indicate that we will not employ any CALLBACK procedure. The return value of the function is MMSYSERR_NOERROR if the device is successfully opened, or an error otherwise.


Result = waveInOpen(&InHandle, InDeviceID, &InWaveFormat, 0, 0, CALLBACK_NULL);
if( Result == MMSYSERR_NOERROR)
    // device opened, can start recording
else
    // add some error handling code here

After the device is opened, we can start recording. To start the recording, we call the waveInStart( ) function.

waveInStart( ): This function starts input on a given audio input device. It has a single parameter, the handle to the waveform-audio device, given by the HWAVEIN structure. It returns MMSYSERR_NOERROR if successful.

Result = waveInStart(InHandle);
if( Result == MMSYSERR_NOERROR)
    // do further processing
else
    // add some error handling code in here

waveInStop( ): Once all the recording is done, one can stop reading data from the sound card using the waveInStop( ) function.

Result = waveInStop(InHandle);
if( Result != MMSYSERR_NOERROR)
    // add some error handling code in here

waveInClose( ): The device is closed using this function. The syntax is as follows.

Result = waveInClose(InHandle);
if( Result != MMSYSERR_NOERROR)
    // add some error handling code in here

6.2-3 Working: The WAVEINHDR structure tells us that the buffers are of size 320 bytes. Once a buffer is filled, the data is forwarded to the signal detection block. If the signal is already detected, and the start of the OFDM frame is also known, then this data is fed to the GI removal block, which removes the GI from the OFDM symbols.

6.3 The Control Block:

The purpose of the control block is to direct the flow of data accordingly. It is not an independent entity, yet, for clarity, it is shown so. The reader just needs to remember that the data from the sound card must follow the appropriate path. If the signal is not detected, the data from the sound card must go to the signal detection block. If the signal is detected, and the start of the signal is desired, the data flows to the correlation block.

If the signal is detected and the start of the symbol is also known, then the data must be directed to the Hilbert transform block for a specific time, equal to the length of the transmission.

6.4 The Signal Detection Block:

This block is responsible for announcing to the rest of the blocks the arrival of an OFDM signal at the input of the 1st sound card. It uses the energy based detection scheme, explained thoroughly in the chapter on the real-world OFDM receiver. 6.4-1 Technical Notes: The signal detection is based on the principle of a single window of finite size used to calculate the energy of the incoming data. Referring to article 4.5-1, we see the need to denote a size of the window, i.e. L. Here, in our project, we choose

L = 32

This means that the signal detection block collects 32 samples from the 1st sound card and then uses the appropriate equations to calculate the energy level of the signal at the input of the 1st sound card. It is to be noted that the signal detection block calculates the energy recursively. If the energy of 32 samples of data is greater than some THRESHOLD value, then a signal is actually present at the input of the 1st sound card and hence further processing is carried out. In our project, the THRESHOLD was chosen to be

THRESHOLD = 5500

It is to be clarified that this is purely an experimental value and was not calculated; rather, many THRESHOLD values were observed for various readings and the most appropriate was chosen. 6.4-2 Programming Notes: An infinite loop starts execution when the program starts. This loop continuously reads the input of the first sound card, forwards its output, and continually determines the energy level of the incoming data. We first define an array of size 32 and type double to hold the 32 samples of the input waveform. If energy is the variable denoting the energy of the 32 samples of the input array, denoted by in_arr, then to calculate this variable we employ the following loop.

for(n=0 ; n<32 ; n++)
    energy = energy + in_arr[n] * in_arr[n];

The energy is calculated according to the formula given in the text of article 4.5-1.


If the energy is greater than some threshold value, then the signal is detected; otherwise it is not, and the loop continues.

if(energy > THRESHOLD)
{
    //the signal is detected; call the correlation block to get
    //start of the input frame.
}
else
{
    //no signal present, start the energy calculation process
    //again
}

6.4-3 Working: The signal detection block enables the further processing of the signal if the signal is faithfully detected, and will not let the receiver initiate further action unless a signal is detected at its input.
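The detection rule described above can be condensed into a small, self-contained sketch. The function name signalDetected is our own; the window size L = 32 and THRESHOLD = 5500 are the values quoted in the text.

```cpp
#include <cstddef>

// Windowed energy detector: the energy of the last L samples is
// compared against an experimentally chosen THRESHOLD, exactly as
// in the loop shown in the text.
const int L = 32;
const double THRESHOLD = 5500.0;

// Returns true if the energy of an L-sample window exceeds THRESHOLD,
// i.e. a signal (rather than silence/noise) is assumed present.
bool signalDetected(const double* in_arr) {
    double energy = 0.0;
    for (int n = 0; n < L; n++)
        energy += in_arr[n] * in_arr[n];
    return energy > THRESHOLD;
}
```

For example, a window of 32 silent samples yields zero energy and no detection, while 32 samples of amplitude 20 yield an energy of 12800, well above the threshold.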

6.5 The Correlation Block:

The procedure to find the start of the OFDM symbol was discussed in article 4.7, time synchronization. On the transmission side, we send out the pilot symbols for the first second of transmission. These pilot signals are utilized here to get the start of the frame. It is owing to these pilot symbols that the signal detector block recognizes the activity in the channel. 6.5-1 Technical Data: As was previously discussed in article 4.7, we have to collect at least two OFDM symbols to find the start of the frames that are due to come. The data was sent in frames comprising 160 double values. Therefore, we need to collect 160 * 2 = 320 double values from the output of the 1st sound card. After this much data is collected, we need to distribute the data between two sliding windows, each of size equal to the GI (i.e. 32 doubles) and having a difference of 160 (symbol length) − 32 (length of GI) = 128 indexes between them. Now that the windows are ready, we need to correlate the data of the two windows. This correlation must continue till the first sliding window slides through 160 double values. The starting index of the first sliding window for which the correlation is a maximum is the starting index of the frames, for the reasons discussed in 4.7.


6.5-2 Programming Notes: The 2 OFDM symbols required are stored in an array pilotData. The pilotData is an array of size 320 double values. We loop through the pilotData array to indicate the starting index of the first sliding window, which is of size 32 doubles. The second sliding window follows 128 indexes later. Filling these sliding windows is done using the memcpy( ) function. The function to correlate these two sliding windows is then called.

for(int i=0 ; i<160 ; i++)
{
    memcpy(win1,&pilotData[i],256);
    memcpy(win2,&pilotData[i+128],256);
    //call the correlation function here
}

win1 and win2 are the two sliding windows, each of size 32 doubles. The correlation function determines the starting index. It fills a variable index, which is actually the value of i in the above loop for which the correlation peak is the maximum; this is why the value of i is passed to this function. The correlation function is given as under.

for(int n=0 ; n<31 ; n++)
{
    for(int k=0 ; k<n+1 ; k++)
        cArr[n] = win1[k] * win2[31-n+k] + cArr[n];
    if(cArr[n] > max)
    {
        max = cArr[n];
        index = i;
    }
}
memset(cArr, 0, 504);
for(n=0 ; n<32 ; n++)
{
    for(int k=0 ; k<32-n ; k++)
        cArr[n+31] = win1[n+k] * win2[k] + cArr[n+31];
    if(cArr[n+31] > max)
    {
        max = cArr[n+31];
        index = i ;
    }
}


An interested reader can verify that the above program code produces the correlation outcome in the array cArr of size 63 doubles. The maximum correlation peak is stored in the variable max, and the index at which the maximum peak occurred during the correlation of the whole pilotData is stored in the variable index. 6.5-3 Working: The correlation function helps to detect the start of the OFDM frame. In our project, we transmit the voice data for about 10 sec after the transmitter and receiver have exchanged the pilot signals (whose sole purpose was to aid the receiver in detecting the signal and getting the start of the signal). Therefore, after we get the start, we do not correlate for each and every input frame.

6.6 Hilbert Transform Block:

For the theory of the Hilbert transform, the reader is referred to the appendix at the end. In short, the discrete Hilbert Transform (DHT) is responsible for the generation of the imaginary output from the real input. 6.6-1 Technical Data: Each input frame is 160 double values, and from these real values we ought to calculate 160 imaginary values. As stated, to extract the imaginary part from an array of real elements, we need to perform the following steps.

Fig 6.2, Schematic of Obtaining the Hilbert Transform

The scheme is proposed in the above figure. The real and imaginary values will then be combined to produce the complex input to the FFT block. h[n] is also of size 160 values, and it is required to be multiplied with the output of the FFT block (note that we are mentioning the FFT block within the Hilbert transform, as shown in the above figure, and not the FFT block of the receiver system). 6.6-2 Programming Notes: We initialize the sequence h[n] using the following lines of code.

for(int k = 1 ; k <= N/2-1 ; k++)
    h[k] = -1;
for(k = N/2+1 ; k <= N-1 ; k++)
    h[k] = 1;


and N is equal to 160. The real data was collected in the GI removal block, in the array realData. Now is the time to utilize this array. First, we need to perform the FFT on this data. It is to be noted that the input data is purely real. For any sequence x[n] whose Fourier transform is given by X(e^jw), if x[n] is real, then

Re{X(e^jw)} = Re{X(e^-jw)}

and

Im{X(e^jw)} = -Im{X(e^-jw)}

i.e. the Fourier transform satisfies the Hermitian redundancy. We will utilize FFTW in the whole process, and FFTW has commands for the transform of real input data, which contributes to the efficiency of finding the FFT of the real input. These transforms compute the complex output from the real input, but FFTW will not generate the redundant data; it is our job to generate these redundant outputs ourselves. We use the function fftw_execute(plan_r2c) to calculate this FFT. The transform is made to recognize that realData is the input array of real data before it performs any transformation. This is done when the plan plan_r2c is initialized in the main program. After calling this function, the output composite array, comData, is filled, and we fill in the redundant data on our own. The piece of program code is given below.

fftw_execute(plan_r2c);
for(int b = N/2+1 ; b < N ; b++)
{
    comData[b][0] = comData[N-b][0];
    comData[b][1] = -comData[N-b][1];
}

The next step is to compute the analytic data from this composite data, the result being stored in the array anaData. This anaData is obtained by multiplying the comData with h[n].

for(int k = 0 ; k < N ; k++)
{
    anaData[k][0] = -h[k] * comData[k][1];
    anaData[k][1] = h[k] * comData[k][0];
}

FFTW also has commands to draw out real data from complex data that satisfies the Hermitian redundancy. This transform is initialized by indicating that the input array is anaData and the output array is imagData. The call to the function fftw_execute(plan_c2i) fills the array imagData.

fftw_execute(plan_c2i);
for(int i =0 ; i < 128 ; i++)
    inData[i][1] = imagData[i+32];


The imaginary part of inData is thus obtained by copying imagData to the imaginary part of the array inData. The array inData is now ready to be fed to the FFT block. 6.6-3 Working: The Hilbert Transform block is used to obtain the imaginary part of the real input data. The real and imaginary parts at the respective indexes are now conjoined to form a complex value at each index, and the resultant array is ready to be fed to the FFT block. The blocks following the Hilbert transform block are exactly the opposite of those present in the transmitter, i.e. the FFT is the opposite of the IFFT, the signal demapper is the opposite of the signal mapper, and the decompressor is the opposite of the compressor.
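The whole scheme (FFT, multiply by j·h[k], inverse FFT) can be checked numerically with a compact sketch that uses a naive O(N²) DFT in place of FFTW. The function name hilbert is our own; this is an illustration of the method, not the project code. For an input cosine, the output should be the corresponding sine, which is exactly the Hilbert-transform relationship.

```cpp
#include <complex>
#include <vector>
#include <cmath>

const double PI = std::acos(-1.0);

// Naive-DFT illustration of the Hilbert transform scheme above:
// the spectrum is multiplied by j*h[k], with h[k] = -1 for positive
// frequencies, +1 for negative ones and 0 at DC and N/2, matching
// the h[] initialization shown in the text. The inverse transform
// then yields the quadrature (imaginary) component.
std::vector<double> hilbert(const std::vector<double>& x) {
    const int N = (int)x.size();
    const std::complex<double> j(0.0, 1.0);
    std::vector<std::complex<double>> X(N), Y(N);
    for (int k = 0; k < N; k++)                 // forward DFT
        for (int n = 0; n < N; n++)
            X[k] += x[n] * std::exp(-j * (2.0 * PI * k * n / N));
    for (int k = 1; k < N; k++) {               // apply j*h[k]
        double h = (k < N / 2) ? -1.0 : (k > N / 2 ? 1.0 : 0.0);
        Y[k] = j * h * X[k];
    }
    std::vector<double> out(N);
    for (int n = 0; n < N; n++) {               // inverse DFT
        std::complex<double> s(0.0, 0.0);
        for (int k = 0; k < N; k++)
            s += Y[k] * std::exp(j * (2.0 * PI * k * n / N));
        out[n] = s.real() / N;
    }
    return out;
}
```

With x[n] = cos(2π·2n/N), the returned sequence is sin(2π·2n/N) to within rounding error, confirming that the h[k] weighting implements the Hilbert filter.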

6.7 Synchronization Block:

The Synchronization Block consists of the code incorporated for the correction of sampling frequency errors, ISI and ICI, produced due to frequency offsets at the receiver. The same theory of sampling clock synchronization, and frequency and phase synchronization, applies here as was discussed in chapter 4. 6.7-1 Technical Data: This block uses the correlation, Hilbert Transform, FFT and IFFT algorithms. Moreover, a comprehensive study is involved to resample the input data. Fortunately, the libsamplerate library is available to programmers, designed to resample the input at a rate given by the user. 6.7-2 Programming Notes: To use the libsamplerate library, one needs to fill the structure SRC_DATA. We define the input and output arrays as well as their sizes. The implementation is given under.

myData.data_in=inputData;
myData.data_out=resampledData;
myData.input_frames=N;
myData.output_frames=N;
myData.src_ratio=SampleRate;

When the structure is appropriately filled, we call the function to resample the input data.

res=src_simple(&myData,SRC_SINC_BEST_QUALITY,1);

For frequency offset correction, we employ the procedure given in the theory of chapter 4. 6.7-3 Working: Data from the Hilbert transform finds its way to the synchronization block, whose output is fed to the GI removal block for demodulation of the OFDM symbol. This data still contains errors, but they are much less severe than they would be had we not employed the synchronization algorithms.
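To make the idea of sample-rate conversion concrete, the following toy resampler takes output sample m at input position m/ratio and linearly interpolates between the two neighbouring input samples. This is NOT libsamplerate: the SRC_SINC_BEST_QUALITY converter used in the project performs far better band-limited (sinc) interpolation, and the function name resampleLinear is our own.

```cpp
#include <vector>
#include <cmath>

// Toy linear-interpolation resampler illustrating what a sample-rate
// converter does conceptually. ratio > 1 stretches (upsamples) the
// signal, ratio < 1 compresses it, mirroring src_ratio in SRC_DATA.
std::vector<double> resampleLinear(const std::vector<double>& in, double ratio) {
    int outLen = (int)std::floor((in.size() - 1) * ratio) + 1;
    std::vector<double> out(outLen);
    for (int m = 0; m < outLen; m++) {
        double pos = m / ratio;          // fractional input position
        int i = (int)pos;
        double frac = pos - i;
        double next = (i + 1 < (int)in.size()) ? in[i + 1] : in[i];
        out[m] = (1.0 - frac) * in[i] + frac * next;
    }
    return out;
}
```

Resampling a linear ramp by a ratio of 2, for instance, simply halves the step between successive samples, which is easy to verify by hand.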


6.8 GI Removal Block:

It is probably the simplest of all the blocks presented. Its purpose is to remove the GI from the incoming data stream. After detection of the signal start, sending data to this block is the next logical step. As stated in the previous text, GI insertion is necessary for an OFDM symbol to reach its destination and be recovered faithfully. 6.8-1 Technical Data: The IFFT block at the transmitter system generated 128 double values, real and imaginary. The imaginary part was removed and only the real part was forwarded to the GI insertion block. The GI insertion block appended each frame with one-fourth of the frame, so each frame was represented by 128 + 128/4 = 160 double values. These were then sent to the 2nd sound card of the transmitter, and these are precisely the values that the GI removal block should obtain. The GI removal block collects 160 double values from the 1st sound card, discards the GI, and the remaining 128 double values of data are retained. These are the real values that were present at the output of the IFFT block of the transmitter. 6.8-2 Programming Notes: Let us say that giData is the array of size 160 complex values that receives the data. Then we must discard the GI from this data and store the remaining data for the FFT block that follows the GI removal block.

for(int i = 0 ; i < N ; i++)
    inData[i][0] = giData[32+i][0];

Here, N = 128. The giData array has its first 32 values as the GI data, so the actual data starts after these 32 values, and this data is stored in inData, of size 128 doubles. 6.8-3 Working: The GI removal block is responsible for the removal of the redundant GI data. Once the GI is removed, the data is ready to be treated in the FFT block.

6.9 The FFT Block:

The data present at the input of the FFT block is exactly the same as the data present at the output of the IFFT block in the transmitter, if no error had occurred during transmission. Thus, we can obtain the constellation mapping after taking the FFT of the input data.


FFT stands for Fast Fourier Transform, and we use FFTW to compute this FFT. The calculations involved are well documented in the FFTW appendix presented at the end of the thesis. 6.9-1 Technical Data: The input to the FFT block is a 128 complex-valued array in the time domain, generated using the GI removal block and the Hilbert transform block.

Fig 6.3, Illustration of the FFT Block

The FFT buffer collects this data and then takes the FFT of the data, producing an output of 128 complex values in the frequency domain. 6.9-2 Programming Notes: It was previously stated that the array inData is the input to the FFT block, whose output is outData, an array similar to inData. Before performing the FFT of the data in the inData array, we inform FFTW, using the plan plan_forward, that the input array is inData and the output array is outData. Afterwards, when inData is filled, we perform the FFT using the following command.

fftw_execute(plan_forward);

Once the command is executed, the array outData is filled with the output of the FFT block. Hence, we have at hand the constellation of the original sound data of the transmitter. At the transmitter, the IFFT buffer was filled from index 3 to 63. Hence, the outData will also be considered for these indexes only. 6.9-3 Working: Once the input to the FFT block is prepared via the GI removal and the Hilbert transform blocks, the FFT is taken. The output data of this block is now ready to be demapped.


6.10 The Signal Demapper Block:

In the transmitter system, before we performed the IFFT, the sound data was compressed and mapped on the constellation diagram. Therefore, the block that follows the FFT block in the receiver must be the signal demapper, the reverse of the signal mapper block. The demapper block will collect data from the FFT block. After that, it will make a decision on each data element as to which point on the constellation diagram that element represents. Thus it will collect the bits represented by each of the data elements, conjoin them, and make a composite data stream of bits to be fed to the decompressor/decoder block.

Fig 6.4, QPSK Constellation Diagram

6.10-1 Technical Data: The input to the signal demapper is a stream of data, which is basically the output of the FFT block. The output of the FFT block consists of 128 complex values. On the transmitter side, the sound data was mapped on the QPSK constellation; hence the real and imaginary parts of the FFT output will each take one of two signed values. The QPSK constellation is shown above. It will be seen that the output of the FFT block is a combination of a real and an imaginary part, and each complex output will represent a point on the constellation. Thus, the corresponding 2 bits will be collected, and then all the pairs of bits will be combined to form a composite data stream. The reader will observe that for each complex output, the following two conditions are met. If the real part of the complex value is +1, then the 2nd bit is 0; else it is 1. If the imaginary part of the complex value is +j, then the 1st bit is 0; else it is 1.

6.10-2 Programming Notes: It is already known that at the transmitter, we used the SPEEX codec to compress the input 320 bytes to 15 bytes and then mapped these 15 bytes accordingly. Therefore, the demapper will collect the 15 bytes from the array outData. To collect these 15 bytes, the


demapper will start processing the array outData from index 3 and will go on until it reaches index 63 of the outData buffer.

i = 3;
for(int k=0;k<15;k++)
{
    *(cbits+k)=outData[i][1] <0 ? *(cbits+k) | 0x80: *(cbits+k) & 0x7f;
    *(cbits+k)=outData[i][0] <0 ? *(cbits+k) | 0x40: *(cbits+k) & 0xbf;
    *(cbits+k)=outData[i+1][1]<0 ? *(cbits+k) | 0x20: *(cbits+k) & 0xdf;
    *(cbits+k)=outData[i+1][0]<0 ? *(cbits+k) | 0x10: *(cbits+k) & 0xef;
    *(cbits+k)=outData[i+2][1]<0 ? *(cbits+k) | 0x08: *(cbits+k) & 0xf7;
    *(cbits+k)=outData[i+2][0]<0 ? *(cbits+k) | 0x04: *(cbits+k) & 0xfb;
    *(cbits+k)=outData[i+3][1]<0 ? *(cbits+k) | 0x02: *(cbits+k) & 0xfd;
    *(cbits+k)=outData[i+3][0]<0 ? *(cbits+k) | 0x01: *(cbits+k) & 0xfe;
    i=i+4;
}

The principles employed in this piece of code are the two conditions described above. The value of outData at a specific index determines the value of the bits. Bitwise operations are again utilized. Every real and imaginary value produces 1 bit, and the bits produced by one index are appended to the rest of the bits. It is emphasized here that 4 complex values at 4 indexes are used to make 1 byte of data. The 60 indexes processed will thus produce 15 bytes, as already pointed out. 6.10-3 Working: The input to this block is the output of the FFT block. This data is demapped and 15 bytes are obtained from the demapping process. The demapping process is the inverse of the mapping process used at the transmitter.
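The bit decisions above can be checked with a small round-trip sketch. The helper names mapByte and demapByte are our own; the convention, as in the text, is that a 1-bit corresponds to a negative constellation level and a 0-bit to a positive one, imaginary part first, MSB first, four complex values per byte.

```cpp
// Round trip of the demapping rule: mapByte produces the four
// (real, imaginary) pairs for a byte, and demapByte applies the same
// sign tests and bit masks as the demapper loop in the text, where
// sym[q][0] is the real part and sym[q][1] the imaginary part.
void mapByte(unsigned char b, double sym[4][2]) {
    for (int q = 0; q < 4; q++) {
        sym[q][1] = (b & (0x80 >> (2 * q))) ? -1.0 : 1.0; // imaginary bit
        sym[q][0] = (b & (0x40 >> (2 * q))) ? -1.0 : 1.0; // real bit
    }
}

unsigned char demapByte(const double sym[4][2]) {
    unsigned char b = 0;
    for (int q = 0; q < 4; q++) {
        if (sym[q][1] < 0) b |= 0x80 >> (2 * q); // negative => bit is 1
        if (sym[q][0] < 0) b |= 0x40 >> (2 * q);
    }
    return b;
}
```

Mapping any byte and demapping it again returns the original byte, which confirms that the mask sequence 0x80, 0x40, ..., 0x01 walks through the eight bits in the same order as the demapper loop.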

6.11 Decompressor:
The decompressor block is responsible for the decoding of the compressed data. Although not shown in the real-world OFDM receiver system, this decompressor, along with the compressor at the transmitter, forms an integral part of the OFDM communication system. 6.11-1 Technical Data: At the transmitter, the speech signal from the 1st sound card was compressed at 15 bytes per 320 bytes of data, i.e. for each 20 msec the speech signal was converted to 15 bytes for further processing.

Therefore, in the receiver system, it is required to do the opposite, i.e. we shall now convert the 15 bytes coming from the receiver system into the corresponding 320 bytes, again using the SPEEX codec. 6.11-2 Programming Notes: First of all, a new decoder state in narrowband mode is created using the following statements.

void* state;
state=speex_decoder_init(&speex_nb_mode);
speex_decoder_ctl(state,SPEEX_SET_ENH,&tmp);

Setting the tmp variable equal to 1 enables the SPEEX enhancement mode. We also need a SPEEX bit-packing structure, which is generated and initialized using the following commands.

SpeexBits bits;
speex_bits_init(&bits);

Once the decoder initialization is done, for every input frame:

speex_bits_read_from(&bits, cbits, nbBytes);
speex_decode_int(state, &bits, output);

where cbits is a (char *) containing the bit-stream data received for a frame, nbBytes is the size (in bytes) of that bit-stream (here nbBytes is equal to 15), and output points to the area where the decoded stream will be written. The output can then be converted to the short type using the following loop.

for (int i=0;i<FRAME_SIZE;i++)
    tmp_array[i]=output[i];

The FRAME_SIZE in our project was 160, so the 160 short values in tmp_array correspond to 320 char values. This is the sound data needed to be played on the 2nd sound card. 6.11-3 Working: The decompressor receives the compressed data from the signal demapper block as input. This data is decompressed and is ready to be sent to the 2nd sound card.

6.12 2nd Sound Card:

This sound card is used for playing the sound data. It receives its input from the decompressor block. The analog data received by the 1st sound card of the receiver was processed and passed through many stages in order to be heard via the 2nd sound card as the speech signal.


6.12-1 Technical Notes: The decompressor block generates the 320 bytes of sound data. These 320 bytes are played on the 2nd sound card over 20 msec. The sound card plays at a sampling rate of 8000 Hz, with 2 bytes representing one sample, so the 320 bytes are played for

1/8000 sec/sample * 320/2 samples = 20 msec

which is in agreement with the argument before.

6.12-2 Programming Notes: To play the data on the sound card, we need to follow the procedure below. First, we fill the following structures accordingly.

WAVEFORMATEX: This structure contains the information about the format of the output of the sound card, i.e. it should be filled to specify the amount of output data, the number of output channels, etc. The format tag of the output is WAVE_FORMAT_PCM, which indicates that the output is PCM data. The sampling frequency, given by the nSamplesPerSec member, is chosen to be 8000. wBitsPerSample is taken as 16, i.e. we have 2 bytes per sample of the PCM data, so the average number of bytes per second is

8000 samples/sec * 2 bytes/sample = 16000 bytes/sec

WAVEHDR: This structure defines the header used to identify a waveform-audio buffer. It contains all the information needed by the program to know the whereabouts of the audio output buffer, its length, and some other information such as various flags. Its member lpData is a long pointer to the data buffer where the pulse-coded data is stored. dwBufferLength gives the length of that buffer, and the various flags supply further information about the buffer.

Once these structures are filled with appropriate data, we open the device for playing the sound data using the function waveOutOpen( ).

waveOutOpen( ): This function opens the given waveform-audio output device for playback. Its first parameter is a pointer to the handle identifying the waveform output device. The second parameter identifies the device that will be used to play the audio.
The third parameter is a pointer to the WAVEFORMATEX structure that identifies the desired format of the waveform-audio data to be played. The fourth and fifth arguments are used for the CALLBACK procedure. In our project there was no need to implement a CALLBACK procedure at the receiver, so these two arguments are set to zero. The sixth argument is CALLBACK_NULL, indicating that no CALLBACK actions take place. The return value of the function is MMSYSERR_NOERROR if the device is opened successfully for playback.

MMRESULT Result = waveOutOpen(&wOut, OUT_DEVICEID, &pwfxOut, 0, 0, CALLBACK_NULL);
if (Result == MMSYSERR_NOERROR)
    // device opened, can start intended actions


else
    // add some error handling code here

After all the headers are filled and the device is opened, the sound card is capable of playing sound. The sound data has already been prepared and placed in the short array tmp_array by the decompressor block. Let sound_data denote an array that holds the same data as tmp_array but whose type is char (only char data can be written to the sound card). Say the header that is active at the given time is current. The lpData member of this header must point to the sound data, which is accomplished by

current->lpData = sound_data;

Before playing this data, we need to prepare the respective header with the waveOutPrepareHeader( ) function.

waveOutPrepareHeader( ): This function prepares a waveform-audio data block for playback. The first parameter is the handle of the waveform output device, the second argument is a pointer to the header that we wish to prepare, and the third argument is the size of the WAVEHDR block.

waveOutPrepareHeader(wOut, current, sizeof(WAVEHDR));

When the header is prepared, the data is written to the sound card using the waveOutWrite( ) function.

waveOutWrite( ): This function sends a waveform-audio data block to the output device for playback. It takes the same parameters as the waveOutPrepareHeader( ) function.

waveOutWrite(wOut, current, sizeof(WAVEHDR));

6.12-3 Working: The sound card accepts data from the decompressor block and plays it. This is the same sound data that was input to the transmitter; after being sent using OFDM, we were able to listen to it at the output of the receiver.
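The two little calculations in 6.12-1 (average bytes per second, and playback time per buffer) can be written out as plain arithmetic, independent of the Windows API:

```cpp
// Playback format figures for 8000 Hz, 16-bit, mono PCM.
const int SAMPLES_PER_SEC   = 8000;
const int BYTES_PER_SAMPLE  = 2;          // wBitsPerSample = 16
const int AVG_BYTES_PER_SEC = SAMPLES_PER_SEC * BYTES_PER_SAMPLE;

// Duration, in milliseconds, of a buffer of the given size in bytes.
int buffer_duration_ms(int buffer_bytes) {
    int samples = buffer_bytes / BYTES_PER_SAMPLE;
    return 1000 * samples / SAMPLES_PER_SEC;
}
```

So a 320-byte buffer holds 160 samples and plays for 20 msec, matching the 20 msec speech capture period at the transmitter.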


6.13 Summary of the Procedure:


Signal Detection

1st Sound Card

Fill the structures WAVEHDR, WAVEFORMATEX and WAVEINCAPS, and declare an HWAVEIN device handle. Call the function waveInOpen( ) to open the device. Start recording using waveInStart( ). To stop, use the waveInStop( ) function, and to close the device, use the waveInClose( ) function.

Detect the signal using the energy-based detection scheme. Using a single sliding window, calculate the energy over each window of the input and announce the arrival of data whenever the energy in the window crosses a threshold.
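The energy detector can be sketched as a running-sum single sliding window (a simplified stand-in for the project's routine; the window length and threshold here are illustrative):

```cpp
#include <cstddef>
#include <vector>

// Slide a fixed-length window over the received samples and report the first
// start index at which the in-window energy crosses the threshold, or -1.
long detect_signal(const std::vector<short>& x, std::size_t win, double threshold) {
    if (x.size() < win) return -1;
    double energy = 0.0;
    for (std::size_t i = 0; i < win; ++i)
        energy += static_cast<double>(x[i]) * x[i];
    if (energy > threshold) return 0;
    for (std::size_t i = win; i < x.size(); ++i) {
        // Update the running sum: add the newest sample, drop the oldest.
        energy += static_cast<double>(x[i]) * x[i]
                - static_cast<double>(x[i - win]) * x[i - win];
        if (energy > threshold) return static_cast<long>(i - win + 1);
    }
    return -1;
}
```

The running-sum update keeps the detector O(1) per sample, which matters when it has to keep up with the sound card in real time.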

Correlation

GI Removal

After detecting the signal, use the correlation block and the pilots that are sent at the start of each transmission to get the starting index of the OFDM frame.
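The pilot correlation step can be illustrated with a plain real-valued cross-correlation search (a sketch; the project's actual routine uses its own window sizes and pilot sequence):

```cpp
#include <cstddef>
#include <vector>

// Cross-correlate the received samples with the known pilot sequence and
// return the lag at which the correlation peaks; the OFDM frame is taken
// to start at that index.
std::size_t find_frame_start(const std::vector<double>& rx,
                             const std::vector<double>& pilot) {
    std::size_t best = 0;
    double best_val = 0.0;
    for (std::size_t lag = 0; lag + pilot.size() <= rx.size(); ++lag) {
        double acc = 0.0;
        for (std::size_t n = 0; n < pilot.size(); ++n)
            acc += rx[lag + n] * pilot[n];
        if (acc > best_val) { best_val = acc; best = lag; }
    }
    return best;
}
```

Because the pilot sequence is known at both ends, the correlation peaks sharply where the pilots line up, giving the symbol boundary even in the presence of moderate noise.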

When the start of the frames is known, use the GI removal block to remove the GI from the input data at the 1st sound card. Use functions like memcpy( ) to detach the transmission data from the GI data and discard the GI data from the input.
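The GI removal itself is a plain block copy. Assuming 160-sample symbols with a 32-sample guard interval (the project's figures), it amounts to:

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

const int GI_LEN  = 32;    // guard interval samples per symbol
const int SYM_LEN = 160;   // total samples per OFDM symbol (GI + 128 data)

// Strip the guard interval from each received symbol, keeping only the
// 128 data samples that follow the 32 guard samples.
std::vector<double> remove_gi(const std::vector<double>& rx) {
    std::vector<double> out;
    for (std::size_t s = 0; s + SYM_LEN <= rx.size(); s += SYM_LEN) {
        std::size_t old = out.size();
        out.resize(old + SYM_LEN - GI_LEN);
        // memcpy the data portion, skipping the GI samples at the front.
        std::memcpy(&out[old], &rx[s + GI_LEN],
                    (SYM_LEN - GI_LEN) * sizeof(double));
    }
    return out;
}
```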

Hilbert Transform

FFT

The data from the GI removal block are real values only. Generate the imaginary values yourself using the Hilbert transform algorithm, with FFTW supplying the DFT routines used inside the Hilbert transform.
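The Hilbert-transform trick can be demonstrated without FFTW by substituting a small direct DFT (an O(N²) illustration of the same analytic-signal construction; the project itself uses FFTW's real-to-complex plans for speed):

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Hilbert transform via the frequency domain, as in the receiver: take the
// DFT of the real signal, double the positive frequencies, zero the negative
// ones, inverse-transform, and keep the imaginary part.
std::vector<double> hilbert_imag(const std::vector<double>& x) {
    const std::size_t N = x.size();
    const double PI = std::acos(-1.0);
    const std::complex<double> j(0.0, 1.0);
    std::vector<std::complex<double>> X(N);
    for (std::size_t k = 0; k < N; ++k)              // forward DFT
        for (std::size_t n = 0; n < N; ++n)
            X[k] += x[n] * std::exp(-j * (2.0 * PI * k * n / N));
    for (std::size_t k = 1; k < N / 2; ++k) X[k] *= 2.0;     // positive bins
    for (std::size_t k = N / 2 + 1; k < N; ++k) X[k] = 0.0;  // negative bins
    std::vector<double> h(N);
    for (std::size_t n = 0; n < N; ++n) {            // inverse DFT, imag part
        std::complex<double> acc(0.0, 0.0);
        for (std::size_t k = 0; k < N; ++k)
            acc += X[k] * std::exp(j * (2.0 * PI * k * n / N));
        h[n] = acc.imag() / static_cast<double>(N);
    }
    return h;
}
```

For a pure cosine input the imaginary output is the corresponding sine, i.e. the 90-degree phase-shifted copy that turns the real samples into an analytic signal.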

Once both real and imaginary values are available, use the FFT to compute the DFT of the input data. Use FFTW functions such as fftw_malloc( ) and fftw_execute( ), and structures such as fftw_complex and fftw_plan, to fill in the variables encountered.


Demap

Decompress

After the FFT, demap the data using QPSK demodulation. This demapping is the opposite of the mapping performed at the transmitter. Demapping the FFT data produces 15 bytes, i.e. the compressed data that was present at the input of the IFFT block at the transmitter.
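The demapping is the bit-for-bit inverse of the transmitter's mapper: each carrier contributes two bits, taken from the signs of its real and imaginary parts. A self-contained sketch of the mapper and its inverse, mirroring the bit order used in the transmitter listing (a set bit maps to -1, a clear bit to +1):

```cpp
#include <complex>
#include <vector>

// Bit masks for the (real, imag) parts of carriers 0..3, in the order
// used by the transmitter's ProcessHeader() mapping loop.
static const unsigned char QPSK_MASK[4][2] =
    {{0x40, 0x80}, {0x10, 0x20}, {0x04, 0x08}, {0x01, 0x02}};

// Map one byte onto four QPSK carriers exactly as the transmitter does.
std::vector<std::complex<double>> qpsk_map(unsigned char b) {
    std::vector<std::complex<double>> s(4);
    for (int c = 0; c < 4; ++c)
        s[c] = std::complex<double>((b & QPSK_MASK[c][0]) ? -1.0 : 1.0,
                                    (b & QPSK_MASK[c][1]) ? -1.0 : 1.0);
    return s;
}

// Demap four carriers back to one byte by looking only at the signs,
// which makes the decision robust to amplitude noise.
unsigned char qpsk_demap(const std::vector<std::complex<double>>& s) {
    unsigned char b = 0;
    for (int c = 0; c < 4; ++c) {
        if (s[c].real() < 0) b |= QPSK_MASK[c][0];
        if (s[c].imag() < 0) b |= QPSK_MASK[c][1];
    }
    return b;
}
```

Fifteen bytes thus occupy 60 carriers, which is exactly the number of data carriers listed in the project parameters.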

Using the SPEEX codec, decompress these 15 bytes: use the narrowband decoder state, set the enhancement mode ON, and decode the 15 bytes to produce 320 bytes of data, which corresponds to 20 msec of playback time.

2nd Sound card for playback

Fill the structures WAVEFORMATEX, WAVEHDR. Call the function waveOutOpen( ) to open the device for playing. Prepare the header for playback and then write the output to the sound card using waveOutWrite( ) function.


Part 4, Chapter 7

The Shortcomings of the Project


Now we compare the OFDM transceiver system we implemented in C++ with the real-world OFDM system. The basic ideas are the same for both systems, and so are the basic blocks, but our implementation lacks a few of the blocks that form a genuine part of real-world OFDM systems. We describe these parts so that anybody interested in lifting the project to new dimensions can incorporate them duly, because the basic steps are well documented in the previous chapters.

7.1 The Basic Parameters of Our Project:


Number of Data Carriers used = 60
SPEEX compression ratio = 64 : 3
Modulation = QPSK
FFT and IFFT size = 128
Subcarrier Frequency Spacing = 62.5 Hz
FFT period (useful symbol period) = 16 msec
Range of Frequencies = 187.5 Hz to 3937.5 Hz
Transmission Data Rate = 6 kbps
Guard Interval to Symbol ratio = 1 : 5
Guard Duration = 4 msec
Symbol Time = 20 msec
Pilot data type = comb
Signal detection scheme = energy based
Number of sound cards utilized = 4
Speech input sampling rate (at transmitter) = 8000 Hz
Speech output sampling rate (at receiver) = 8000 Hz
Sampling rate of the transmitter DAC and the receiver ADC = 8000 Hz
Speech capturing period = 20 msec
Time of transmission = 10 sec
Link = wired and dedicated (for all the time)
Data on the link = real data only
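The timing figures follow directly from the primitive parameters; the short computation below derives them (at 8000 Hz a 128-point FFT spans 16 msec, which is what the 62.5 Hz subcarrier spacing implies):

```cpp
// Derive the project's OFDM timing figures from its primitive parameters.
const double FS   = 8000.0;   // DAC/ADC sampling rate, Hz
const int    NFFT = 128;      // FFT/IFFT size
const int    GI   = 32;       // guard interval, samples
const int    BYTES_PER_FRAME = 15;  // SPEEX output per speech frame

const double spacing_hz   = FS / NFFT;                 // subcarrier spacing
const double fft_ms       = 1000.0 * NFFT / FS;        // useful (FFT) period
const double guard_ms     = 1000.0 * GI / FS;          // guard duration
const double symbol_ms    = fft_ms + guard_ms;         // full symbol time
const double datarate_bps = BYTES_PER_FRAME * 8 * 1000.0 / symbol_ms;
```

One symbol (160 samples) thus occupies exactly one 20 msec speech frame, and 15 bytes per symbol gives the 6 kbps transmission rate.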

7.2 Parameters of Real OFDM:


OFDM is used in wireless LAN modems, most notably in the IEEE 802.11a standard. A summary of the parameters is given below.

Size of FFT = 64
Number of Data Carriers used = 48
Number of Pilots = 4
Data Rate = 6 Mbps to 54 Mbps
Modulation = BPSK, QPSK, 16 QAM or 64 QAM
Subcarrier Frequency Spacing = 312.5 kHz
FFT period = 3.2 µsec
Guard Interval to Symbol ratio = 1 : 5
Guard Duration = 0.8 µsec
Symbol Time = 4 µsec


7.3 The Transmitter:


The following points describe the shortcomings (and foreseen improvements) of our transmitter system.

- The sampling rate of the input sound card used for capturing speech can be increased in order to represent the speech signal more faithfully.
- No error detection and correction coding was involved. One can incorporate coding, then introduce deep fades intentionally and check the validity and fidelity of the received signal.
- We used the QPSK constellation. It can be extended to 8 PSK or 16 QAM to increase the transmission data rate. Using different modulation techniques, one can also plot bit error rate (BER) versus signal-to-noise ratio (SNR).
- The scheme presented in these chapters can be implemented at the wireless level: two antennas, communicating using air as the medium, can replace the wired dedicated link.
- An FPGA can be used to implement many of the functions. For example, the PC can handle sound capture and analog-to-digital conversion, the parallel port can feed the FPGA with this data, and the FPGA can implement signal mapping, FFT/IFFT and GI insertion, with its output ports tied to an antenna.
- Interleaving can be used for frequency diversity.
- No redundant data was sent along with the OFDM frame. Redundant bits could be used for error correction.
- A networking device (e.g. a switch) can be brought into the project, and multiplexing should be given a thought.
- Channel equalization and adaptive equalization can be implemented.
- Instead of the comb-type pilot arrangement that we used, a block-type pilot scheme can be implemented.

7.4 The Receiver:


The discussion below describes the improvements that can be incorporated in the C++ OFDM receiver system.

- Signal detection can be done using the double sliding window method instead of the single sliding window method.
- The sampling clock of the receiver ADC can be synchronized with the sampling clock of the transmitter DAC using the methods presented in the previous chapters.
- Frequency synchronization can also be carried out using the schemes indicated in the previous text.
- As described for the transmitter, an FPGA can be used for various purposes in this project: receiving data, GI removal, FFT, signal demapping, etc.
- DSP filters can be used to suppress the noise.
- Channel estimation using the pilots can also be done.

Part 5, Chapter 8

Visual C++ Implementation of Transmitter


The following files show the implementation of an OFDM transmitter in Visual C++. Only the header and the .cpp files are shown.

8.1 Implementation of Sound RecordingDlg.h


// Sound RecordingDlg.h : header file // #if !defined(AFX_SOUNDRECORDINGDLG_H__DD2FC55A_1F0C_4B2B_9203_7AF9147DAEE3__INCLUDED_) #define AFX_SOUNDRECORDINGDLG_H__DD2FC55A_1F0C_4B2B_9203_7AF9147DAEE3__INCLUDED_ #if _MSC_VER > 1000 #pragma once #endif // _MSC_VER > 1000 #include "windows.h" #include <MMSystem.h> #include <speex/speex.h> #include <fftw3.h> #include <cstring> #define NUM_BUF 500 #define BUF_SIZE 320 #define FRAME_SIZE 160 ///////////////////////////////////////////////////////////////////////////// // CSoundRecordingDlg dialog class CSoundRecordingDlg : public CDialog { // Construction public: int m_iPilotPhase; void ProcessHeader(LPWAVEHDR pHdr); CSoundRecordingDlg(CWnd* pParent = NULL); // Dialog Data //{{AFX_DATA(CSoundRecordingDlg) enum { IDD = IDD_SOUNDRECORDING_DIALOG }; CProgressCtrl m_cProgress; CString m_sStatus; CString m_sDevices; int m_iCount; CString m_sSampFreq; CString m_sNumBuf; CString m_sBufSize; CString m_sFileName; CString m_sPathName; //}}AFX_DATA // ClassWizard generated virtual function overrides //{{AFX_VIRTUAL(CSoundRecordingDlg) protected: virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support //}}AFX_VIRTUAL // Implementation protected: HICON m_hIcon; // Generated message map functions

// standard constructor


//{{AFX_MSG(CSoundRecordingDlg) virtual BOOL OnInitDialog(); afx_msg void OnPaint(); afx_msg HCURSOR OnQueryDragIcon(); afx_msg void OnPrepbuff(); afx_msg void OnClrbuff(); afx_msg void OnStart(); afx_msg void OnB2browse(); virtual void OnCancel(); //}}AFX_MSG DECLARE_MESSAGE_MAP() private: //1st Sound Card Variables WAVEINCAPS InCaps; MMRESULT Result; UINT InDeviceID; HWAVEIN InHandle; WAVEFORMATEX InWaveFormat; WAVEHDR InWaveHeader[NUM_BUF]; void InErrorCheck(MMRESULT Result); //ifft() Variables fftw_plan plan_backward; fftw_complex* outData; fftw_complex* inData; //SPEEX Variables float input[FRAME_SIZE]; char cbits[200]; int nbBytes; /*Holds the state of the encoder*/ void *state; /*Holds bits so they can be read and written to by the Speex routines*/ SpeexBits bits; FILE * fp; double giData[160][2]; }; //{{AFX_INSERT_LOCATION}} // Microsoft Visual C++ will insert additional declarations immediately before the previous line. #endif // !defined(AFX_SOUNDRECORDINGDLG_H__DD2FC55A_1F0C_4B2B_9203_7AF9147DAEE3__INCLUDED_)


8.2 Implementation of Sound RecordingDlg.cpp


// Sound RecordingDlg.cpp : implementation file // #include "stdafx.h" #include "windows.h" #include "Sound Recording.h" #include "Sound RecordingDlg.h" #pragma comment(lib,"winmm.lib") #ifdef _DEBUG #define new DEBUG_NEW #undef THIS_FILE static char THIS_FILE[] = __FILE__; #endif void CALLBACK waveInProc(HWAVEIN hwi,UINT uMsg,DWORD dwInstance,DWORD dwParam1,DWORD dwParam2); ///////////////////////////////////////////////////////////////////////////// // CSoundRecordingDlg dialog CSoundRecordingDlg::CSoundRecordingDlg(CWnd* pParent /*=NULL*/) : CDialog(CSoundRecordingDlg::IDD, pParent) { //{{AFX_DATA_INIT(CSoundRecordingDlg) m_sStatus = _T(""); m_sDevices = _T(""); m_iCount = 0; m_sSampFreq = _T(""); m_sNumBuf = _T(""); m_sBufSize = _T(""); m_sFileName = _T(""); m_sPathName = _T(""); //}}AFX_DATA_INIT // Note that LoadIcon does not require a subsequent DestroyIcon in Win32 m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME); fp=fopen("TxOut","wb"); m_iCount=0; inData=(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*128); outData=(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*128); plan_backward=fftw_plan_dft_1d(128,inData,outData,FFTW_BACKWARD,FFTW_MEASURE); } //**************************************************************************************************************** void CSoundRecordingDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX); //{{AFX_DATA_MAP(CSoundRecordingDlg) DDX_Control(pDX, IDC_PROGRESS, m_cProgress); DDX_Text(pDX, IDC_STATUS, m_sStatus); DDX_Text(pDX, IDC_DEVICENAME, m_sDevices); DDX_Text(pDX, IDC_COUNT, m_iCount); DDX_Text(pDX, IDC_SAMPFREQ, m_sSampFreq); DDX_Text(pDX, IDC_NUMBUF, m_sNumBuf); DDX_Text(pDX, IDC_BUFSIZE, m_sBufSize); DDX_Text(pDX, IDC_FILENAME, m_sFileName);


DDX_Text(pDX, IDC_PATHNAME, m_sPathName); //}}AFX_DATA_MAP } BEGIN_MESSAGE_MAP(CSoundRecordingDlg, CDialog) //{{AFX_MSG_MAP(CSoundRecordingDlg) ON_WM_PAINT() ON_WM_QUERYDRAGICON() ON_BN_CLICKED(IDC_PREPBUFF, OnPrepbuff) ON_BN_CLICKED(IDC_CLRBUFF, OnClrbuff) ON_BN_CLICKED(IDC_START, OnStart) ON_BN_CLICKED(IDC_B2BROWSE, OnB2browse) //}}AFX_MSG_MAP END_MESSAGE_MAP() ///////////////////////////////////////////////////////////////////////////// // CSoundRecordingDlg message handlers BOOL CSoundRecordingDlg::OnInitDialog() { CDialog::OnInitDialog(); // Set the icon for this dialog. The framework does this automatically // when the application's main window is not a dialog SetIcon(m_hIcon, TRUE); // Set big icon SetIcon(m_hIcon, FALSE); // Set small icon // TODO: Add extra initialization here m_sStatus="Expecting to Prepare the Buffers..."; GetDlgItem(IDC_CLRBUFF)->EnableWindow(FALSE); GetDlgItem(IDC_START)->EnableWindow(FALSE); GetDlgItem(IDC_B2BROWSE)->EnableWindow(FALSE); GetDlgItem(IDC_PREPBUFF)->EnableWindow(TRUE); GetDlgItem(IDC_PROGRESS)->ShowWindow(FALSE); GetDlgItem(IDC_SAVEDAS)->ShowWindow(TRUE); GetDlgItem(IDC_SAVEDAT)->ShowWindow(TRUE); GetDlgItem(IDC_FILENAME)->ShowWindow(TRUE); GetDlgItem(IDC_PATHNAME)->ShowWindow(TRUE); waveInGetDevCaps(0,&InCaps,sizeof(InCaps)); m_sDevices=InCaps.szPname; m_sSampFreq="8000"; m_sBufSize="320"; m_sNumBuf="500"; m_sFileName="TxOut"; m_sPathName="Project Folder"; m_iPilotPhase=10; m_cProgress.SetRange(0,NUM_BUF); m_cProgress.SetStep(1); m_cProgress.SetPos(0); ZeroMemory(&InWaveFormat,sizeof(WAVEFORMATEX)); ZeroMemory(InWaveHeader,NUM_BUF*sizeof(WAVEHDR)); /*Create a new encoder state in narrowband mode*/ state = speex_encoder_init(&speex_nb_mode); /*Set the quality to 8 (15 kbps)*/ int tmp=2; speex_encoder_ctl(state, SPEEX_SET_QUALITY, &tmp); speex_bits_init(&bits);


UpdateData(FALSE); return TRUE; // return TRUE unless you set the focus to a control } // If you add a minimize button to your dialog, you will need the code below // to draw the icon. For MFC applications using the document/view model, // this is automatically done for you by the framework. //************************************************************************************************************* void CSoundRecordingDlg::OnPaint() { if (IsIconic()) { CPaintDC dc(this); // device context for painting SendMessage(WM_ICONERASEBKGND, (WPARAM) dc.GetSafeHdc(), 0); // Center icon in client rectangle int cxIcon = GetSystemMetrics(SM_CXICON); int cyIcon = GetSystemMetrics(SM_CYICON); CRect rect; GetClientRect(&rect); int x = (rect.Width() - cxIcon + 1) / 2; int y = (rect.Height() - cyIcon + 1) / 2; // Draw the icon dc.DrawIcon(x, y, m_hIcon); } else { CDialog::OnPaint(); } } // The system calls this to obtain the cursor to display while the user drags // the minimized window. HCURSOR CSoundRecordingDlg::OnQueryDragIcon() { return (HCURSOR) m_hIcon; } //**************************************************************************************************************** void CSoundRecordingDlg::OnPrepbuff() { // TODO: Add your control notification handler code here InDeviceID = 0;/*0 for internal and 2 for external sound card*/ InWaveFormat.wFormatTag = WAVE_FORMAT_PCM; InWaveFormat.nChannels = 1; InWaveFormat.nSamplesPerSec = 8000; InWaveFormat.nAvgBytesPerSec = 8000 * 1 * 16/8; InWaveFormat.nBlockAlign = 1 * 16/8; InWaveFormat.wBitsPerSample = 16 ; InWaveFormat.cbSize = sizeof(WAVEFORMATEX); m_iCount=0; Result = waveInOpen (&InHandle,InDeviceID,&InWaveFormat,(DWORD) waveInProc , (DWORD)this, // callback instance data CALLBACK_FUNCTION);


// InErrorCheck(Result); Result=0; /*Headers for the speech signal*/ for (int i = 0; i<NUM_BUF; i++ ) { InWaveHeader[i].lpData =(LPSTR)HeapAlloc(GetProcessHeap(),8,/*InWaveFormat.nAvgBytesPerSec*/BUF_SIZE); InWaveHeader[i].dwBufferLength = /*InWaveFormat.nAvgBytesPerSec*/BUF_SIZE; InWaveHeader[i].dwUser=i; Result = waveInPrepareHeader(InHandle,&InWaveHeader[i], sizeof(WAVEHDR)); if(Result!=MMSYSERR_NOERROR) { AfxMessageBox("Could not prepare Headers"); exit(0); } Result = waveInAddBuffer (InHandle, &InWaveHeader[i], sizeof(WAVEHDR)); InErrorCheck(Result); } m_sStatus="Buffers prepared and added, expect to start Recording..."; GetDlgItem(IDC_PREPBUFF)->EnableWindow(FALSE); GetDlgItem(IDC_B2BROWSE)->EnableWindow(TRUE); GetDlgItem(IDC_START)->EnableWindow(TRUE); GetDlgItem(IDC_CLRBUFF)->EnableWindow(TRUE); UpdateData(FALSE); return; } //*************************************************************************************************************** void CSoundRecordingDlg::InErrorCheck(MMRESULT Result) { if(Result!=MMSYSERR_NOERROR) { if(Result==MMSYSERR_INVALHANDLE) AfxMessageBox("Invalid Handle"); else if(Result==MMSYSERR_NODRIVER) AfxMessageBox("No Driver available"); else if(Result==MMSYSERR_NOMEM) AfxMessageBox("Could not Allocate Memory"); AfxMessageBox("Error Encountered, Program terminated"); exit(0); } return; } //**************************************************************************************************************** void CALLBACK waveInProc(HWAVEIN InHandle,UINT uMsg, DWORD dwInstance, DWORD dwParam1, DWORD dwParam2) { LPWAVEHDR pHdr=NULL; CSoundRecordingDlg *pDlg=(CSoundRecordingDlg*)dwInstance; switch(uMsg) { case WIM_OPEN: break; case WIM_DATA: pDlg->ProcessHeader((LPWAVEHDR)dwParam1); break; case WIM_CLOSE: break;


default: break; } return; } //********************************************************************************************************************** void CSoundRecordingDlg::ProcessHeader(LPWAVEHDR pHdr) { m_cProgress.StepIt(); m_iCount++; if(m_iCount<6) { memset(&giData[0][0],0,2560); for(int i=0;i<160;i++) fwrite(&giData[i][0],8,1,fp); } else if(m_iCount<51) { inData[0][0]=0; inData[10][0]=m_iPilotPhase; inData[20][0]=m_iPilotPhase; inData[29][0]=m_iPilotPhase; fftw_execute(plan_backward); memcpy(&giData[0][0],&outData[96][0],512); memcpy(&giData[32][0],&outData[0][0],2048); for(int i=0;i<160;i++) fwrite(&giData[i][0],8,1,fp); m_iPilotPhase=-m_iPilotPhase; // fwrite(&giData[0][0],8,320,fp); } else { int k=3; short* tmp_array=(short*)pHdr->lpData; /*Copy the 16 bits values to float so Speex can work on them*/ for (int i=0;i<FRAME_SIZE;i++) input[i]=*(tmp_array+i); /*Flush all the bits in the struct so we can encode a new frame*/ speex_bits_reset(&bits); /*Encode the frame*/ speex_encode(state, input, &bits); /*Copy the bits to an array of char that can be written*/ nbBytes = speex_bits_write(&bits, cbits, 200); /*Write the size of the frame first. This is what sampledec expects but its likely to be different in your own application*/ fwrite(&nbBytes, sizeof(int), 1, fp); /*Write the compressed data*/ fwrite(cbits, 1, nbBytes, fp); for(i=0;i<15;i++) { inData[k][0] = *(cbits+i) & 0x40? -1 : 1; inData[k][1] = *(cbits+i) & 0x80? -1 : 1; inData[k+1][0] = *(cbits+i) & 0x10? -1 : 1; inData[k+1][1] = *(cbits+i) & 0x20? -1 : 1; inData[k+2][0] = *(cbits+i) & 0x04? -1 : 1; inData[k+2][1] = *(cbits+i) & 0x08? -1 : 1; inData[k+3][0] = *(cbits+i) & 0x01? -1 : 1; inData[k+3][1] = *(cbits+i) & 0x02? -1 : 1; k=k+4;

// //


// // }

} fftw_execute(plan_backward); memcpy(&giData[0][0],&outData[95][0],512); memcpy(&giData[32][0],&outData[0][0],2048); for(int j=0;j<160;j++) { fwrite(&giData[j][0],8,1,fp); fwrite(&giData[j][1],8,1,fp); } fwrite(&giData[0][0],8,320,fp); Result=waveInAddBuffer(InHandle,pHdr,sizeof(WAVEHDR)); if(m_iCount==NUM_BUF) { GetDlgItem(IDC_START)->SetWindowText("&Start"); OnClrbuff(); Result=waveInStop(InHandle); Result=waveInClose(InHandle); fclose(fp); BOOL res=OnInitDialog(); }

} //***************************************************************************************************************** void CSoundRecordingDlg::OnClrbuff() { // TODO: Add your control notification handler code here for(int i=0;i<NUM_BUF;i++) { Result=waveInUnprepareHeader(InHandle,&InWaveHeader[i],sizeof(WAVEHDR)); HeapFree(GetProcessHeap(),0,InWaveHeader[i].lpData); ZeroMemory(&InWaveHeader[i],sizeof(WAVEHDR)); } BOOL res=OnInitDialog(); return; } //**************************************************************************************************************** void CSoundRecordingDlg::OnStart() { // TODO: Add your control notification handler code here CString m_sProcess; GetDlgItem(IDC_START)->GetWindowText(m_sProcess); if(m_sProcess.Compare("&Start")==0) { m_sStatus="Recording..."; GetDlgItem(IDC_START)->SetWindowText("&Stop"); GetDlgItem(IDC_START)->EnableWindow(TRUE); GetDlgItem(IDC_PROGRESS)->ShowWindow(TRUE); GetDlgItem(IDC_SAVEDAS)->ShowWindow(FALSE); GetDlgItem(IDC_SAVEDAT)->ShowWindow(FALSE); GetDlgItem(IDC_FILENAME)->ShowWindow(FALSE); GetDlgItem(IDC_PATHNAME)->ShowWindow(FALSE); Result=waveInStart(InHandle); } else


{ GetDlgItem(IDC_START)->SetWindowText("&Start"); OnClrbuff(); Result=waveInStop(InHandle); Result=waveInClose(InHandle); fclose(fp); BOOL res=OnInitDialog(); } UpdateData(FALSE); } //***************************************************************************************************************** void CSoundRecordingDlg::OnB2browse() { // TODO: Add your control notification handler code here char fpath[128]={0}; CFileDialog m_ldFile(FALSE,NULL,"TxOut",OFN_OVERWRITEPROMPT,"Text Files|*.txt",NULL); if(m_ldFile.DoModal()==IDOK) { m_sFileName.Empty(); m_sPathName.Empty(); m_sFileName=m_ldFile.GetFileName(); m_sPathName=m_ldFile.GetPathName(); } fclose(fp); strcat(fpath,m_sPathName); fp=fopen(fpath,"wb"); UpdateData(FALSE); } //******************************************************************************************************************* void CSoundRecordingDlg::OnCancel() { // TODO: Add extra cleanup here /* fftw_free(inData); fftw_free(outData); fftw_destroy_plan(plan_backward); speex_encoder_destroy(state); speex_bits_destroy(&bits);*/ CDialog::OnCancel(); }


Chapter 9

Visual C++ Implementation of Receiver


The following files show the implementation of an OFDM receiver in Visual C++. Only the header and the .cpp files are shown.

9.1 Implementation of Sound detect and playDlg.h


// Sound detect and playDlg.h : header file // #if !defined(AFX_SOUNDDETECTANDPLAYDLG_H__1249DBFC_AB8E_4B42_833D_777F11428887__INCLUDED _) #define AFX_SOUNDDETECTANDPLAYDLG_H__1249DBFC_AB8E_4B42_833D_777F11428887__INCLUDED_ #if _MSC_VER > 1000 #pragma once #endif // _MSC_VER > 1000 ///////////////////////////////////////////////////////////////////////////// // CSounddetectandplayDlg dialog #include <windows.h> #include <MMSystem.h> #include <cstring> #include <speex/speex.h> #include <fftw3.h> #define THRESHOLD 5500 #define FRAME_SIZE 160 #define N 128 #define NUM_BUF 450 #define BUF_SIZE 320 class CSounddetectandplayDlg : public CDialog { // Construction public: //2nd Sound Card Variables void writeAudio(); WAVEINCAPS InCaps; WAVEHDR* frag; WAVEFORMATEX pwfxOut; HWAVEOUT wOut; MMRESULT Result; int waveFreeBlockCount; int waveCurrentBlock; //SPEEX Variables int nbBytes; int tmp; SpeexBits bits; void* state; char* cbits; float* output; //Hilbert() Variables void Hilbert(); double* realData; double* imagData; fftw_plan plan_r2c; fftw_plan plan_c2i; fftw_complex* outData; fftw_complex* inData; fftw_complex* anaData; fftw_complex* comData;


int com_size; int* h; //play() Variables void play(); void fft(); void fill_buff(); char* c; short* tmp_array; int num_buf; fftw_plan plan_forward; //Correlation Variables void GI_Correlate(); void correlate(int i); double cArr[63]; double* pilotData; double win1[32],win2[32]; double max; long index; //File Variables FILE * fp; long itr_file; long lsize; //Other Variables double* giData; void SigDetected(); CSounddetectandplayDlg(CWnd* pParent = NULL); // Dialog Data //{{AFX_DATA(CSounddetectandplayDlg) enum { IDD = IDD_SOUNDDETECTANDPLAY_DIALOG }; CProgressCtrl m_cProgress; CString m_sDeviceName; CString m_sStatus; //}}AFX_DATA // ClassWizard generated virtual function overrides //{{AFX_VIRTUAL(CSounddetectandplayDlg) protected: virtual void DoDataExchange(CDataExchange* pDX); // DDX/DDV support //}}AFX_VIRTUAL // Implementation protected: HICON m_hIcon; // Generated message map functions //{{AFX_MSG(CSounddetectandplayDlg) virtual BOOL OnInitDialog(); afx_msg void OnSysCommand(UINT nID, LPARAM lParam); afx_msg void OnPaint(); afx_msg HCURSOR OnQueryDragIcon(); afx_msg void OnDetectplay(); afx_msg void OnExit(); //}}AFX_MSG DECLARE_MESSAGE_MAP() }; // standard constructor


//{{AFX_INSERT_LOCATION}} // Microsoft Visual C++ will insert additional declarations immediately before the previous line. #endif // !defined(AFX_SOUNDDETECTANDPLAYDLG_H__1249DBFC_AB8E_4B42_833D_777F11428887__INCLUDED _)

9.2 Implementation of Sound detect and playDlg.cpp


// Sound detect and playDlg.cpp : implementation file // #include "stdafx.h" #include "Sound detect and play.h" #include "Sound detect and playDlg.h" #ifdef _DEBUG #define new DEBUG_NEW #undef THIS_FILE static char THIS_FILE[] = __FILE__; #endif void CALLBACK waveOutProc(HWAVEOUT,UINT,DWORD,DWORD,DWORD); static CRITICAL_SECTION waveCriticalSection; ///////////////////////////////////////////////////////////////////////////// // CAboutDlg dialog used for App About class CAboutDlg : public CDialog { public: CAboutDlg(); // Dialog Data //{{AFX_DATA(CAboutDlg) enum { IDD = IDD_ABOUTBOX }; //}}AFX_DATA // ClassWizard generated virtual function overrides //{{AFX_VIRTUAL(CAboutDlg) protected: virtual void DoDataExchange(CDataExchange* pDX); //}}AFX_VIRTUAL // Implementation protected: //{{AFX_MSG(CAboutDlg) //}}AFX_MSG DECLARE_MESSAGE_MAP() }; CAboutDlg::CAboutDlg() : CDialog(CAboutDlg::IDD) { //{{AFX_DATA_INIT(CAboutDlg) //}}AFX_DATA_INIT } void CAboutDlg::DoDataExchange(CDataExchange* pDX) { CDialog::DoDataExchange(pDX);

// DDX/DDV support


//{{AFX_DATA_MAP(CAboutDlg) //}}AFX_DATA_MAP } BEGIN_MESSAGE_MAP(CAboutDlg, CDialog) //{{AFX_MSG_MAP(CAboutDlg) // No message handlers //}}AFX_MSG_MAP END_MESSAGE_MAP() ///////////////////////////////////////////////////////////////////////////// // CSounddetectandplayDlg dialog CSounddetectandplayDlg::CSounddetectandplayDlg(CWnd* pParent /*=NULL*/) : CDialog(CSounddetectandplayDlg::IDD, pParent) { //{{AFX_DATA_INIT(CSounddetectandplayDlg) m_sDeviceName = _T(""); m_sStatus = _T(""); //}}AFX_DATA_INIT // Note that LoadIcon does not require a subsequent DestroyIcon in Win32 m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME); giData=new double[160]; h=new int[N]; realData=(double*)fftw_malloc(sizeof(double)*N); imagData=(double*)fftw_malloc(sizeof(double)*N); inData=(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*N); outData=(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*N); anaData=(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*N); comData=(fftw_complex*)fftw_malloc(sizeof(fftw_complex)*N); plan_forward=fftw_plan_dft_1d(N,inData,outData,FFTW_FORWARD,FFTW_MEASURE); plan_r2c=fftw_plan_dft_r2c_1d(N,realData,comData,FFTW_MEASURE); plan_c2i=fftw_plan_dft_c2r_1d(N,anaData,imagData,FFTW_MEASURE); /*Set the perceptual enhancement on*/ tmp=1; nbBytes=15; ZeroMemory(&pwfxOut,sizeof(WAVEFORMATEX)); // ZeroMemory(frag,NUM_BUF*sizeof(WAVEHDR)); pwfxOut.wFormatTag =WAVE_FORMAT_PCM; pwfxOut.nChannels =1; pwfxOut.nSamplesPerSec =8000; pwfxOut.nBlockAlign =2; pwfxOut.wBitsPerSample =16; pwfxOut.cbSize =0; pwfxOut.nAvgBytesPerSec =pwfxOut.nSamplesPerSec * pwfxOut.nBlockAlign; MMRESULT rc=waveOutOpen(&wOut,0, &pwfxOut,(DWORD)waveOutProc, (DWORD)waveFreeBlockCount, CALLBACK_FUNCTION); /*0 for internal sound card,2 for external sound card*/ char* buffer; buffer=(LPSTR)HeapAlloc(GetProcessHeap(),8,NUM_BUF*(BUF_SIZE+sizeof(WAVEHDR)));


frag=(WAVEHDR*)buffer; buffer+=sizeof(WAVEHDR)*NUM_BUF; for (int i = 0; i<NUM_BUF; i++ ) { frag[i].lpData =buffer/*(LPSTR)HeapAlloc(GetProcessHeap(),8,BUF_SIZE)*/; frag[i].dwBufferLength =BUF_SIZE; frag[i].dwUser=i; buffer+=BUF_SIZE; } max=0; com_size=N/2+1;

    fp=fopen("TxOut","rb");
    itr_file=0;
}
//****************************************************************
void CSounddetectandplayDlg::DoDataExchange(CDataExchange* pDX)
{
    CDialog::DoDataExchange(pDX);
    //{{AFX_DATA_MAP(CSounddetectandplayDlg)
    DDX_Control(pDX, IDC_PROGRESS, m_cProgress);
    DDX_Text(pDX, IDC_DEVICENAME, m_sDeviceName);
    DDX_Text(pDX, IDC_STATUS, m_sStatus);
    //}}AFX_DATA_MAP
}
//****************************************************************
BEGIN_MESSAGE_MAP(CSounddetectandplayDlg, CDialog)
    //{{AFX_MSG_MAP(CSounddetectandplayDlg)
    ON_WM_SYSCOMMAND()
    ON_WM_PAINT()
    ON_WM_QUERYDRAGICON()
    ON_BN_CLICKED(IDC_DETECTPLAY, OnDetectplay)
    ON_BN_CLICKED(IDC_EXIT, OnExit)
    //}}AFX_MSG_MAP
END_MESSAGE_MAP()

/////////////////////////////////////////////////////////////////////////////
// CSounddetectandplayDlg message handlers

BOOL CSounddetectandplayDlg::OnInitDialog()
{
    CDialog::OnInitDialog();

    // Add "About..." menu item to system menu.
    // IDM_ABOUTBOX must be in the system command range.
    ASSERT((IDM_ABOUTBOX & 0xFFF0) == IDM_ABOUTBOX);
    ASSERT(IDM_ABOUTBOX < 0xF000);

    CMenu* pSysMenu = GetSystemMenu(FALSE);
    if (pSysMenu != NULL)
    {


        CString strAboutMenu;
        strAboutMenu.LoadString(IDS_ABOUTBOX);
        if (!strAboutMenu.IsEmpty())
        {
            pSysMenu->AppendMenu(MF_SEPARATOR);
            pSysMenu->AppendMenu(MF_STRING, IDM_ABOUTBOX, strAboutMenu);
        }
    }

    // Set the icon for this dialog. The framework does this automatically
    // when the application's main window is not a dialog
    SetIcon(m_hIcon, TRUE);   // Set big icon
    SetIcon(m_hIcon, FALSE);  // Set small icon

    // TODO: Add extra initialization here
    /*Create a new decoder state in narrowband mode*/
    state = speex_decoder_init(&speex_nb_mode);
    speex_decoder_ctl(state, SPEEX_SET_ENH, &tmp);
    /*Initialization of the structure that holds the bits*/
    speex_bits_init(&bits);
    GetDlgItem(IDC_PROGRESS)->ShowWindow(FALSE);
    m_cProgress.SetStep(1);
    /*Sign sequence for the analytic signal; the DC and Nyquist
      bins must be zeroed (h was allocated with new, not zeroed)*/
    h[0]=0;
    h[N/2]=0;
    for(int k=1;k<=N/2-1;k++)
        h[k]=-1;
    for(k=N/2+1;k<=N-1;k++)
        h[k]=1;
    waveCurrentBlock=0;
    GetDlgItem(IDC_DETECTPLAY)->EnableWindow(TRUE);
    waveInGetDevCaps(0,&InCaps,sizeof(InCaps));
    m_sDeviceName=InCaps.szPname;
    m_sStatus="Start to Detect and play the Sound";
    UpdateData(FALSE);

    return TRUE;  // return TRUE unless you set the focus to a control
}
//****************************************************************
void CSounddetectandplayDlg::OnSysCommand(UINT nID, LPARAM lParam)
{
    if ((nID & 0xFFF0) == IDM_ABOUTBOX)
    {
        CAboutDlg dlgAbout;
        dlgAbout.DoModal();
    }
    else
    {
        CDialog::OnSysCommand(nID, lParam);
    }
}

// If you add a minimize button to your dialog, you will need the code below
// to draw the icon. For MFC applications using the document/view model,
// this is automatically done for you by the framework.
//****************************************************************
void CSounddetectandplayDlg::OnPaint()


{
    if (IsIconic())
    {
        CPaintDC dc(this); // device context for painting

        SendMessage(WM_ICONERASEBKGND, (WPARAM) dc.GetSafeHdc(), 0);

        // Center icon in client rectangle
        int cxIcon = GetSystemMetrics(SM_CXICON);
        int cyIcon = GetSystemMetrics(SM_CYICON);
        CRect rect;
        GetClientRect(&rect);
        int x = (rect.Width() - cxIcon + 1) / 2;
        int y = (rect.Height() - cyIcon + 1) / 2;

        // Draw the icon
        dc.DrawIcon(x, y, m_hIcon);
    }
    else
    {
        CDialog::OnPaint();
    }
}

// The system calls this to obtain the cursor to display while the user drags
// the minimized window.
HCURSOR CSounddetectandplayDlg::OnQueryDragIcon()
{
    return (HCURSOR) m_hIcon;
}
//***************************************************************
void CSounddetectandplayDlg::OnDetectplay()
{
    // TODO: Add your control notification handler code here
    GetDlgItem(IDC_DETECTPLAY)->EnableWindow(FALSE);
    fseek(fp,0,SEEK_END);
    lsize=ftell(fp);
    rewind(fp);
    m_sStatus="Detecting...";
    UpdateData(FALSE);
    double* d;
    d=new double[32];
    double energy=0;
    itr_file=0;
    int n;
    while(1)
    {
        fseek(fp,itr_file,SEEK_SET);
        fread(d,8,32,fp);
        for(n=0;n<32;n++)
            energy=energy+ *(d+n) * *(d+n);
        if(energy>THRESHOLD)
        {
            SigDetected();
            break;
        }
        else


        {
            itr_file+=8;
            energy=0;
            continue;
        }
    }
    BOOL B=OnInitDialog();
}
//*************************************************************
void CSounddetectandplayDlg::SigDetected()
{
    GI_Correlate();
    play();
}
//*************************************************************
void CSounddetectandplayDlg::GI_Correlate()
{
    fseek(fp,itr_file,SEEK_SET);
    pilotData=new double[320];
    fread(pilotData,8,320,fp);
    for(int i=0;i<160;i++)
    {
        memcpy(win1,&pilotData[i],256);
        memcpy(win2,&pilotData[i+128],256);
        memset(cArr,0,504);
        correlate(i);
    }
    delete [] pilotData;
}
//*************************************************************
void CSounddetectandplayDlg::correlate(int i)
{
    for(int n=0;n<31;n++)
    {
        for(int k=0;k<n+1;k++)
            cArr[n] = win1[k] * win2[31-n+k] + cArr[n];
        if(cArr[n]>max)
        {
            max=cArr[n];
            index=i;
        }
    }
    for(n=0;n<32;n++)
    {
        for(int k=0;k<32-n;k++)
            cArr[n+31] = win1[n+k] * win2[k] + cArr[n+31];
        if(cArr[n+31]>max)
        {
            max=cArr[n+31];
            index=i;
        }
    }


}
//***************************************************************
void CSounddetectandplayDlg::play()
{
    cbits=new char[200];
    output=new float[FRAME_SIZE];
    tmp_array=new short[FRAME_SIZE];
    GetDlgItem(IDC_PROGRESS)->ShowWindow(TRUE);
    c=(char*)tmp_array;
    lsize=lsize-64000;
    fseek(fp,itr_file+index*8+44*160*8,SEEK_SET);
    num_buf=lsize/(160*8);
    long size=num_buf*160;
    int itr_c=0;int count=50;
    InitializeCriticalSection(&waveCriticalSection);
    m_cProgress.SetRange(0,size);
    m_cProgress.SetStep(size/num_buf);
    m_cProgress.SetPos(0);
    GetDlgItem(IDC_PROGRESS)->ShowWindow(TRUE);
    m_sStatus="Playing...";
    UpdateData(FALSE);
    while (itr_c<size)
    {
        fft();
        speex_bits_read_from(&bits, cbits, nbBytes);
        /*Decode the data*/
        speex_decode(state, &bits, output);
        /*Copy from float to short (16 bits) for output*/
        for (int i=0;i<FRAME_SIZE;i++)
            *(tmp_array+i)=output[i];
        writeAudio();
        /*Write the decoded audio to file*/
        /*fwrite(out, sizeof(short), FRAME_SIZE, fout);*/
        itr_c+=160;count++;
    }

}
//***************************************************************
void CSounddetectandplayDlg::fft()
{
    fill_buff();
    fftw_execute(plan_forward);
    int i=3;
    for(int k=0;k<15;k++)
    {
        *(cbits+k)=outData[i][1]  <0 ? *(cbits+k) | 0x80: *(cbits+k) & 0x7f;
        *(cbits+k)=outData[i][0]  <0 ? *(cbits+k) | 0x40: *(cbits+k) & 0xbf;
        *(cbits+k)=outData[i+1][1]<0 ? *(cbits+k) | 0x20: *(cbits+k) & 0xdf;
        *(cbits+k)=outData[i+1][0]<0 ? *(cbits+k) | 0x10: *(cbits+k) & 0xef;
        *(cbits+k)=outData[i+2][1]<0 ? *(cbits+k) | 0x08: *(cbits+k) & 0xf7;
        *(cbits+k)=outData[i+2][0]<0 ? *(cbits+k) | 0x04: *(cbits+k) & 0xfb;
        *(cbits+k)=outData[i+3][1]<0 ? *(cbits+k) | 0x02: *(cbits+k) & 0xfd;
        *(cbits+k)=outData[i+3][0]<0 ? *(cbits+k) | 0x01: *(cbits+k) & 0xfe;
        i=i+4;
    }
    return;
}
//***************************************************************
void CSounddetectandplayDlg::fill_buff()
{
    fread(giData,8,160,fp);
//  memcpy(realData,giData[32],1024);
    for(int i=0;i<N;i++)
        inData[i][0]=realData[i]=giData[32+i];
    Hilbert();
    return;
}
//**************************************************************
void CSounddetectandplayDlg::Hilbert()
{
    fftw_execute(plan_r2c);
    for(int b=N/2+1;b<N;b++)
    {
        comData[b][0]=comData[N-b][0];
        comData[b][1]=-comData[N-b][1];
    }
    for(int k=0;k<N;k++)
    {
        anaData[k][0]=-h[k]*comData[k][1];
        anaData[k][1]=h[k]*comData[k][0];
    }
    fftw_execute(plan_c2i);
    for(int i=0;i<N;i++)
        inData[i][1]=imagData[i]/N;
    return;
}
//***************************************************************
void CSounddetectandplayDlg::writeAudio()
{
    WAVEHDR* current;
//  c=(char*)tmp_array;
    current = &frag[waveCurrentBlock];
    /*
     * first make sure the header we're going to use is unprepared
     */
/*  if(current->dwFlags & WHDR_PREPARED)
        waveOutUnprepareHeader(wOut, current, sizeof(WAVEHDR));*/
//  memcpy(current->lpData, c, BUF_SIZE);
    current->lpData=c;
    m_cProgress.StepIt();


    waveOutPrepareHeader(wOut, current, sizeof(WAVEHDR));
    waveOutWrite(wOut, current, sizeof(WAVEHDR));
    Sleep(15);
    /*
     * point to the next block
     */
    waveCurrentBlock++;
    waveCurrentBlock %= NUM_BUF;
}
//***************************************************************
void CALLBACK waveOutProc(HWAVEOUT wOut, UINT uMsg, DWORD dwInstance,
                          DWORD dwParam1, DWORD dwParam2)
{
    int* freeBlockCounter = (int*)dwInstance;
    /*
     * ignore calls that occur due to opening and closing the
     * device.
     */
    if(uMsg != WOM_DONE)
        return;
    EnterCriticalSection(&waveCriticalSection);
    (*freeBlockCounter)++;
    LeaveCriticalSection(&waveCriticalSection);
}
//***************************************************************
void CSounddetectandplayDlg::OnExit()
{
    // TODO: Add your control notification handler code here
    DeleteCriticalSection(&waveCriticalSection);
    delete [] giData;
    delete [] h;
    delete [] cbits;
    delete [] output;
    delete [] tmp_array;
    for(int i=0;i<NUM_BUF;i++)
    {
        if(frag[i].dwFlags & WHDR_PREPARED)
            Result=waveOutUnprepareHeader(wOut,&frag[i],sizeof(WAVEHDR));
//      HeapFree(GetProcessHeap(),0,frag[i].lpData);
//      ZeroMemory(&frag[i],sizeof(WAVEHDR));
    }
//  HeapFree(GetProcessHeap(),0,frag);
    fftw_free(realData);
    fftw_free(imagData);
    fftw_free(inData);
    fftw_free(outData);
    fftw_free(anaData);
    fftw_free(comData);
    fftw_destroy_plan(plan_forward);
    fftw_destroy_plan(plan_r2c);
    fftw_destroy_plan(plan_c2i);
    /*Destroy the decoder state*/
    speex_decoder_destroy(state);
    /*Destroy the bit-stream struct*/


    speex_bits_destroy(&bits);
    waveOutClose(wOut);
    if(fp)
    {
        fclose(fp);
        fp=NULL;
    }
    OnOK();
}


Appendix A

FFTW
Introduction:
FFTW (the "Fastest Fourier Transform in the West") is a set of C subroutines used to calculate the Discrete Fourier Transform (DFT) of a given set of data.


FFTW does not use a fixed, predefined routine to compute the DFT. Instead, it adapts the computation to the underlying hardware of the system in order to maximize performance. FFTW follows two steps to generate the DFT:

First, FFTW searches for and learns the best way to compute the DFT of the given data size on your machine. It stores the details of that method in a structure called a plan. This structure contains the information about the most efficient approach, which is then used throughout the rest of the program.

Then, whenever it is called, FFTW executes the DFT of the given data set, as many times as required, using the information stored in the plan structure.

One may wonder what happens if a user needs to calculate the DFT of the data only once; the time spent searching for the best method is then wasted. FFTW addresses this problem with fast planners, where some computational efficiency of the DFT is traded for a faster estimation of the best method. Another approach is to store previously computed plans on disk and load them when they are required by a program.

FFTW is organized into three levels of interface. The basic interface, used by most users, computes a single transform of contiguous data. The advanced interface computes the DFT of multiple arrays. The guru interface is used when the other interfaces are not sufficient for the user's application and an extension to the current FFTW interface is needed.

The Usage of FFTW:


Almost all programs that incorporate FFTW have the following structure:

#include <fftw3.h>
...
{
    fftw_complex *in, *out;
    fftw_plan p;
    ...
    in  = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
    out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * N);
    p = fftw_plan_dft_1d(N, in, out, FFTW_FORWARD, FFTW_ESTIMATE);
    ...
    fftw_execute(p); /* repeat as needed */
    ...
    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
}


It should be noted that every program that uses FFTW must be linked with the FFTW library. First we allocate the input and output arrays. You can allocate them in any way that you like, but it is recommended to use fftw_malloc, which behaves like malloc except that it properly aligns the array when SIMD instructions (such as SSE and AltiVec) are available. The data is an array of type fftw_complex, which is by default a double[2] composed of the real (in[i][0]) and imaginary (in[i][1]) parts of a complex number.

The next step is to create a plan, which is an object that contains all the data that FFTW needs to compute the FFT. This function creates the plan:

fftw_plan fftw_plan_dft_1d(int n, fftw_complex *in, fftw_complex *out, int sign, unsigned flags);

The first argument, n, is the size of the transform you are trying to compute. The size n can be any positive integer, but sizes that are products of small factors are transformed most efficiently (although prime sizes still use an O(n log n) algorithm). The next two arguments are pointers to the input and output arrays of the transform. These pointers can be equal, indicating an in-place transform. The fourth argument, sign, can be either FFTW_FORWARD (-1) or FFTW_BACKWARD (+1), and indicates the direction of the transform you are interested in; technically, it is the sign of the exponent in the transform.

The flags argument is usually either FFTW_MEASURE or FFTW_ESTIMATE. FFTW_MEASURE instructs FFTW to run and measure the execution time of several FFTs in order to find the best way to compute the transform of size n. This process takes some time (usually a few seconds), depending on your machine and on the size of the transform. FFTW_ESTIMATE, on the contrary, does not run any computation and just builds a reasonable plan that is probably sub-optimal. In short, if your program performs many transforms of the same size and initialization time is not important, use FFTW_MEASURE; otherwise use the estimate.

The data in the in/out arrays is overwritten during FFTW_MEASURE planning, so such planning should be done before the input is initialized by the user. Once the plan has been created, we can use it as many times as we like for transforms on the specified in/out arrays, computing the actual transforms via fftw_execute(plan):

void fftw_execute(const fftw_plan plan);

If you want to transform a different array of the same size, you can create a new plan with fftw_plan_dft_1d, and FFTW automatically reuses the information from the previous plan, if possible. (Alternatively, with the guru interface, a given plan can be applied to a different array.) When you are done with the plan, you deallocate it by calling fftw_destroy_plan(plan):

void fftw_destroy_plan(fftw_plan plan);


Arrays allocated with fftw_malloc should be deallocated by fftw_free rather than the ordinary free (or delete). The DFT results are stored in-order in the array out, with the zero-frequency (DC) component in out[0]. If in != out, the transform is out-of-place and the input array in is not modified. Otherwise, the input array is overwritten with the transform.

Users should note that FFTW computes an unnormalized DFT. Thus, computing a forward followed by a backward transform (or vice versa) results in the original array scaled by n.

Single- and long-double-precision versions of FFTW may be installed; to use them, replace the fftw_ prefix by fftwf_ or fftwl_ and link with -lfftw3f or -lfftw3l, but use the same <fftw3.h> header file. Many more flags exist besides FFTW_MEASURE and FFTW_ESTIMATE. For example, we can use FFTW_PATIENT if we are willing to wait even longer for a possibly even faster plan. We can also save plans for future use.

Multi-Dimensional Transforms:
Multi-dimensional transforms work much the same way as one-dimensional transforms: we allocate arrays of fftw_complex (preferably using fftw_malloc), create an fftw_plan, execute it as many times as we want with fftw_execute(plan), and clean up with fftw_destroy_plan(plan) (and fftw_free). The only difference is the routine that is used to create the plan:

fftw_plan fftw_plan_dft_2d(int nx, int ny, fftw_complex *in, fftw_complex *out, int sign, unsigned flags);
fftw_plan fftw_plan_dft_3d(int nx, int ny, int nz, fftw_complex *in, fftw_complex *out, int sign, unsigned flags);
fftw_plan fftw_plan_dft(int rank, const int *n, fftw_complex *in, fftw_complex *out, int sign, unsigned flags);

Real Data Transform:

In many practical applications, the input data in[i] are purely real numbers, in which case the DFT output satisfies the Hermitian redundancy: out[i] is the conjugate of out[n-i]. It is possible to take advantage of these circumstances to achieve roughly a factor-of-two improvement in both speed and memory usage:

fftw_plan fftw_plan_dft_r2c_1d(int n, double *in, fftw_complex *out, unsigned flags);
fftw_plan fftw_plan_dft_c2r_1d(int n, fftw_complex *in, double *out, unsigned flags);

Note that r2c transforms are always FFTW_FORWARD and c2r transforms are always FFTW_BACKWARD.

Appendix B

SPEEX
Introduction:
SPEEX is an Open Source/Free Software patent-free audio compression format designed for speech. The SPEEX Project aims to lower the barrier of entry for voice applications by providing a free alternative to expensive proprietary speech codecs. Moreover, SPEEX is well-adapted to Internet applications and provides useful features that are not present in most other codecs.


The Technology
SPEEX is based on CELP and is designed to compress voice at bitrates ranging from 2 to 44 kbps. Some of SPEEX's features include:
- Narrowband (8 kHz), wideband (16 kHz), and ultra-wideband (32 kHz) compression in the same bitstream
- Intensity stereo encoding
- Packet loss concealment
- Variable bitrate operation (VBR)
- Voice Activity Detection (VAD)
- Discontinuous Transmission (DTX)
- In-progress fixed-point port
- Acoustic echo canceller
Note that SPEEX has a number of features, such as intensity stereo encoding, that are not found in other codecs.

The libspeex API (Programming with SPEEX):


The SPEEX codec operates on packets of 20 ms of data. This means that each frame generated by the SPEEX codec spans 20 ms of the speech signal.

Encoder:
Programming the SPEEX codec is relatively easy. Every program incorporating the SPEEX codec must include the header file:

#include <speex/speex.h>

The speex.h header file is found in the speex folder of the distribution downloaded from the speex.org site. Next, we must declare a structure to hold the compressed (or decompressed) data. This structure is declared as follows:

SpeexBits bits;

The SPEEX encoder state is created with:

void *enc_state;
speex_bits_init(&bits);
enc_state = speex_encoder_init(&speex_nb_mode);

The speex_nb_mode argument specifies that encoding will be carried out in narrowband mode (8 kHz). For wideband mode, replace speex_nb_mode with speex_wb_mode; this tells SPEEX to treat the input as wideband (16 kHz). Once all the initialization is done, the following steps are performed for every input packet:

speex_bits_reset(&bits);
speex_encode(enc_state, input_frame, &bits);
nbBytes = speex_bits_write(&bits, byte_ptr, MAX_NB_BYTES);

Here input_frame is a (float*) generated from a (short*) pointing to the beginning of the speech frame, byte_ptr is a (char*) where the encoded frame will be written, MAX_NB_BYTES is the maximum number of bytes that can be written to byte_ptr without causing an overflow, and nbBytes is the actual number of bytes written to byte_ptr (the encoded size in bytes).


After you're done with the encoding, free all resources with:

speex_bits_destroy(&bits);
speex_encoder_destroy(enc_state);

Decoder:
In order to decode speech using SPEEX, you first need to:

#include <speex/speex.h>

You also need to declare a SPEEX bit-packing struct

SpeexBits bits;

and a SPEEX decoder state

void *dec_state;

The two are initialized by:

speex_bits_init(&bits);
dec_state = speex_decoder_init(&speex_nb_mode);

For wideband decoding, speex_nb_mode is replaced by speex_wb_mode. If you need to obtain the size of the frames that will be used by the decoder, you can get that value in the frame_size variable with:

speex_decoder_ctl(dec_state, SPEEX_GET_FRAME_SIZE, &frame_size);

Again, once the decoder initialization is done, the following is performed for every input frame:

speex_bits_read_from(&bits, input_bytes, nbBytes);
speex_decode(dec_state, &bits, output_frame);

where input_bytes is a (char *) containing the bit-stream data received for a frame, nbBytes is the size (in bytes) of that bit-stream, and output_frame is a (float *) pointing to the area where the decoded speech frame will be written. Passing NULL instead of the bits argument indicates that we don't have the bits for the current frame; when a frame is lost, the SPEEX decoder will do its best to "guess" the correct signal. After you're done with the decoding, free all resources with:

speex_bits_destroy(&bits);
speex_decoder_destroy(dec_state);


Appendix C

The Secret Rabbit Code


Secret Rabbit Code (also known as libsamplerate) is a sample rate converter for audio. One example of where such a thing would be useful is converting audio from the CD sample rate of 44.1 kHz to the 48 kHz sample rate used by DAT players.

Usage:
The simple API consists of a single function:

int src_simple (SRC_DATA *data, int converter_type, int channels) ;

The use of this function, rather than the more fully featured API, requires the caller to know the total length of the input data beforehand, and that all input and output data can be held in the system's memory at once. It also assumes that there is a single constant ratio between the input and output sample rates. The first parameter to src_simple is a pointer to an SRC_DATA struct, defined as follows:

typedef struct
{   float   *data_in, *data_out ;
    long    input_frames, output_frames ;
    long    input_frames_used, output_frames_gen ;
    int     end_of_input ;
    double  src_ratio ;
} SRC_DATA ;

The fields of this struct which must be filled in by the caller are:
data_in : A pointer to the input data samples.
input_frames : The number of frames of data pointed to by data_in.
data_out : A pointer to the output data samples.
output_frames : Maximum number of frames pointed to by data_out.
src_ratio : Equal to output_sample_rate / input_sample_rate.
When the src_simple function returns, output_frames_gen will be set to the number of output frames generated, and input_frames_used will be set to the number of input frames used to generate that output. The src_simple function returns a non-zero value when an error occurs.


Appendix D

The Hilbert Transform


In the beginning of the 20th century, the German scientist David Hilbert (1862-1943) showed that the function sin(ωt) is the Hilbert transform of cos(ωt). This gave us the π/2 phase-shift operator, which is a basic property of the Hilbert transform.

Introduction:
A real function and its Hilbert transform are related to each other in such a way that together they create a so-called strong analytic signal. The strong analytic signal can be written with an amplitude and a phase, where the derivative of the phase can be identified as the instantaneous frequency. The Fourier transform of the strong analytic signal gives us a one-sided spectrum in the frequency domain. The Hilbert transform of f(t) is given by

    g(t) = (1/π) p.v. ∫ f(τ)/(t − τ) dτ,

where p.v. denotes the Cauchy principal value of the integral. If g(t) is the Hilbert transform of f(t), then we have the inverse relation

    f(t) = −(1/π) p.v. ∫ g(τ)/(t − τ) dτ.

If we create a Fourier series of a function f(t) and change the sine functions to cosine functions and the cosine functions to sine functions, we get the Hilbert transform of f(t). This method gives us an easy way to study the unavoidable truncation errors (from keeping only a finite number of terms in the Fourier series) in the Hilbert transform of a periodic rectangular wave.
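The π/2 shift behind this sine/cosine exchange can be written out for a single Fourier component (a standard identity, stated here for reference):

```latex
\mathcal{H}\{\cos(\omega t)\} = \sin(\omega t), \qquad
\mathcal{H}\{\sin(\omega t)\} = -\cos(\omega t), \qquad \omega > 0,
```

so every component of the series keeps its amplitude and is shifted in phase by −π/2.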

Numerical Computation of the Hilbert Transform:


To numerically compute the Hilbert Transform of N samples, with N odd, let us define a sequence h[n] given by

or in closed notation, we can write h[n] as

If N is even, then h[n] equals

Or in closed form,


The discrete Hilbert Transform of sequence f[n] is defined by the convolution of the form

If we want to adopt the DFT algorithm, we need to compute the following steps.

Here DFT indicates the Discrete Fourier Transform and DHT indicates the Discrete Hilbert Transform. Note that the DFT algorithm can be replaced by the FFT. The convolution algorithm is faster than the DFT algorithm.


References
Books
Juha Heiskala, John Terry, OFDM Wireless LANs: A Theoretical and Practical Guide
Ramjee Prasad, OFDM for Wireless Communications Systems, 2004
Henrik Schulze, Christian Luders, Theory and Applications of OFDM and CDMA: Wideband Wireless Communications, 2005
Hiroshi Harada, Ramjee Prasad, Simulation and Software Radio for Mobile Communication
Chia-Sheng Peng, WLAN 802.11 OFDM Transceiver Algorithm, Architecture and Simulation Results, 2001
L. Hanzo, W.T. Webb, T. Keller, Single- and Multi-carrier Quadrature Amplitude Modulation: Principles and Applications for Personal Communications, Broadcasting and WLAN

Websites
en.wikipedia.org
www.complextoreal.com
www.speex.org
www.fftw.org
www.msdn2.com
www.cplusplus.com

Papers
Basem Nakhal, Tammam Sahli and Ziad Zein, Wireless OFDM-based Real-time Video Streaming
Kai Chuang Chung and Gerald E. Sobelman, FPGA-Based Design of a Pulsed OFDM System
Michael Speth, Stefan A. Fechtel, Gunnar Fock, Heinrich Meyr, Optimum Receiver Design for Wireless Broad-Band Systems Using OFDM
Anibal Luis Intini, Orthogonal Frequency Division Multiplexing for Wireless Networks, Standard IEEE 802.11a
Gavin Yeung, Mineo Takai, Rajive Bagrodia, Detailed OFDM Modeling in Network Simulation of Mobile Ad hoc Networks
Yun Chiu, Dejan Markovic, OFDM Receiver Design
Sinem Coleri, Mustafa Ergen, Channel Estimation Techniques Based on Pilot Arrangements in OFDM Systems
Mathias Johansson, The Hilbert Transform

