
Department of Electrical and Electronic Engineering Imperial College London

EE3-19, EE9AO10, EE9CS7-27, EE9SC5

Real Time Digital Signal Processing

Course homepage:

Section 3 Interrupt Programming and Data Conversion

Paul D. Mitcheson Room 1112, EEE

E3-19 Real Time Digital Signal Processing / PDM v1.7 1/33

1. In the last section

We looked at:
- Different starting points for writing executable code on the TI DSPs:
  o C
  o Linear ASM
  o ASM
- Linear ASM couldn't be directly translated into machine code because it did not take into account out-of-order instruction completion and parallelism
  o An assembly optimiser was needed to create the true ASM, and this was used in the build process before the linker
- The TI C compiler is very good: the effort/performance trade-off pushes us to write in C.
- We looked at an overview of Code Composer Studio (CCS) and briefly reviewed the C language
- Interrupts are useful when writing time-critical programs.
  o But they increase complexity: context must be saved and restored as we begin and finish the interrupt service routine.

2. In this section
We will look at how to write interrupt-driven code in Code Composer Studio:
- Using the peripherals on the DSK (A/D and D/A converter) and the libraries for doing so: the Board and Chip Support Libraries (BSL and CSL)
- How to do context save/restore with interrupt-driven code
- As an example, we will look at an interrupt-driven program to output a sine wave to the line-out on the DSK
Sampling and data conversion:
- Quantisation and periodic sampling
- Aliasing
- A/D and D/A conversion: getting data into and out of the processor
- Nyquist-rate converters and oversampling converters



3. Writing Code for the DSK

Last time we looked at an overview of the C language and the concept of interrupts. In this section we will look at writing C code specifically for the TI 6713 DSK board and in particular at writing interrupt driven code.

3.1. Libraries
During the course of writing code for the DSK, you will make use of certain pre-written libraries. Two of the most important sets of libraries are:
- BSL: Board Support Library
- CSL: Chip Support Library

The BSLs contain high-level routines which support features on the DSK board (but not on the C67x DSP chip itself). The most important features of the BSLs that you will use are the functions which allow you to configure the codec, read from the A/D converter and write to the D/A converter. The BSL function calls make use of the Chip Support Library (CSL). The CSL contains routines which support on-chip peripherals, including the interrupt controller, the DMA controller and the McBSP serial ports. Some of the CSL headers are edma.h and irq.h, which give access to the DMA controller and the IRQ hardware respectively. You will also use some of the standard C libraries that some of you may already be familiar with, including stdio.h and math.h.

A complete list of peripheral support libraries included with the DSK is given below.

C6000 CSL modules:
- CSL: CSL initialization
- CHIP: Specify device
- CACHE: Cache
- DAT: Device-independent data copy/fill
- DMA: Direct memory access
- EDMA: Enhanced direct memory access
- EMIF: External memory interface
- HPI: Host port interface
- IRQ: Interrupt controller
- MCBSP: Multichannel buffered serial port
- PWR: Power down
- STDINC: Standard include
- TIMER: Timer

C6713 DSK BSL modules (additional library headers):
- BOARD SETUP: BSL initialization
- CODEC: Access audio codec
- DIP SWITCH: Read board DIP switches
- FLASH: Program FLASH ROM
- LED: Write LEDs

To use the DSK peripherals you need to do #include <dsk6713.h> and #include <dsk6713_aic23.h>. (The A/D and D/A codec on the board is an aic23 device, hence the library name.)



3.2. Handles: a method of reading from and writing to hardware

A handle is a reference which gives the programmer an abstracted interface to a hardware device (or, at an operating system level, to a file on the file-system). To a programmer, a handle is just a variable which we read from to get data out of a piece of hardware and write to in order to push data into a piece of hardware. A handle is the value returned when we open a device, e.g. codec_handle = open_codec(). Thus, if we are going to read and write from a hardware device (as we will typically do with any embedded system software) we need to be comfortable with the concept of handles. So, the typical use of a handle in C on an embedded system is as follows:

DAC_handle = open_DAC();
write_to_DAC(DAC_handle, value);

Typically, any peripheral we want to write to or read from will simply be accessible at a particular memory location. If we want to write_to_DAC(DAC_handle, value) we could just write to that address, like this:

write_to_DAC(0x35A2, value);

assuming the DAC is accessible on the memory bus at address 0x35A2. Or even (worse) this:

*(volatile int *)0x35A2 = value;

(where the left side of the expression is a dereferenced pointer and means "the value stored at address 0x35A2"; note that the raw address constant must be cast to a pointer type for this to be legal C)

So why did we not just do that?

- Manipulating memory directly can easily lead to errors by writing to the wrong address
- A handle with a name is much nicer to use
So why not just make the handle a named address from a #define?

#define DAC_handle 0x35A2
write_to_DAC(DAC_handle, value);

There are several reasons, but the main ones are:
- A piece of hardware may have several addresses associated with it (e.g. data output, control registers etc.; the DAC has a control register to set the sampling rate, for instance). Direct memory access would thus mean we would need to write the correct sort of data to the correct address. This is messy and makes mistakes very easy.
- Functions written to work with the extra layer of indirection provided by a handle mean that these different hardware addresses associated with a device can all be accessed through one handle.


- In systems running multiple processes, a handle can act as a locking mechanism so that two or more processes cannot corrupt each other by independently changing the settings of a hardware device

Therefore, the routines which use the handle deal with resource allocation and resource access for us and remove the possibility of us corrupting memory by writing to the wrong addresses.

Why do we use handles?

- The abstraction allows board-specific details to be changed without having to rewrite user code: just recompile with an updated library
- It stops the programmer worrying about hardware-specific details and lets him/her write everything to the same handle
- It makes programming the device simpler and safer because the access functions do the work for us
So how do we use a peripheral such as the A/D converter on the DSK board? We need to configure it and then read data from it:
- Write to the control register to set up the device (sample rate etc.)
- Read the converted data from a different address (probably spatially close to the control register address, although not necessarily)

Both would be done through the same handle. In summary, when writing to a peripheral such as the codec, we need to write to several different memory locations:
- Control registers
- Data values

We can do all this with one handle and associated functions without worrying about the actual value of the handle, and as a user we never have to change the value anyway.

3.3. Using BSL and CSL

The general method to use these libraries to talk to a peripheral is to:
- Declare the variables that will be used for reading and writing, plus a variable for the device handle
- Open the peripheral. This reserves the device resource and returns a handle to the device so it can be written to and read from by functions using the handle
- Configure the device. You might want to set the baud rate of a serial port, or the sampling rate of an A/D converter
- Read and write to the peripheral



The functions we have available for reading and writing to the aic23 A/D and D/A codec are:
DSK6713_AIC23_openCodec(): Allocate an identifying handle for an instance of a codec
DSK6713_AIC23_closeCodec(): Release a codec handle
DSK6713_AIC23_config(): Set parameters on codec registers
DSK6713_AIC23_read(): Read 32 bits from the codec data stream
DSK6713_AIC23_write(): Write a 32-bit value to the codec data stream
DSK6713_AIC23_rset(): Set the value of a codec control register
DSK6713_AIC23_rget(): Return the value of a codec register
DSK6713_AIC23_outGain(): Set the codec output gain
DSK6713_AIC23_mute(): Enable/disable the codec mute mode
DSK6713_AIC23_setFreq(): Set the codec sample rate

All of these functions are found in the dsk6713_aic23.h BSL library header file. Let's see what the code looks like to use handles:

/* Declare handle */
DSK6713_AIC23_CodecHandle H_Codec;

/* Configuration structure - see later */
DSK6713_AIC23_Config Config = { \
    0x0017, ... };

void main(){
    DSK6713_init();                                  /* Set hardware to default settings */
    H_Codec = DSK6713_AIC23_openCodec(0, &Config);   /* Open and configure the resource  */
    DSK6713_AIC23_write(H_Codec, ((Int32)(sample))); /* Write a value using the handle   */
}

Notice that:
- The variable Config is of type DSK6713_AIC23_Config. This type is defined in the BSL library header. (The A/D and D/A codec is an aic23.)
- The address of the configuration variable (&Config) is passed to the openCodec function, rather than the values in the Config structure. When you pass in the value of a variable to a function we say it is a call by value. When you pass in the address of the value we say this is a call by reference.

Why?
- A call by value copies the variable onto the stack, and a large structure can lead to a stack overflow, especially on an embedded processor
- Therefore it is advisable not to pass a structure to a function in C; you should pass the address of that structure instead, which is more efficient
- The dereferencing of the pointer must occur within the DSK6713_AIC23_openCodec function. (A pointer is the value of a memory address. A dereferenced pointer gives the value stored at the address pointed to by the pointer.)
At this point we know how to safely write to and read from the D/A and A/D converters using some TI-provided library functions. You should also note that if a call-by-value variable is modified in a function, only the copy in the stack context of that function is modified, and thus the changes are lost on returning from the function unless the value is returned. With a call by reference, the original value of the variable is modified directly, so changes persist on returning from the function.



3.4. Generating a Sine Wave output on the line-out on the DSK

In this section we will look at how to generate a sine wave output from the DSK using interrupts. In lab 2 you will see a function, called sinegen, which generates samples of a sine wave which we want to output to the DAC. Each time it is called it generates the next sample. Diagrammatically, we have:

[Figure: sinegen generates successive sample values of a sine wave]

Figure 1 Sinegen function generates sinewave values

In terms of the hardware, we want to achieve the following:


[Figure: sineGen -> McBSP -> 5-wire serial bus -> DAC -> Speaker, with the DAC clocked at Fsamp (8 kHz)]

Figure 2 Hardware view of sinewave generation

A sinegen function generates data and sends them to the DAC using the McBSP serial port device.

The aic23 codec (an external chip to the DSP core) that we are using as the DAC is connected to the McBSP serial port (which is built into the DSP core itself). The serial bus is actually a 5 wire bus, with the DAC providing the clock to obtain data from the McBSP (i.e. the DAC is the serial master).




Setting the frequency of the Sine Wave

The codec on this DSK operates at a sampling frequency of between 8 and 96 kHz. We will mainly be using 8 kHz and this is set when the codec is opened (i.e. when we get a handle to the codec) by setting the correct bits in the config structure. Assuming the sample frequency is set to 8 kHz, then when outputting samples via the D/A converter, the output only changes at a frequency of 8 kHz.

How are we going to ensure our software writes to the DAC at 8 kHz? Let's see.
If you write to the D/A converter using the function DSK6713_AIC23_write(), then if less than 125 μs has passed since the last output, the function returns 0 and no data is written to the DAC's buffer. If more than 125 μs has passed, the sample is put into the DAC's buffer and the function DSK6713_AIC23_write() returns 1.

In lab 2 you will create a sine wave using polling. The call to DSK6713_AIC23_write below simply spins in a while loop, polling the aic23 until the DAC is ready for writing. This causes a sample to be output at 8 kHz:
while (!DSK6713_AIC23_write(H_Codec, ((Int32)(sample * L_Gain))));

Essentially:
- The DAC is clocked at 8 kHz
- Every time the DAC is clocked, new data from its internal buffer is stored on its analogue output port, and it reads from the buffer of the McBSP device into its own input buffer. This frees the buffer on the McBSP serial port, meaning that it can again receive data
- Next time DSK6713_AIC23_write is called, it can successfully write to the buffer in the McBSP. If DSK6713_AIC23_write is called before the McBSP's buffer has been emptied, it refuses the data, causing the function to exit with a return value of 0.

But in the last lecture we talked about interrupt driven programs. We could have the DAC interrupt the processor whenever the DAC is ready to receive data.

Why use interrupt instead of polling in real time software?

When polling a device, a program can do no other useful tasks. An interrupt-driven program can do useful things until the hardware needs to be serviced.
As long as the sine wave generation part of the program takes less than 125 μs to run (it actually takes less than 1 μs), the McBSP serial port can notify the CPU that it is ready to receive a sample as its buffer becomes free, and so a sample can be provided.



Our hardware diagram will now look like this:


[Figure: sineGen -> McBSP -> 5-wire serial bus -> DAC -> Speaker; the McBSP raises a transmit interrupt back to the CPU; the DAC is clocked at Fsamp (8 kHz)]

Figure 3 Interrupt driven sine wave generation

- The DAC is clocked at 8 kHz
- Every time the DAC is clocked, new data from its internal buffer is stored on its analogue output port, and it then reads from the buffer of the McBSP device into its own input buffer. This frees the buffer on the McBSP serial port, meaning that it can again receive data
- At that point the McBSP causes a TX interrupt to tell the program that it has space in its buffer to receive data (i.e. a previous TX of data to the DAC has been successful). A call to DSK6713_AIC23_write is now guaranteed to be successful, so no time is spent spinning in a while loop.

In other words, each time the McBSP TX interrupt occurs, a new sample should be written to the DAC through the serial port, using the DSK6713_AIC23_write() function.

How do we write a function in CCS in the C language to be executed when an interrupt occurs?



3.5. Writing an ISR in CCS

We want to be able to call a C function when a particular interrupt occurs. How do we do this? There is a specific configuration tool in Code Composer to set up which interrupts call which C functions, called the DSP/BIOS configuration tool. You will be guided through using this in lab 3.
- Double-click on an interrupt number (for example HWI_INT4) and the dialogue box to configure the interrupt appears
- As we discussed previously, a particular interrupt source can be mapped to a CPU interrupt number. In this example the source MCSP_1_Transmit (the McBSP 1 TX interrupt) will be mapped to processor interrupt 4.
- We can set a function name for the C function that will be called if this interrupt is triggered. An important thing to note here: you should set the C function name in the configuration tool as the name you are going to use with an additional _ character prefixed. You will NOT use the underscore when writing the function prototype and definition in your C source file. The underscore in the configuration tool is a marker to tell CCS that there will be a C function to run for this particular ISR. This marker is not generic to all platforms but is TI-specific.
- Put a tick in the "use dispatcher" check box under the dispatcher tab. This means that CCS will auto-generate suitable code to deal with the context save and restore necessary when running an ISR for you!

Figure 4 CCS DSP bios configuration tool, interrupt tab

The interrupts are shown in priority order in the DSP bios configuration tool. HWI_RESET has the highest priority and HWI_INT15 the lowest.



Figure 5 Dispatcher configuration

What is the structure of our final interrupt driven program?

Include libraries and declare variables, including the config structure and handle for the codec:

Full structure for config of codec




Function to initialise the codec:
- Reset hardware
- Configure and open the codec
- Set the McBSP to work with 32-bit frames
- Generate an interrupt every frame

The resolution and sampling rate of the codec are configurable. Don't worry about data frame definitions: the codec is very configurable, and the skeleton code for making it talk to the serial port is provided for you. Just use it.

Function to initialise the hardware interrupts. It is good practice to disable interrupts before altering them, so that you do not get vectored into an ISR while you are making changes to the interrupt configuration. The main function follows.

We will then have our ISR function after main (which you will write in the lab), which will output the sinewave samples as interrupts occur.

4. Data Conversion
There are two critical parts to almost any digital signal processing system that we have yet to discuss: the data converters. Many DSP systems will need to interface with the analogue world on either or both of their input and output sides.

[Figure: unprocessed speech, sound, pictures, measured position, measured velocity etc -> A to D Converter -> DSP Processor (signal coding, filtering, control etc; the subject of this course) -> D to A Converter -> processed speech, sound, pictures, demand position, demand velocity etc]

There are two stages involved in turning a continuous analogue signal into a digital representation. They are:

- Sampling: recording the value of the signal at specific points in time
- Quantisation: having a finite number of digital values which can represent the analogue signal at those points in time



5. Sampling
If we take a continuous input signal (in this case a sine wave) and sample it, by multiplying it by a train of delta functions running at a frequency of fs, we have the following time and frequency domain representations:
[Figure: time domain (left) and frequency domain (right) of: a sine wave with components at ±fb; a delta-function train with components at 0, ±fs, ±2fs, ...; and the sampled signal, with copies of the baseband components around each multiple of fs]

Figure 6 Multiplication in time is convolution in frequency

The Fourier transform of the delta function train is itself a chain of delta functions spaced at Fsamp in the frequency domain. When we convolve the frequency domain representations of the input and sampling signal together, the spectrum of our sampled signal comprises copies of our original baseband spectrum around multiples of the sampling frequency. We can see this because the convolution integral between the two signals is:

(f ∗ g)(ω) = ∫ f(λ) g(ω − λ) dλ



where f(ω) is the frequency domain representation of the original signal being sampled and g(ω) is the frequency domain representation of the sampling signal. For the output of the integral to be non-zero, we require f(λ) and g(ω − λ) to both be non-zero. For f(λ), this occurs when:

λ = ±fb

and for g(ω − λ), this occurs when:

ω − λ = n·fs (for integer n)

Substituting one into the other, we find that non-zero components occur in the output of the convolution when:

ω = ±fb + n·fs
as shown in the lower right of Figure 6.

For a baseband signal with a more realistic spectrum, we might have something like this:
[Figure: frequency domain of a broadband baseband signal and of its sampled version, showing periodic components around each multiple of fs and the reconstruction filter pass band centred on the baseband]

Figure 7 Sampling a broadband signal

In this case, the original signal can be recovered by low-pass filtering the sampled signal with a cut-off frequency set to only allow the original baseband frequencies through (because the reconstructed signal then has exactly the same spectrum as the original). Now we are in a good position to understand Nyquist and Shannon.

5.1. Harry Nyquist and Claude Shannon

In order for there to be no loss of information when sampling a signal (i.e. so that the original signal can be perfectly reconstructed from the sampled signal), that signal must be sampled at twice the maximum frequency present in the signal being sampled. This is commonly referred to as Nyquist's theorem, and the minimum frequency at which data needs to be sampled for there to be no data loss is called the Nyquist frequency or Nyquist rate.

Figure 8 Sampling at different rates: time domain (left) and frequency domain (right)

If a signal is sampled at well over the Nyquist rate, then it is clear that information about the signal exists in enough detail to "join the dots" and reconstruct the signal. This is clear in the first two plots above. Sampling is like multiplying the incoming signal by a train of unit delta functions. In the frequency domain, this has the effect of producing images of the sampled spectrum around multiples of the sampling frequency. If the sampling frequency is too low, then the images of the spectrum of the baseband signal and the true baseband signal will overlap. We can see this effect in the time domain in the third and fourth plots. In the fourth plot, the baseband signal has become totally corrupted with an image signal around fs. This constitutes information loss. If you filter away frequency components higher than the original baseband signal, you still have corruption of the reconstructed baseband signal, as there is corruption below that frequency. Therefore we must sample above the Nyquist rate for there to be no information loss. This ensures that no signal copies in the sampled signal appear in the baseband.

The existence of images of the sampled signal in the baseband is known as aliasing and causes signal corruption.

5.1.1. Anti-Aliasing Filters

An important realisation here is that it is not only the case that sampling a signal at a certain frequency loses the information in that signal above half the sampling frequency; if we sample a signal containing frequencies higher than half the sampling frequency, those frequencies can totally corrupt our representation of the lower frequencies, because they get aliased into the baseband. We must therefore ensure that no frequency above half the sampling frequency is present in the signal being sampled. This requires the use of an anti-aliasing filter before sampling is performed.

[Figure: input -> anti-aliasing filter (cut-off at fs/2) -> A to D Converter -> DSP Processor]

Figure 9 Antialiasing filter is required before sampling



5.2. Do we really understand those frequency domain plots?

What are we looking at on a plot with positive and negative frequency? What is negative frequency, anyway?

- The spectral components on a frequency/amplitude plot are the coefficients of e^{jωt} vectors rotating at that frequency. Positive-frequency vectors rotate anticlockwise and negative-frequency vectors rotate clockwise
- In the real world, there are only real sine and cosine waves, not rotating vectors in a complex plane. The sum of the positive and negative rotating vectors at frequencies ω and −ω gives the cosine wave (with a certain phase) at that frequency.

A cos ωt = (A e^{jωt} + A e^{−jωt}) / 2

A sin ωt = (A e^{jωt} − A e^{−jωt}) / 2j

The sum of the two rotating vectors (at ω and −ω) is always real-valued, and is shown above for sine and cosine waves. So the actual component present in the signal at a given value of ω has an amplitude equal to the sum of the two vectors, and a phase equal to the angle between the positive-frequency component and the real axis at t = 0. Cosine is therefore deemed to have a phase angle of zero (the phase of A/2 is zero and the vectors start on the real axis) and sine has an angle of −π/2 (the phase of A/2j is −π/2 and the vectors start aligned with the imaginary axis). Other phases are of course represented by coefficients with both a real and an imaginary part.



6. Analogue to Digital Converters

Two of the main performance metrics for ADCs are:
- Resolution: how many bits we use to describe the signal when we convert it to a digital format
- Sampling rate: how many times per second we can generate digital representations of the input

Different applications place differing requirements on these metrics. Let's take an example.

6.1. Mobile phone

A mobile phone system is shown below. There are two D/A and two A/D converters: one pair does the digitisation and reconstruction of the voice signal, and the other pair the analogue construction and digitisation of the high-frequency RF signal. We can clearly see that:
- The audio converters require high resolution (so that voice is sampled accurately) but only need a sampling rate of around 8 kHz
- The RF-stage converters must be high frequency but can be relatively low resolution if the transmission is FM or some form of coded digital modulation (where the information is not carried in the amplitude of the modulated signal)

There are times when speed and accuracy can be traded for each other

6.2. D/A Converters

It may seem sensible in the context of a DSP system to look at A/D converters before D/A converters, because this is the order in which they come as information is processed. However, we will look at them the other way round; the reason why will be clear at the end of this section. A D/A converter must produce an analogue output voltage proportional to a digital input word. The task of designing a good D/A converter is primarily an exercise in analogue circuit design, involving the summation of weighted voltages or currents to generate an output representative of the digital input, and is beyond the scope of this course. However, to look at architectures for A/D converters, we need a basic understanding of D/A converters, as these tend to be present in many A/D converters. The most basic D/A converter implementation is a summing amplifier using a weighted resistor chain based around an opamp, or a similar circuit using an R/2R ladder. We will not look at these circuits in any detail here; you should be familiar with them already (from the 1st year lab).

Figure 10 Common D/A architectures

These converters are both simple but require accurately matched resistors. It is easier to achieve this match in the case of the R/2R ladder than the summing junction converter. With this basic understanding of D/A converters, we are free to look at A/D converters. Note that there are many more sophisticated D/A converters than the ones described above.

6.3. Types of A/D converters

There are four common types of A/D converter:
- Flash converters
- Sub-ranging converters
- Successive approximation converters
- Oversampling converters



6.3.1. Flash
The flash category of A/D converter is possibly the most obvious way of constructing such a device. An input voltage is concurrently compared to all possible quantisation levels via a string of comparators. Each comparator for which the input signal is higher than its reference has a high output. A thermometer decoder constructs a binary value from the comparator outputs. The thermometer decoder is so called because decoding the output of the comparator string is similar to reading a mercury thermometer.

Figure 11 Flash Converter

Advantages:
- Fast: all of the comparators operate in parallel, and the thermometer logic is of sum-of-products form, which also works in parallel
- Relatively simple to design


Disadvantages:
- Randomness of input-offset voltages on the comparators limits the resolution
- Large silicon area (high cost, high power): the number of comparators is proportional to 2^N
- The input capacitance of the converter increases with the number of bits and can be large

6.3.2. Sub-ranging
The idea behind the sub-ranging converter is to split the quantisation task up into two stages to reduce the silicon area and power consumption.

Figure 12 Sub-ranging Converter

How does it work?

- The input signal is coarsely quantised by the first M-bit converter. These bits form the M most significant bits.
- The converted number is then put through a D/A to find its analogue equivalent. This analogue signal will always be less than or equal to the value at the input that we are trying to digitise (think how the flash converter works).
- This signal is subtracted from the input signal and the difference is digitised by an (N−M)-bit converter. This forms the least significant bits.
- The full-scale range of the second converter should be 1 LSB of the first converter
- The total number of bits is N


Advantages:
- This converter takes less silicon area than the flash converter (less power, cheaper)
- Silicon area only scales as 2·2^(N/2) instead of 2^N

Disadvantages:
- Only half the speed of the flash converter
- Not really any better accuracy (although it can be for the same silicon area, as we can use better comparators, i.e. more silicon area per device)


6.3.3. Successive Approximation

The successive approximation converter (which you should have seen before) hunts for a suitable digital value by generating a digital value in a successive approximation register and comparing the analogue equivalent of this with the input signal. The algorithm for the conversion is (bit N is the MSB and bit 1 is the LSB):

for n = N down to 1:
    Set bit n to 1. Check to see if the equivalent analogue output is greater than the input signal.
    o If yes, set bit n back to zero
    o If no, keep the bit at one

The application of the algorithm can be seen in Figure 13. The bits are set in sequence from MSB to LSB, and each bit is kept at 1 if the output of the D/A is not greater than the input value being converted. In the example below, the first two bits are kept at 1, the next is set to zero, the next is a 1, then a zero, then a 1. Thus the final digital value of the input signal is 110101.

Figure 13 Successive approximation A/D converter

The successive approximation converter can be quite accurate and quite fast. It only relies on a 1-bit A/D converter block (a comparator) and has a fixed conversion time.

Note that for all these basic topologies, the digital value of the signal is always a rounded down version of the analogue signal (apart from the few instances in time where the analogue voltage lies exactly on a quantisation level)



6.4. Quantisation Noise

The addition of noise to a signal is inherent when the signal is quantised. Noise is not introduced by the fact that the signal is sampled in time (if we meet the Nyquist criterion): remember that if we take perfectly accurate samples above the Nyquist rate, then no information is lost and the signal can in theory be reconstructed with perfect fidelity. The added noise in A/D conversion comes from the fact that the analogue equivalent of the recorded digital sample does not exactly match the amplitude of the sampled analogue signal. This is because of the finite number of levels of the signal after quantising (because of the finite number of bits used to store the value of the digital data). The total number of quantisation steps of an N-bit converter is (2^N − 1), which we will approximate here as 2^N (this leads to a very small error for any reasonable number of bits). Let 1 LSB have an analogue value of Δ, i.e. each quantum has a value of Δ = A/2^N, where A is the full-scale analogue input value of the A/D converter.

As the analogue input to a converter changes from zero to the full-scale input, the voltage quantisation error between the analogue equivalent of the quantised signal and the actual analogue signal changes with the input signal. This quantisation process is shown in the left part of Figure 14. In the case of the standard comparator-thresholding ADC architectures that we have described so far, as the analogue input voltage changes, the digital representation is always rounded down. In other words, for the example of a 3-bit ADC with a full-scale input voltage of 7 V, for any input voltage between 0 and 1 V the digital representation is 0, and for input signals between 1 and 2 V the digital equivalent value is 1. When that digital signal is converted back into an analogue form, this makes the noise introduced by quantisation vary between 0 and −Δ. Thus, the average error voltage is −Δ/2. If, however, we were to add an analogue voltage corresponding to Δ/2 to the input before doing the A/D conversion, the 3-bit converter would behave as shown by the solid line in Figure 14 (left). Then the error voltage would vary between −Δ/2 and +Δ/2, and the average value of the error would be zero. This is clearly a good thing and is how most real ADCs are configured.

[Figure legend: infinite-resolution ADC (the 3 bits shown on the y-axis would be the 3 MSBs of an infinite-resolution ADC); 3-bit ADC response; 3-bit ADC shifted by 0.5 LSB]

Figure 14 Relationship between analogue and digital values (l) and Quantisation error of sampled signal (r)



We now want to work out the signal to quantisation-noise ratio, to get an idea of how important the quantisation noise effect is and, importantly, to see how it alters with the number of bits. We expect the signal to quantisation-noise ratio to increase with the number of bits, because as the number of bits increases each quantum has a smaller value and so the error must reduce. For a converter where the quantisation error varies between −Δ/2 and +Δ/2, we assume that the probabilities of the quantisation noise being of a particular value between −Δ/2 and +Δ/2 are equal. This means that any time a sample is quantised, there is an equal probability that the quantisation error takes any value between −Δ/2 and +Δ/2, giving the following probability density function:
pd(e) = 1/Δ for −Δ/2 ≤ e ≤ +Δ/2, and zero elsewhere (where e is the error voltage)



Remember that the area between two points under a PDF tells us the probability that something occurs between those points. Therefore the total area under the PDF must be 1.

We want to know the long-term average power of the quantisation noise. For a random signal which takes different values with different probabilities, the long-term average of that signal is calculated by integrating the product of the possible signal values and the probability density that the signal takes those values. This is called the expected value integral. Thus, the expected value of the error voltage is given in this case by the integral of the product of the error and the probability density that the error is of that value:

Expected value of error voltage = ∫ e · pd(e) de   (integrated from −Δ/2 to +Δ/2)

where e is the error voltage. The evaluation of this integral is zero, which makes sense: we expect that the average error voltage will be zero, as overestimates and underestimates will cancel each other in the long-term average. However, the expected error power into a 1 Ω resistor is given by the following. Note that we can't just square the average voltage (the average voltage of a sine wave is zero, but the power when it is applied to a resistor is non-zero):

Expected value of error power = ∫ e² · pd(e) de   (integrated from −Δ/2 to +Δ/2)

Integrating between −Δ/2 and +Δ/2 shows us that the normalised noise power is equal to:

Normalised expected noise power = Δ² / 12
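As a quick numerical check (Python sketch; Δ = 0.5 is an arbitrary illustrative choice), integrating e²·(1/Δ) over the error range with the midpoint rule reproduces Δ²/12:

```python
delta = 0.5                       # illustrative quantum size
steps = 100000
h = delta / steps                 # integration step width
# midpoint-rule integral of e**2 * pd(e) from -delta/2 to +delta/2
power = sum(((-delta/2 + (k + 0.5) * h) ** 2) * (1/delta) * h
            for k in range(steps))
# power ≈ delta**2 / 12
```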


For a sinusoidal signal with amplitude A_m, the normalised (1 Ω equivalent) signal power is:

P_s = A_m² / 2

and so the ratio of the signal to quantisation noise power is thus:

SNR = P_s / P_n = (A_m² / 2) / (Δ² / 12)

Assuming we are using the full range of the ADC, then:

2 A_m = 2^N Δ

And thus the SNR is given by

SNR = (3/2) · 2^(2N)

which can be written in dB as 10 log₁₀((3/2) · 2^(2N)):

SNR = (6.02N + 1.76) dB
That is the commonly quoted result for the performance of an ADC.
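This result can be sanity-checked by simulation (Python sketch; an ideal rounding quantiser stands in for the 0.5 LSB-shifted converter described above, and the test frequency and sample count are illustrative):

```python
import math

def sine_snr_db(nbits, samples=200000):
    """Measured SNR of a full-scale sine through an N-bit rounding quantiser."""
    delta = 2.0 / 2**nbits                          # quantum for a +/-1 full scale
    sig_power = err_power = 0.0
    for k in range(samples):
        x = math.sin(2 * math.pi * 0.1234567 * k)   # incommensurate frequency
        q = round(x / delta) * delta                # quantise (round to nearest)
        sig_power += x * x
        err_power += (x - q) ** 2
    return 10 * math.log10(sig_power / err_power)
```

For example, `sine_snr_db(10)` comes out within a fraction of a dB of 6.02×10 + 1.76 ≈ 61.96 dB.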

How much did that addition of the 0.5 LSB help us?
If we had used an ADC without the 0.5 LSB adjustment (i.e. the error moved between 0 and −Δ) then the noise power would have been:

Normalised expected noise power = Δ² / 3

And the SNR would have been:

SNR = (3/8) · 2^(2N)

which could be written as:

SNR = (6.02N − 4.26) dB

Thus the 0.5 LSB shift was exactly equivalent to adding 1 bit of resolution to the converter!
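A one-line check (Python) confirms the two dB expressions differ by 10 log₁₀(4) ≈ 6.02 dB, exactly the SNR gain of one extra bit:

```python
import math

# ratio of the two SNR expressions: (3/2)*2**(2N) versus (3/8)*2**(2N)
gain_db = 10 * math.log10((3/2) / (3/8))   # SNR improvement from the 0.5 LSB shift
one_bit_db = 10 * math.log10(2**2)         # SNR gain of one extra bit (6.02N term)
```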



6.5. ADC Sampling Time Jitter

We want to sample a signal at specific time intervals corresponding to a certain sample frequency. Whilst the average sampling rate may be accurate, the actual time between specific samples will not be constant because of random jitter in the clock signal.

Figure 15 Sampling time jitter

If there is jitter in the clock, the fact that the sample was taken at the wrong time might mean that the recorded sample is inaccurate (if the signal crosses a quantisation boundary in that time). If there is any (even infinitesimally small) jitter, then the digital binary output could be corrupted if the signal was very close to a quantisation boundary at the time a sample was supposed to be taken (because it will have passed it by the time the sample is actually taken). We must therefore allow some amount of error due to sampling time jitter. The specifications for A/D converters are usually set so that this jitter causes no more than 0.5 LSB of error in the output. Consider a sine wave:

A(t) = A₀ sin(ω_in t)

The maximum rate of change of the input at frequency ω_in is:

ω_in A₀ = 2π f_in A₀

And thus the maximum change of the input signal during a time t_jitter is:

2π f_in A₀ t_jitter

The amplitude of 0.5 LSB is:

A₀ / 2^N

E3-19 Real Time Digital Signal Processing / PDM v1.7


Thus we require:

2π f_in A₀ t_jitter < A₀ / 2^N

Thus:

t_jitter < 1 / (2^(N+1) π f_in)

Thus our clock signal must have less jitter than this value to be used as a clock for an A/D converter while maintaining 0.5 LSB accuracy.
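Plugging in numbers makes this bound concrete (Python sketch; a 16-bit converter sampling a 20 kHz tone is an illustrative choice, not a value from the text):

```python
import math

def max_jitter(nbits, f_in):
    # 0.5 LSB criterion: t_jitter < 1 / (2**(nbits+1) * pi * f_in)
    return 1.0 / (2**(nbits + 1) * math.pi * f_in)

# For a 16-bit converter and a 20 kHz input the jitter budget is only ~121 ps,
# which shows why ADC clock quality matters. Fewer bits relax the requirement.
```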



6.6. Signal Reconstruction

Remember that the sampled signal has the following type of behaviour in the frequency domain:

[Spectrum of the sampled signal: the baseband spectrum repeats at multiples of fs; frequency axis marked at −fs, −fs/2, fs/2, fs]

In order to restore the signal we must filter off the higher frequencies by passing the sampled train of delta functions (of area proportional to the value of the sampled signal) through an LPF. This is shown below:

[Diagram: sampled deltas being output → LPF → analogue output]


Ideally, we require a brick wall filter:

[Brick-wall filter response with cutoff at fs/2; frequency axis marked at −fs, −fs/2, fs/2, fs]

Figure 16 Signal reconstruction with an ideal filter



There are two problems with this implementation:
- The filter cannot be ideal.
- We cannot (easily) design analogue circuitry to output a train of delta functions at high frequency.

The compromise is the following:

[Diagram: delta functions of the sampled signal → zero-order hold → staircase analogue output]


Use a zero order hold circuit to generate the output from the deltas.

Why is zero order hold so called?

- The circuit immediately outputs the equivalent analogue value of the digital sample; no 1st or 2nd order interpolation is attempted.
- The sample is held until the next sample is output.
What does the ZOH look like in the frequency domain? A ZOH makes a rectangular pulse from each delta function, and the Fourier transform of a rectangular pulse is a sinc function. Thus, the filtering properties of a ZOH are those of a sinc function.

Figure 17 Signal reconstruction with a ZOH

The ZOH output clearly contains some higher frequencies that we do not wish to see at the output, so a typical system has a ZOH followed by a further low-pass reconstruction filter on the output, again with a cut-off at half the sampling rate. However, we can improve performance still further.
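The sinc roll-off can be evaluated directly (Python sketch; the response is normalised to 1 at DC, and fs = 48 kHz is an illustrative rate):

```python
import math

def zoh_mag(f, fs):
    """Magnitude response of a zero-order hold: |sin(pi f/fs) / (pi f/fs)|."""
    x = math.pi * f / fs
    return 1.0 if x == 0 else abs(math.sin(x) / x)

# Droop at the band edge (fs/2) is 2/pi ≈ 0.64 (about -3.9 dB),
# and the response has nulls at every multiple of fs.
```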

6.7. Over-sampling Converters

The converters we have looked at so far are referred to as Nyquist rate converters: they digitise an input at at least twice the maximum frequency present in the signal being digitised. What happens if we greatly increase the sampling frequency of a signal? Let's see in the frequency domain:

Figure 18 Oversampling makes the filter more effective

The oversampling of a signal makes the ZOH operation more accurate for our needs:
- Signal reconstruction is better!
- Any additional filter that follows the ZOH has less stringent requirements on the narrowness of its transition band.

Consequently, many A/D converters employ over-sampling in order to make signal reconstruction easier. The A/D converter on the DSK board you use is oversampled. When you specify a certain sampling rate, this means it behaves as a Nyquist rate converter operating at that rate (even though internally it operates much faster).

6.8. Quantisation Noise reduction in Oversampled A/D Converters

Another (probably unexpected) benefit of oversampling a signal is that the quantisation noise can be reduced. Previously we considered the fact that quantisation noise was dependent only on the number of bits and independent of the sampling rate (as long as we were sampling at at least the Nyquist rate). In actual fact, an N-bit converter can operate with lower quantisation noise if operated at a higher sampling rate and with some clever trickery. As an example, it is possible from only two digital levels to generate an analogue signal of any wave shape, as long as the magnitude of that wave lies within the bounds of the digital signals.

This is heavily used in power engineering and communications and is called pulse width modulation (PWM).

Figure 19 PWM (image taken from Wikipedia)
[Figure shows the input analogue signal, the saw-tooth carrier, and the resulting 1-bit ADC (PWM) representation]

If the PWM saw-tooth frequency is high enough, then a low-pass filtered version of the 1-bit ADC signal is an (almost) perfect representation of the input wave. It is clear that the low-pass filtered version of the PWM (digital) signal in this case has a much greater SNR than it would have had if we had used a 1-bit ADC working at the Nyquist rate. We are relying here on the fact that, with good time resolution (i.e. a high sampling rate), the average value of N digital levels can be made to look like M digital levels (M > N). Implementation of a converter using this concept requires the use of a dither signal (which performs the same function as our saw-tooth in the PWM example), although we won't talk about the circuit details here. A common implementation of an oversampled DAC is called a ΔΣ (Delta-Sigma) modulator. These designs can do even more sophisticated things, such as pushing the noise further out of the base-band (this is called noise shaping). The codec on the DSK board is a ΔΣ type.
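The PWM idea can be demonstrated numerically (Python sketch; the carrier ratio, signal amplitude and filter are illustrative choices, and a simple moving average stands in for the low-pass filter):

```python
import math

PERIODS = 100          # saw-tooth carrier periods per signal period
NPTS = 20000           # time steps across one signal period

# slowly varying signal in [0, 1]
sig = [0.5 + 0.4 * math.sin(2 * math.pi * k / NPTS) for k in range(NPTS)]
saw = [(k * PERIODS / NPTS) % 1.0 for k in range(NPTS)]       # carrier in [0, 1)
pwm = [1.0 if s > c else 0.0 for s, c in zip(sig, saw)]       # 1-bit comparator

# crude LPF: moving average over one carrier period
w = NPTS // PERIODS
recon = [sum(pwm[k:k + w]) / w for k in range(NPTS - w)]
worst = max(abs(r - s) for r, s in zip(recon, sig))
# worst-case reconstruction error is only a few percent of full scale,
# even though the PWM stream itself takes just two levels
```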



You can think of this process as moving the quantisation noise (which is still present) up to higher frequencies. In the case of the PWM 1-bit digital signal representation shown in Figure 19, the quantisation noise occurs at and above the PWM saw-tooth frequency. If the saw-tooth operates at a higher frequency than the frequencies present in the signal, the quantisation noise can be filtered out. The movement of quantisation noise from baseband to higher frequencies is called noise shaping.

7. Summary of this section

We looked at:
- A/D converter sampling requirements
- A/D and D/A architectures
- Quantisation noise
- Sample jitter
- Signal reconstruction
  o Oversampling
And:
- How to write interrupt driven programs in CCS
- The CCS environment looked after the context save and restore for us
- We can now write an interrupt driven program in Code Composer

8. In the next section...

Number formats
o Fixed point and floating point number representations
o The implications of using each type: accuracy effects on the stability of digital filters
