
Lecture 2 Analog to Digital Conversion: The analog to digital conversion is divided into two parts: I- Quantification, II- Coding.

Quantification: It consists in mapping the samples of a signal to a finite discrete set of amplitudes. Each amplitude interval at the input of the quantizer is mapped to one value, which is the middle of the interval. See the figure below.

[Figure: input signal amplitude versus time; the quantized value assigned to each interval (between two dashed lines) is the middle of that interval.]

[Figure: example of a quantizer with 3 levels (-q, 0, q); the input range is divided into a first, second and third interval, each mapped to one of the three levels.]

Another representation of the quantization (the uniform quantization characteristic) is given in the following figure.

[Figure: uniform quantization staircase; input value on the horizontal axis, output levels -3q, -2q, -q, 0, q, 2q, 3q on the vertical axis.]

The quantization process introduces a noise which is the difference between the real amplitude value and the quantized value. The quantization noise power is q^2/12. The quantization error has the following distribution.

[Figure: quantization error e(x) as a function of the input x, taking values between -q/2 and q/2.]

So the errors are uniformly distributed between -q/2 and +q/2 with e(x) = x, and the probability density of the error is p(x) = 1/q. The noise power is the variance of the quantization error:

$$E(x^2) = \int_{-q/2}^{q/2} e^2(x)\, p(x)\, dx = \frac{1}{q} \int_{-q/2}^{q/2} x^2\, dx = \frac{1}{q} \left[ \frac{x^3}{3} \right]_{-q/2}^{q/2} = \frac{1}{q}\left( \frac{q^3}{24} + \frac{q^3}{24} \right) = \frac{q^2}{12}$$
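The q^2/12 result can also be checked numerically. The following is a minimal Python sketch (not part of the lecture; the step q and the test signal are arbitrary choices) that quantizes random samples and measures the error power:

```python
# Minimal sketch: measure the quantization noise power of a uniform
# quantizer and compare it with the theoretical value q**2 / 12.
import numpy as np

q = 0.1                                      # quantization step (arbitrary choice)
x = np.random.uniform(-1.5, 1.5, 100_000)    # test samples spanning many steps
xq = q * np.round(x / q)                     # uniform (mid-tread) quantizer
error = x - xq                               # quantization error, lies in [-q/2, q/2]

print("measured noise power:", np.mean(error ** 2))
print("theoretical q^2/12  :", q ** 2 / 12)
```

The measured value should be very close to 0.000833 = q^2/12 for q = 0.1.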

The input signal full-scale voltage (peak to peak) is L·q, where L is the number of levels. For the previous example L = 3. For a full-scale sinusoidal input, the input signal power is

$$P_s = \frac{V_{max}^2}{2} = \frac{(L q / 2)^2}{2} = \frac{L^2 q^2}{8}$$

So the signal to noise ratio is given by:

$$SNR = 10\log\left(\frac{P_s}{P_{noise}}\right) = 10\log\left(\frac{V_{max}^2/2}{q^2/12}\right) = 10\log\left(\frac{3}{2} L^2\right) = 1.76 + 20\log(L)\ \text{dB}$$

So it is obvious that by increasing the number of quantification levels we reduce the noise power and increase the signal to noise ratio.

II Coding: The coding operation consists in attributing a digital code to each of the quantized levels. For representing L levels we need n = log2(L) bits, or L = 2^n. For example, we need 3 bits to generate 8 different codes for 8 different quantization levels: 3 bits = log2(8 levels), or 2^3 = 8. The signal to noise ratio can be expressed in terms of the number of bits of the analog to digital converter by simply replacing L in the previous equation by 2^n.
$$SNR = 1.76 + 20\log(L) = 1.76 + 20\log(2^n) = 1.76 + 6.02\, n\ \text{dB}$$

Remark:
- Each additional bit used for coding gives a 6 dB increase in the SNR.
- Each additional bit means that the number of quantification levels is multiplied by a factor of 2.
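As a quick check of this relation, here is a minimal Python sketch (an illustration, not from the lecture; the range of word lengths is an arbitrary choice) that evaluates the SNR formula:

```python
# Minimal sketch: full-scale SNR of an ideal uniform quantizer versus the
# number of bits n, using SNR = 1.76 + 6.02*n (equivalently 1.76 + 20*log10(L)).
import math

for n in range(6, 13):                    # word lengths chosen for illustration
    L = 2 ** n                            # number of quantization levels
    snr = 1.76 + 20 * math.log10(L)       # same value as 1.76 + 6.02 * n
    print(f"n = {n:2d} bits, L = {L:5d} levels, SNR = {snr:6.2f} dB")
```

Each step of the loop adds one bit and the printed SNR increases by about 6 dB, as stated in the remark.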

Example: Human voice occupies the frequency band 80 Hz - 3400 Hz. The voice is digitized using an n-bit ADC with a sampling frequency of 8000 samples/s. We require an SNR of 40 dB for good reproduction of the voice. Calculate n and the bit rate of the resulting digital signal.

Solution:

SNR >= 40 dB gives n = 7 bits, because 6 bits would give an SNR < 40 dB (1.76 + 6.02 x 6 = 37.9 dB). Bit rate = 8000 samples/s x 7 bits/sample = 56 kbps.
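The same computation can be written as a minimal Python sketch (an illustration, not from the lecture; the variable names are arbitrary):

```python
# Minimal sketch of the voice example: smallest n giving SNR >= 40 dB,
# and the resulting bit rate for fs = 8000 samples/s.
import math

target_snr = 40.0                            # required SNR in dB
fs = 8000                                    # sampling frequency, samples/s

n = math.ceil((target_snr - 1.76) / 6.02)    # smallest n with 1.76 + 6.02*n >= 40
bit_rate = fs * n                            # bits per second

print("n        =", n, "bits")               # -> 7
print("bit rate =", bit_rate, "bps")         # -> 56000 bps = 56 kbps
```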
Remark: the SNR calculated above corresponds to a full-scale input signal. What will be the SNR in the case of a signal with a smaller amplitude?

$$SNR = 10\log\left(\frac{P_s}{P_{noise}}\right) = 10\log\left(\frac{a^2/2}{q^2/12}\right) = 10\log\left(\frac{6\, a^2}{q^2}\right)$$

where a is the amplitude of the input signal and a_max = L·q/2 is the maximum amplitude of the input signal. With L = 2^n, so that q = 2 a_max / 2^n, this becomes

$$SNR = 10\log\left(\frac{3}{2}\, 2^{2n}\, \frac{a^2}{a_{max}^2}\right) = 20\log\left(\frac{a}{a_{max}}\right) + 6.02\, n + 1.76\ \text{dB}$$

So the SNR decreases if a < a_max.


For example, for a = a_max/2 (i.e. a signal power 6 dB below full scale), SNR = SNR_full-scale - 6 dB. So the SNR decreases dramatically for small signals, which is not good.

For a signal of amplitude a = 4q, for example, and n = 8 bits:

$$a_{max} = \frac{L\, q}{2} = \frac{2^n q}{2} = \frac{2^8 q}{2} = 128\, q, \qquad \frac{a}{a_{max}} = \frac{4 q}{128\, q} = 2^{-5}$$

$$20\log\left(\frac{a}{a_{max}}\right) = 20\log(2^{-5}) = -100\log(2) \approx -30\ \text{dB}$$

so the SNR decreases by about 30 dB.
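A minimal Python sketch reproducing this calculation (illustration only; the step q is set to 1 since the ratio a/a_max does not depend on it):

```python
# Minimal sketch: SNR loss for a signal of amplitude a = 4q quantized
# with an n = 8 bit uniform quantizer (a_max = 2**n * q / 2 = 128 q).
import math

n = 8
q = 1.0                                   # step size; the ratio a/a_max does not depend on q
a_max = 2 ** n * q / 2                    # 128 q
a = 4 * q

loss = 20 * math.log10(a / a_max)         # 20*log10(2**-5) ~= -30.1 dB
snr = 1.76 + 6.02 * n + loss              # SNR for this small signal

print("a/a_max  =", a / a_max)            # 0.03125 = 2**-5
print("SNR loss = %.1f dB" % loss)        # about -30 dB
print("SNR      = %.1f dB" % snr)         # about 19.8 dB instead of ~50 dB at full scale
```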

The problem is that the human ear is sensitive to small signals, and small signals have a higher probability of occurrence, so they should be quantized with a low quantization error. The difficulty comes from the fact that the error lies in the interval [-q/2, q/2] whatever the amplitude of the signal, so for smaller input values the error becomes relatively large compared to the signal.

Solution: In order to improve the SNR for small signals we should make the error small when the signal is small. This can be achieved by modifying the signal (compressing it) with a function which compresses the signal when the amplitude is high and stretches it when the amplitude is small. The best known compression law is the µ-law. It has the following expression:

$$y = \frac{\ln(1 + \mu |x|)}{\ln(1 + \mu)}$$

where x is the normalised input value (maximum = 1).

[Figure: µ-law compression characteristic, y versus x/x_max, both between 0 and 1.]
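A minimal Python sketch of this compression and of its inverse (the expanding step discussed below); this illustrates the standard µ-law formula with µ = 255 and is not code from the lecture:

```python
# Minimal sketch of mu-law compression and expansion (companding),
# using y = sign(x) * ln(1 + mu*|x|) / ln(1 + mu) for normalised |x| <= 1.
import numpy as np

MU = 255.0   # value used by the American and Japanese standards

def mu_compress(x):
    """Compress a normalised signal x in [-1, 1]."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    """Inverse of mu_compress (the expanding step used at the decoder)."""
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

x = np.linspace(-1.0, 1.0, 9)
y = mu_compress(x)
print(np.round(y, 3))                    # small |x| values are stretched toward +/-1
print(np.allclose(mu_expand(y), x))      # True: expansion undoes the compression
```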

The compressed signal is quantized with a uniform quantizer, as shown in the following figure.

[Figure: µ-law compression followed by uniform quantization of y (levels 0, q, 2q, 3q, ...); projected back onto the input axis x/x_max, the quantization of x is non-uniform, with smaller intervals ΔX for small amplitudes than the uniform quantization obtained without compression.]

As we can see, the quantization intervals on the x axis are not all the same: smaller amplitudes are quantized with smaller intervals and therefore smaller errors. The maximum error amplitude for each interval is ΔX/2, where ΔX is the width of that interval (before, the error was between -q/2 and q/2 and every interval had width q).

Remarks:
- The compressed signal is quantized with the same quantizer (uniform on the y axis).
- The American and Japanese standards use this compression law with µ = 255 (high compression).
- For decoding we need to use the inverse of the compression function, otherwise we would obtain a different signal; this operation is called expanding. The whole operation of compression, quantization and expansion is called companding.

The full-scale SNR of such a quantizer is given by the following expression:

$$SNR_{dB} = 10\log\left(C \cdot 2^{2n}\right), \qquad \text{where } C = \frac{3}{[\ln(1+\mu)]^2}$$

$$SNR_{dB} = 10\log(C) + 6.02\, n \approx 6.02\, n - 10.1\ \text{dB} \quad \text{for } \mu = 255$$

The following figure shows a comparison of the uniform and non-uniform quantization for different input amplitudes.

The SNR of the non-uniform quantizer is lower than that of the uniform quantizer, but its advantage is that the SNR does not decrease for small amplitudes as it does in the case of the uniform quantizer.
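To make the comparison concrete, here is a minimal Python sketch (an illustration under the µ = 255 assumption, not part of the lecture) that evaluates both full-scale SNR formulas:

```python
# Minimal sketch: full-scale SNR of the mu-law (companded) quantizer,
# SNR_dB = 10*log10(C * 2**(2*n)) with C = 3 / (ln(1 + mu))**2, mu = 255,
# next to the uniform-quantizer formula 1.76 + 6.02*n for a full-scale signal.
import math

mu = 255.0
C = 3.0 / math.log(1.0 + mu) ** 2

for n in (8, 12, 16):
    snr_mu = 10 * math.log10(C * 2 ** (2 * n))   # about 6.02*n - 10.1 dB
    snr_uniform = 1.76 + 6.02 * n                # uniform quantizer, full scale
    print(f"n = {n:2d}: mu-law SNR = {snr_mu:6.2f} dB, uniform SNR = {snr_uniform:6.2f} dB")
```

The printed full-scale figures are lower for the µ-law quantizer, but, as stated above, its SNR does not drop for small amplitudes the way the uniform quantizer's does.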
