NEC 602 / EEC — Digital Communication

Q. No. 1

(a) The PSD of a digital signal can be controlled by the choice of the transmitted pulse shape: for a line code built from a basic pulse p(t), the PSD is proportional to |P(f)|², the squared magnitude spectrum of the pulse, so shaping p(t) shapes the spectrum.

(e) Properties of the CDF F_x(x):
(i) 0 ≤ F_x(x) ≤ 1
(ii) F_x(−∞) = 0 and F_x(∞) = 1
(iii) F_x(x) is non-decreasing: F_x(x₁) ≤ F_x(x₂) for x₁ ≤ x₂.

Bit error rate: the BER is the average number of bit errors per unit time, counted over the number of transmitted bits. The bit error probability p_e is the expectation of the BER, so a measured BER is an approximate estimate of p_e.

Matched filter: the filter whose output signal-to-noise ratio is maximum at the sampling instant; its impulse response is a time-reversed and delayed version of the input pulse. This result holds when the input noise is stationary and white.

Minimum bit error rate: with optimum threshold decision making, minimizing the bit error rate is equivalent to maximizing the SNR at the decision instant.

Spread spectrum vs. conventional modulation: in spread-spectrum modulation the transmitted bandwidth greatly exceeds the message bandwidth, whereas conventional modulation systems aim to minimize the bandwidth and achieve good spectral efficiency.

Manchester (split-phase) signaling: in this mode of signaling, symbol 1 is represented by a positive pulse followed by a negative pulse, each of amplitude A and half a symbol wide. For symbol 0, the order of the two pulses is reversed.

Desirable properties of a line code:
(i) Minimum bandwidth: the required bandwidth is set by the highest fundamental frequency of the waveform and should be as small as possible.
(ii) Favourable power spectral density: the signal spectrum should be matched to the channel frequency response; a zero dc component is preferable.
(iii) Timing (clock) recovery.

QPSK: There are two orthonormal basis functions, φ₁(t) and φ₂(t), contained in the expansion of s_i(t). Specifically, φ₁(t) and φ₂(t) are defined by a pair of quadrature carriers:

φ₁(t) = √(2/T) cos(2πf_c t),  0 ≤ t ≤ T   (6.25)
φ₂(t) = √(2/T) sin(2πf_c t),  0 ≤ t ≤ T   (6.26)

[Figure 6.8(a): QPSK transmitter. The input binary sequence passes through a polar NRZ-level encoder and a demultiplexer; the two resulting streams modulate φ₁(t) and φ₂(t) in a pair of product modulators, and the outputs are summed.]
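As a quick numerical sanity check (not part of the original solution), the orthonormality of φ₁ and φ₂ over one symbol interval can be verified; the values of T and f_c below are illustrative, with f_c an integer multiple of 1/T:

```python
import numpy as np

# Numerical check that phi1(t) = sqrt(2/T) cos(2*pi*fc*t) and
# phi2(t) = sqrt(2/T) sin(2*pi*fc*t) are orthonormal on [0, T],
# assuming fc is an integer multiple of 1/T (illustrative values).
T = 1.0
fc = 4.0                      # 4 carrier cycles per symbol
t = np.linspace(0.0, T, 100_000, endpoint=False)
dt = t[1] - t[0]

phi1 = np.sqrt(2 / T) * np.cos(2 * np.pi * fc * t)
phi2 = np.sqrt(2 / T) * np.sin(2 * np.pi * fc * t)

e1 = np.sum(phi1 * phi1) * dt   # should be ~1 (unit energy)
e2 = np.sum(phi2 * phi2) * dt   # should be ~1
x12 = np.sum(phi1 * phi2) * dt  # should be ~0 (orthogonality)
print(e1, e2, x12)
```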
[Figure 6.8: Block diagrams of (a) QPSK transmitter and (b) coherent QPSK receiver. In the receiver, the received signal x(t) drives in-phase and quadrature correlators supplied with φ₁(t) and φ₂(t); each correlator output is compared with a threshold of zero, and the two decisions are multiplexed to form the estimate of the transmitted binary sequence.]

The in-phase and quadrature channels thus carry a pair of binary PSK signals, which may be detected independently due to the orthogonality of φ₁(t) and φ₂(t). Finally, the two binary PSK signals are added to produce the desired QPSK signal.

The QPSK receiver consists of a pair of correlators with a common input, supplied with a locally generated pair of coherent reference signals φ₁(t) and φ₂(t), as in Figure 6.8b. The correlator outputs x₁ and x₂, produced in response to the received signal x(t), are each compared with a threshold of zero. If x₁ > 0, a decision is made in favor of symbol 1 for the in-phase channel output, but if x₁ < 0, a decision is made in favor of symbol 0. Similarly, if x₂ > 0, a decision is made in favor of symbol 1 for the quadrature channel output, but if x₂ < 0, a decision is made in favor of symbol 0. Finally, these two binary sequences at the in-phase and quadrature channel outputs are combined in a multiplexer to reproduce the original binary sequence.

Q. No. 2 (c)

Because (a) the Gaussian distribution is a good approximation to most unimodal distributions with finite variance, of which there are many; and (b) the central limit theorem says that as the number of samples from a finite-variance distribution increases, their mean is increasingly well approximated by a normal distribution. The normal distribution is one of the most commonly used probability distributions in applications.

1. When we repeat an experiment numerous times and average our results, the random variable representing the average or mean tends to have a normal distribution as the number of experiments becomes large.
2. The previous fact, which is known as the central limit theorem, is fundamental to many of the statistical techniques we will discuss later.
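A quick simulation (illustrative, not part of the original solution) makes the central limit theorem concrete: uniform samples are far from Gaussian individually, yet their averages cluster tightly around the true mean with the predicted spread:

```python
import random
import statistics

# Central limit theorem sketch: average n uniform(0,1) samples many times;
# the sample means concentrate near 0.5 with std ~ sqrt(1/12)/sqrt(n).
random.seed(0)
n, trials = 50, 20_000
means = [statistics.fmean(random.random() for _ in range(n)) for _ in range(trials)]

mu = statistics.fmean(means)
sigma = statistics.stdev(means)
expected_sigma = (1 / 12) ** 0.5 / n ** 0.5   # std of one sample / sqrt(n)
print(mu, sigma, expected_sigma)
```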
In a random experiment, a trial consists of four successive tosses of a coin. If we define an RV x as the number of heads appearing in a trial, determine P_x(x) and F_x(x).

A total of 16 distinct equiprobable outcomes are listed in Example 10.4. Various probabilities can be readily determined by counting the outcomes pertaining to a given value of x. For example, only one outcome maps into x = 0, whereas six outcomes map into x = 2. Hence, P_x(0) = 1/16 and P_x(2) = 6/16. In the same way, we find

P_x(0) = P_x(4) = 1/16
P_x(1) = P_x(3) = 4/16 = 1/4
P_x(2) = 6/16 = 3/8

The probabilities P_x(x_i) and the corresponding CDF F_x(x_i) are shown in Fig. 10.7.

Q. No. 2 (d)

Let the received pulse p(t) be time limited to T_o (Fig. 13.1). Note that here we use a general symbol T_o rather than T_b because the pulse may not be binary. We shall keep the discussion as general as possible at this point. There is a possibility of increasing ρ = A_p/σ_n by passing the received pulse through a filter that enhances the pulse amplitude at some instant t_m and simultaneously reduces the noise power σ_n² (Fig. 13.1). We thus seek a filter with a transfer function H(ω) that maximizes

ρ = p_o(t_m)/σ_n

Because

p_o(t) = F⁻¹[P(ω)H(ω)] = (1/2π) ∫ P(ω)H(ω) e^{jωt} dω

we have

p_o(t_m) = (1/2π) ∫_{−∞}^{∞} P(ω)H(ω) e^{jωt_m} dω   (13.3)

Also,

σ_n² = n²(t)‾ = (1/2π) ∫_{−∞}^{∞} S_n(ω)|H(ω)|² dω   (13.4)

Hence,

ρ² = [(1/2π) ∫_{−∞}^{∞} P(ω)H(ω)e^{jωt_m} dω]² / [(1/2π) ∫_{−∞}^{∞} S_n(ω)|H(ω)|² dω]   (13.5)

In the Schwarz inequality (Appendix B), if we identify X(ω) = H(ω)√S_n(ω) and Y(ω) = P(ω)e^{jωt_m}/√S_n(ω), then it follows from Eq. (13.5) that

ρ²_max = (1/2π) ∫_{−∞}^{∞} |P(ω)|²/S_n(ω) dω   (13.6a)

with equality only if X(ω) = kY*(ω); that is, since P*(ω) = P(−ω) for a real pulse p(t),

H(ω) = k P(−ω)e^{−jωt_m}/S_n(ω)   (13.6b)

where k is an arbitrary constant. For white channel noise S_n(ω) = N/2, Eqs. (13.6) become

ρ²_max = (1/πN) ∫_{−∞}^{∞} |P(ω)|² dω = 2E_p/N   (13.7a)

where E_p is the energy of p(t), and

H(ω) = k′ P(−ω)e^{−jωt_m}   (13.7b)

where k′ = 2k/N is an arbitrary constant.
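A small discrete-time simulation (illustrative values, not part of the original derivation) confirms the white-noise result ρ²_max = 2E_p/N: the matched filter attains this bound at the best sampling instant, while any other filter falls short.

```python
import numpy as np

# Discrete-time sketch of the matched-filter result (illustrative values).
# For white noise of PSD N/2 per sample, the peak SNR of the matched filter
# h[n] = p[L-1-n] equals 2*Ep/N; by the Schwarz inequality, no filter does better.
rng = np.random.default_rng(1)
p = rng.standard_normal(64)          # an arbitrary received pulse shape
N = 0.5                              # white-noise PSD parameter (N/2 per sample)
Ep = np.sum(p ** 2)                  # pulse energy

def peak_snr_sq(h):
    signal_peak = np.max(np.convolve(p, h))   # best sampling instant
    noise_var = (N / 2) * np.sum(h ** 2)      # output noise power
    return signal_peak ** 2 / noise_var

rho2_matched = peak_snr_sq(p[::-1])           # matched (time-reversed) filter
rho2_random = peak_snr_sq(rng.standard_normal(64))
print(rho2_matched, 2 * Ep / N, rho2_random)
```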
The unit impulse response h(t) of the optimum filter is given by

h(t) = F⁻¹[k′ P(−ω) e^{−jωt_m}]

[Figure 13.1: Scheme to minimize the error probability in threshold detection. (a) The received pulse p(t) + n(t). (b) The pulse passed through H(ω) to give p_o(t) + n_o(t), sampled at t_m.]

Note that p(−t) ⟺ P(−ω) and e^{−jωt_m} represents a time delay of t_m seconds. Hence,

h(t) = k′ p(t_m − t)   (13.7c)

The signal p(t_m − t) is the signal p(t) time-reversed and delayed by t_m. Three cases, t_m < T_o, t_m = T_o, and t_m > T_o, are shown in Fig. 13.2. The first case, t_m < T_o, yields a noncausal impulse response, which is unrealizable. Although the other two cases yield physically realizable filters, the last case, t_m > T_o, delays the decision-making instant t_m an unnecessary length of time. The case t_m = T_o gives the minimum delay for decision making using a realizable filter. In our future discussion, we shall assume t_m = T_o unless otherwise mentioned.

Observe that both p(t) and h(t) have a width of T_o seconds. Hence p_o(t), which is a convolution of p(t) and h(t), has a width of 2T_o seconds, with its peak occurring at t = T_o. Also, because P_o(ω) = P(ω)H(ω) = |P(ω)|²e^{−jωT_o}, p_o(t) is symmetrical about t = T_o (Fig. 13.1).

The arbitrary constant k′ in Eq. (13.7) multiplies both the signal and the noise by the same factor and does not affect the ratio ρ. Hence, the error probability, or the system performance, is independent of the value of k′. For convenience, we choose k′ = 1. This gives

h(t) = p(T_o − t)   (13.8a)

and

H(ω) = P(−ω)e^{−jωT_o}   (13.8b)

The optimum filter in Eqs. (13.8) is known as the matched filter. At the output of this filter, the signal-to-rms-noise amplitude ratio is maximum at the decision-making instant t = T_o.

METHOD II

Consider a linear time-invariant filter of impulse response h(t). The filter input x(t) consists of a pulse signal g(t) corrupted by additive channel noise w(t), as shown by

x(t) = g(t) + w(t),  0 ≤ t ≤ T   (4.1)

where T is an arbitrary observation interval. The pulse signal g(t) may represent a binary symbol 1 or 0 in a digital communication system.
The w(t) is the sample function of a white noise process of zero mean and power spectral density N₀/2. It is assumed that the receiver has knowledge of the waveform of the pulse signal g(t). The source of uncertainty lies in the noise w(t). The function of the receiver is to detect the pulse signal g(t) in an optimum manner, given the received signal x(t). To satisfy this requirement, we have to optimize the design of the filter so as to minimize the effects of noise at the filter output in some statistical sense, and thereby enhance the detection of the pulse signal g(t).

Since the filter is linear, the resulting output y(t) may be expressed as

y(t) = g_o(t) + n(t)   (4.2)

where g_o(t) and n(t) are produced by the signal and noise components of the input x(t), respectively. A simple way of describing the requirement that the output signal component g_o(t) be considerably greater than the output noise component n(t) is to have the filter make the instantaneous power in the output signal g_o(t), measured at time t = T, as large as possible compared with the average power of the output noise n(t). This is equivalent to maximizing the peak pulse signal-to-noise ratio, defined as

η = |g_o(T)|² / E[n²(t)]   (4.3)

[Figure 4.1: Linear receiver. The signal g(t) plus white noise w(t) form x(t), which drives a linear time-invariant filter of impulse response h(t); the output y(t) is sampled at time t = T.]

where |g_o(T)|² is the instantaneous power in the output signal, E is the statistical expectation operator, and E[n²(t)] is a measure of the average output noise power. The requirement is to specify the impulse response h(t) of the filter such that the output signal-to-noise ratio in Equation (4.3) is maximized.

Let G(f) denote the Fourier transform of the known signal g(t), and H(f) denote the frequency response of the filter.
Then the Fourier transform of the output signal g_o(t) is equal to H(f)G(f), and g_o(t) is itself given by the inverse Fourier transform

g_o(t) = ∫_{−∞}^{∞} H(f)G(f) exp(j2πft) df   (4.4)

Hence, when the filter output is sampled at time t = T, we have (in the absence of channel noise)

|g_o(T)|² = |∫_{−∞}^{∞} H(f)G(f) exp(j2πfT) df|²   (4.5)

Consider next the effect on the filter output due to the noise w(t) acting alone. The power spectral density S_N(f) of the output noise n(t) is equal to the power spectral density of the input noise w(t) times the squared magnitude response |H(f)|² (see Section 1.7). Since w(t) is white with constant power spectral density N₀/2, it follows that

S_N(f) = (N₀/2)|H(f)|²   (4.6)

The average power of the output noise n(t) is therefore

E[n²(t)] = ∫_{−∞}^{∞} S_N(f) df = (N₀/2) ∫_{−∞}^{∞} |H(f)|² df   (4.7)

Thus, substituting Equations (4.5) and (4.7) into (4.3), we may rewrite the expression for the peak pulse signal-to-noise ratio as

η = |∫_{−∞}^{∞} H(f)G(f) exp(j2πfT) df|² / [(N₀/2) ∫_{−∞}^{∞} |H(f)|² df]   (4.8)

Our problem is to find, for a given G(f), the particular form of the frequency response H(f) of the filter that makes η a maximum. To find the solution to this optimization problem, we apply a mathematical result known as Schwarz's inequality to the numerator of Equation (4.8). A derivation of Schwarz's inequality is given in Chapter 5. For now it suffices to say that if we have two complex functions φ₁(x) and φ₂(x) in the real variable x, satisfying the conditions

∫_{−∞}^{∞} |φ₁(x)|² dx < ∞   and   ∫_{−∞}^{∞} |φ₂(x)|² dx < ∞

then we may write

|∫_{−∞}^{∞} φ₁(x)φ₂(x) dx|² ≤ ∫_{−∞}^{∞} |φ₁(x)|² dx ∫_{−∞}^{∞} |φ₂(x)|² dx   (4.9)

The equality in (4.9) holds if, and only if, we have

φ₁(x) = kφ₂*(x)   (4.10)

where k is an arbitrary constant, and the asterisk denotes complex conjugation. Returning to the problem at hand, we readily see that by invoking Schwarz's inequality (4.9), and setting φ₁(x) = H(f) and φ₂(x) = G(f) exp(j2πfT), the numerator in Equation (4.8) may be rewritten as

|∫_{−∞}^{∞} H(f)G(f) exp(j2πfT) df|² ≤ ∫_{−∞}^{∞} |H(f)|² df ∫_{−∞}^{∞} |G(f)|² df   (4.11)

Using this relation in Equation (4.8), we may redefine the peak pulse signal-to-noise ratio as

η ≤ (2/N₀) ∫_{−∞}^{∞} |G(f)|² df   (4.12)

The right-hand side of this relation does not depend on the frequency response H(f) of the filter but only on the signal energy and the noise power spectral density. Consequently, the peak pulse signal-to-noise ratio η will be a maximum when H(f) is chosen so that the equality holds; that is,

η_max = (2/N₀) ∫_{−∞}^{∞} |G(f)|² df   (4.13)

Correspondingly, H(f) assumes its optimum value denoted by H_opt(f). To find this optimum value we use Equation (4.10), which, for the situation at hand, yields

H_opt(f) = kG*(f) exp(−j2πfT)   (4.14)

where G*(f) is the complex conjugate of the Fourier transform of the input signal g(t), and k is a scaling factor of appropriate dimensions. This relation states that, except for the factor k exp(−j2πfT), the frequency response of the optimum filter is the same as the complex conjugate of the Fourier transform of the input signal.

Equation (4.14) specifies the optimum filter in the frequency domain. To characterize it in the time domain, we take the inverse Fourier transform of H_opt(f) in Equation (4.14) to obtain the impulse response of the optimum filter as

h_opt(t) = k ∫_{−∞}^{∞} G*(f) exp(−j2πf(T − t)) df   (4.15)

Since for a real signal g(t) we have G*(f) = G(−f), we may rewrite Equation (4.15) as

h_opt(t) = k ∫_{−∞}^{∞} G(−f) exp(−j2πf(T − t)) df
         = k ∫_{−∞}^{∞} G(f) exp(j2πf(T − t)) df   (4.16)
         = kg(T − t)

Equation (4.16) shows that the impulse response of the optimum filter, except for a scaling factor k, is a time-reversed and delayed version of the input signal g(t).

Q. No. 2 (e)

The transmitter of Figure 7.7a first converts the incoming binary data sequence {b_k} into a polar NRZ waveform b(t), which is followed by two stages of modulation.
The first stage consists of a product modulator or multiplier with the data signal b(t) (representing a data sequence) and the PN signal c(t) (representing the PN sequence) as inputs. The second stage consists of a binary PSK modulator. The transmitted signal x(t) is thus a direct-sequence spread binary phase-shift-keyed (DS/BPSK) signal. The phase modulation θ(t) of x(t) has one of two values, 0 and π, depending on the polarities of the message signal b(t) and PN signal c(t) at time t, in accordance with the truth table of Table 7.3.

Figure 7.8 illustrates the waveforms for the second stage of modulation. Part of the modulated waveform shown in Figure 7.6c is reproduced in Figure 7.8a; the waveform shown here corresponds to one period of the PN sequence. Figure 7.8b shows the waveform of a sinusoidal carrier, and Figure 7.8c shows the DS/BPSK waveform that results from the second stage of modulation.

[Figure 7.7: Direct-sequence spread coherent phase-shift keying. (a) Transmitter: binary data sequence {b_k} → polar NRZ-level encoder → product modulator with PN code generator output → binary PSK modulator with carrier → x(t). (b) Receiver: y(t) → coherent detector with local carrier and local PN code → decision device ("say 1 if v > 0, say 0 if v < 0").]

The receiver, shown in Figure 7.7b, consists of two stages of demodulation. In the first stage, the received signal y(t) and a locally generated carrier are applied to a product modulator followed by a low-pass filter whose bandwidth is equal to that of the original message signal m(t). This stage of the demodulation process reverses the phase-shift keying applied to the transmitted signal. The second stage of demodulation performs spectrum despreading by multiplying the output of the first stage by a locally generated replica of the PN signal c(t). Interchanging the order of despreading and phase demodulation leads to the model of Figure 7.9.

[Figure 7.8: (a) Product signal m(t) = c(t)b(t). (b) Sinusoidal carrier. (c) DS/BPSK signal.]
We are permitted to do this because the spectrum spreading and the binary phase-shift keying are both linear operations; likewise for the phase demodulation and spectrum despreading. But for the interchange of operations to be feasible, it is important to synchronize the incoming data sequence and the PN sequence. The model of Figure 7.9 also includes representations of the channel and the receiver. In this model, it is assumed that the interference j(t) limits performance, so that the effect of channel noise may be ignored. Accordingly, the channel output is given by

y(t) = x(t) + j(t) = c(t)s(t) + j(t)

[Figure 7.9: Model of direct-sequence spread binary PSK system. In the transmitter, the binary PSK signal s(t) is multiplied by the PN signal c(t); the channel adds the interference j(t); in the receiver, y(t) is multiplied by c(t) and applied to a coherent detector with a local carrier to produce an estimate of b(t).]

where s(t) is the binary PSK signal, and c(t) is the PN signal. In the channel model included in Figure 7.9, the interfering signal is denoted by j(t). This notation is chosen purposely to be different from that used for the interference in Figure 7.5b. The channel model in Figure 7.9 is passband in spectral content, whereas that in Figure 7.5b is in baseband form.

In the receiver, the received signal y(t) is first multiplied by the PN signal c(t), yielding an output that equals the coherent detector input u(t). Thus,

u(t) = c(t)y(t)
     = c²(t)s(t) + c(t)j(t)   (7.13)
     = s(t) + c(t)j(t)

In the last line of Equation (7.13), we have noted that, by design, the PN signal c(t) satisfies the property described in Equation (7.10), reproduced here for convenience:

c²(t) = 1   for all t

Equation (7.13) shows that the coherent detector input u(t) consists of a binary PSK signal s(t) embedded in additive interference c(t)j(t).

Importance of PN sequences: In spread-spectrum systems, the receiver correlates a locally generated signal with the received signal.
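A discrete-time sketch of the despreading step in Eq. (7.13), with an illustrative ±1 chip sequence (not from the text): multiplying by c a second time recovers s exactly, while the interference is merely re-randomized by the chips.

```python
import numpy as np

# Despreading sketch for Eq. (7.13): with chips c[n] in {-1, +1}, c[n]**2 == 1,
# so u = c*y = c*(c*s + j) = s + c*j. Illustrative signals, not from the text.
rng = np.random.default_rng(7)
n_chips = 1000
c = rng.choice([-1.0, 1.0], size=n_chips)    # PN chip sequence, c**2 == 1
s = np.repeat(rng.choice([-1.0, 1.0], size=n_chips // 10), 10)  # 10 chips/bit
j = 0.5 * np.ones(n_chips)                   # narrowband (constant) interference

y = c * s + j            # channel output: spread signal plus interference
u = c * y                # despreading at the receiver

residual = u - s         # what is left besides the wanted signal
print(np.allclose(residual, c * j))   # True: u = s + c*j exactly
```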
Such spread-spectrum systems require a set of one or more "codes" or "sequences" such that: like random noise, the local sequence has a very low correlation with any other sequence in the set, or with the same sequence at a significantly different time offset, or with narrowband interference, or with thermal noise. Unlike random noise, it must be easy to generate exactly the same sequence at both the transmitter and the receiver, so the receiver's locally generated sequence has a very high correlation with the transmitted sequence.

Q. No. 2 (f): Multiuser detection (MUD)

The problem that multiuser detection seeks to overcome is multiple-access interference (MAI), aggravated by the near-far problem, in which the users' signals arrive at the receiver with widely unequal powers. One countermeasure is an adaptive power-control loop: the receiver measures each user's received signal power and feeds adjustment commands back to the transmitters, so that all signals arrive at roughly equal power.

A multiuser detector operates on the outputs of a bank of correlators, one matched to each user's spreading sequence. The receiver is assumed to know the spreading waveforms of all active users, from which the cross-correlation matrix R, whose entries ρ_ij are the correlations between signature waveforms i and j, can be formed.

The optimum MUD receiver is the maximum-likelihood (ML) detector, which jointly selects the symbols most likely to have been transmitted by all users. Although optimum, the ML receiver's complexity grows exponentially with the number of users, so it has found few practical applications; suboptimal multiuser detectors are used instead.
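The "very low correlation with a shifted version of itself" property can be illustrated with a maximal-length (m-) sequence from a 3-stage LFSR; the feedback taps below correspond to x³ + x + 1 and are chosen purely for illustration.

```python
# m-sequence sketch: a 3-stage LFSR with feedback polynomial x^3 + x + 1
# produces a length-7 maximal sequence; mapped to +/-1, its periodic
# autocorrelation is 7 at zero shift and -1 at every nonzero shift.
def lfsr_msequence(taps=(3, 1), state=0b001, nbits=3):
    period = (1 << nbits) - 1
    out = []
    for _ in range(period):
        out.append(state & 1)              # output bit
        fb = 0
        for t in taps:                     # XOR of the tapped stages
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (nbits - 1))
    return out

bits = lfsr_msequence()
chips = [1 if b else -1 for b in bits]
corr = [sum(chips[i] * chips[(i + k) % 7] for i in range(7)) for k in range(7)]
print(corr)   # [7, -1, -1, -1, -1, -1, -1]
```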
The decorrelating detector: the decorrelating multiuser detector removes the MAI by multiplying the vector of correlator outputs by the inverse of the cross-correlation matrix R. Writing the correlator outputs as y = Rb + n, the decorrelator forms

R⁻¹y = b + R⁻¹n

which forces the multiple-access interference to zero. The price paid is that the noise term is enhanced (noise enhancement). The decorrelator may be applied to detect a single user's symbol without knowledge of the received signal amplitudes, and it is near-far resistant.

OFDM: Orthogonal frequency-division multiplexing is a method of encoding digital data on multiple carrier frequencies: a single wideband channel is divided into many narrowband sub-carriers used in parallel. The sub-carrier spacing is chosen so that the sub-carriers are mutually orthogonal to one another; they can therefore overlap in frequency without interfering, which maximizes spectral efficiency without causing adjacent-channel interference. Over a frequency-selective channel, transmit power can additionally be allocated across sub-carriers by water-filling: the maximum power is assigned to the sub-carriers with the best channel gain, and the minimum power to those with the worst.

Q. No. 2 (g)

Autocorrelation Function of a Random Process

One of the most important characteristics of a random process is its autocorrelation function, which leads to the spectral information of the random process. The frequency content of a process depends on the rapidity of the amplitude change with time. This can be measured by correlating amplitudes at t₁ and t₁ + τ. The random process x(t) in Fig. 11.4a is a slowly varying process compared to the process y(t) in Fig. 11.4b. For x(t), the amplitudes at t₁ and t₁ + τ are similar (Fig. 11.4a), that is, have stronger correlation. On the other hand, for y(t), the amplitudes at t₁ and t₁ + τ have little resemblance (Fig. 11.4b), that is, have weaker correlation.
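This contrast can be illustrated numerically (with illustrative processes, not the ones in Fig. 11.4): a moving-average-smoothed noise process, which varies slowly, keeps a high correlation at a small lag, while the raw rapidly varying noise does not.

```python
import random
import statistics

# Lag-tau correlation of a slowly varying process (moving-average smoothed
# noise) vs. a rapidly varying one (raw noise). Illustrative parameters.
random.seed(2)
n, tau, win = 20_000, 5, 50
raw = [random.gauss(0, 1) for _ in range(n + win)]
slow = [statistics.fmean(raw[i:i + win]) for i in range(n)]   # slowly varying
fast = raw[:n]                                                # rapidly varying

def lag_corr(x, k):
    m = statistics.fmean(x)
    num = sum((x[i] - m) * (x[i + k] - m) for i in range(len(x) - k))
    den = sum((v - m) ** 2 for v in x)
    return num / den

r_slow = lag_corr(slow, tau)   # close to 1 - tau/win = 0.9
r_fast = lag_corr(fast, tau)   # close to 0
print(r_slow, r_fast)
```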
Recall that correlation is a measure of the similarity of two RVs. Hence, we can use correlation to measure the similarity of amplitudes at t₁ and t₂ = t₁ + τ. If the RVs x(t₁) and x(t₂) are denoted by x₁ and x₂, respectively, then for a real random process the autocorrelation function R_x(t₁, t₂) is defined as

R_x(t₁, t₂) = x(t₁)x(t₂)‾ = x₁x₂‾   (11.2a)

This is the correlation of RVs x(t₁) and x(t₂). It is computed by multiplying amplitudes at t₁ and t₂ of a sample function and then averaging this product over the ensemble. It can be seen that for a small τ, the product x₁x₂ will be positive for most sample functions of x(t), but the product y₁y₂ will be equally likely to be positive or negative. Hence, x₁x₂‾ will be larger than y₁y₂‾. Moreover, x₁ and x₂ will show correlation for considerably larger values of τ, whereas y₁ and y₂ will lose correlation quickly, even for small τ, as shown in Fig. 11.4.

Thus, R_x(t₁, t₂), the autocorrelation function of x(t), provides valuable information about the frequency content of the process. In fact, we shall show that the PSD of x(t) is the Fourier transform of its autocorrelation function, given by (for real processes)

R_x(t₁, t₂) = x₁x₂‾ = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x₁x₂ p_{x₁x₂}(x₁, x₂) dx₁ dx₂   (11.2b)

Hence, R_x(t₁, t₂) can be derived from the joint PDF of x₁ and x₂, which is the second-order PDF.

Example: Show that the random process

x(t) = A cos(ω_c t + Θ)

where Θ is an RV uniformly distributed in the range (0, 2π), is a wide-sense stationary process.

The ensemble (Fig. 11.5) consists of sinusoids of constant amplitude A and constant frequency ω_c, but the phase Θ is random. For any sample function, the phase is equally likely to have any value in the range (0, 2π). Because Θ is an RV uniformly distributed over the range (0, 2π), one can determine p_x(x, t) and, hence, x(t)‾, as in Eq. (11.1). For this particular case, however, x(t)‾ can be determined directly as follows:

x(t)‾ = A cos(ω_c t + Θ)‾ = A · cos(ω_c t + Θ)‾

Because cos(ω_c t + Θ) is a function of an RV Θ, we have [see Eq.
(10.57b)]

cos(ω_c t + Θ)‾ = ∫₀^{2π} cos(ω_c t + θ) p_Θ(θ) dθ

Because p_Θ(θ) = 1/2π over (0, 2π) and 0 outside this range,

cos(ω_c t + Θ)‾ = (1/2π) ∫₀^{2π} cos(ω_c t + θ) dθ = 0

Hence,

x(t)‾ = 0   (11.5a)

Thus, the ensemble mean of sample-function amplitudes at any instant t is zero.

The autocorrelation function R_x(t₁, t₂) for this process also can be determined directly from Eq. (11.2a),

R_x(t₁, t₂) = A cos(ω_c t₁ + Θ) · A cos(ω_c t₂ + Θ)‾
            = A² cos(ω_c t₁ + Θ) cos(ω_c t₂ + Θ)‾
            = (A²/2) { cos[ω_c(t₂ − t₁)] + cos[ω_c(t₂ + t₁) + 2Θ] }‾

The first term on the right-hand side contains no RV. Hence, the mean of cos[ω_c(t₂ − t₁)] is cos[ω_c(t₂ − t₁)] itself. The second term is a function of the RV Θ, and its mean is

cos[ω_c(t₂ + t₁) + 2Θ]‾ = (1/2π) ∫₀^{2π} cos[ω_c(t₂ + t₁) + 2θ] dθ = 0

Hence,

R_x(t₁, t₂) = (A²/2) cos[ω_c(t₂ − t₁)]   (11.5b)

or

R_x(τ) = (A²/2) cos(ω_c τ),  τ = t₂ − t₁   (11.5c)

From Eqs. (11.5a,b) it is clear that x(t) is a wide-sense stationary process.

[Figure 11.5: the ensemble of random-phase sinusoids; the accompanying handwritten numerical fragment is illegible in the scan.]

Average Information per Message: Entropy of a Source

Consider a memoryless source m emitting messages m₁, m₂, ..., m_n with probabilities P₁, P₂, ..., P_n, respectively (P₁ + P₂ + ··· + P_n = 1). A memoryless source implies that each message emitted is independent of the previous message(s). By the definition in Eq. (15.3) [or Eq. (15.4)], the information content of message m_i is I_i, given by

I_i = log₂(1/P_i) bits   (15.6)

The probability of occurrence of m_i is P_i. Hence, the mean, or average, information per message emitted by the source is given by Σ_{i=1}^{n} P_i I_i bits. The average information per message of a source m is called its entropy, denoted by H(m). Hence,

H(m) = Σ_{i=1}^{n} P_i I_i bits   (15.7a)
     = −Σ_{i=1}^{n} P_i log₂ P_i bits   (15.7b)

The entropy of a source is a function of the message probabilities. It is interesting to find the message probability distribution that yields the maximum entropy.
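The entropy definition in Eq. (15.7) is easy to compute directly; the probabilities below are illustrative, and the uniform case previews the maximum-entropy behavior discussed next.

```python
import math

# Entropy H(m) = -sum(P_i * log2(P_i)) in bits, per Eq. (15.7b).
def entropy(probs):
    assert abs(sum(probs) - 1.0) < 1e-9      # probabilities must sum to 1
    return -sum(p * math.log2(p) for p in probs if p > 0)

h_skewed = entropy([0.5, 0.25, 0.125, 0.125])   # 1.75 bits
h_uniform = entropy([0.25] * 4)                 # 2.0 bits: the maximum for n = 4
print(h_skewed, h_uniform)
```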
Because the entropy is a measure of uncertainty, the probability distribution that generates the maximum uncertainty will have the maximum entropy. On qualitative grounds, one expects entropy to be maximum when all the messages are equiprobable. We shall now show that this is indeed true.

Because H(m) is a function of P₁, P₂, ..., P_n, the maximum value of H(m) is found from the equation dH(m)/dP_i = 0 for i = 1, 2, ..., n, with the constraint that

P_n = 1 − (P₁ + P₂ + ··· + P_{n−1})   (15.8)

Because

H(m) = −Σ_{i=1}^{n} P_i log P_i   (15.9)

Consider a discrete memoryless source whose mathematical model is defined by Equations (9.1) and (9.2). The entropy H(𝒮) of such a source is bounded as follows:

0 ≤ H(𝒮) ≤ log₂ K   (9.10)

where K is the radix (number of symbols) of the alphabet 𝒮 of the source. Furthermore, we may make two statements:

1. H(𝒮) = 0, if and only if the probability p_k = 1 for some k, and the remaining probabilities in the set are all zero; this lower bound on entropy corresponds to no uncertainty.
2. H(𝒮) = log₂ K, if and only if p_k = 1/K for all k (i.e., all the symbols in the alphabet are equiprobable); this upper bound on entropy corresponds to maximum uncertainty.

Example: A zero-memory source emits six messages with probabilities 0.3, 0.25, 0.15, 0.12, 0.1, and 0.08. Find the 4-ary (quaternary) Huffman code. Determine its average word length, the efficiency, and the redundancy.

In this case, we need to add one dummy message to satisfy the required condition of r + k(r − 1) messages and proceed as usual. The Huffman code is found in Table 15.3. The length L of this code is

L = 0.3(1) + 0.25(1) + 0.15(1) + 0.12(2) + 0.1(2) + 0.08(2) + 0(2) = 1.3  4-ary digits

Also,

H₄(m) = −Σ_{i=1}^{6} P_i log₄ P_i = 1.209  4-ary units

The code efficiency η is given by

η = H₄(m)/L = 1.209/1.3 = 0.93

The redundancy is γ = 1 − η = 0.07.

Table 15.3 (partly reconstructed; codeword assignments consistent with the legible entries and the stated lengths):

Message   Probability   Code
m₁        0.30          0
m₂        0.25          2
m₃        0.15          3
m₄        0.12          10
m₅        0.10          11
m₆        0.08          12
(dummy)   0.00          13

Q. No.
4

The Hamming distance d(x, y) between two vectors x, y is the number of coefficients in which they differ, e.g., d(00111, 11001) = 4 and d(0122, 1220) = 3.

The Hamming bound is a limit on the parameters of an arbitrary block code; it is also known as the sphere-packing bound or the volume bound, from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code which attains the Hamming bound is said to be a perfect code.

Next, in order to find a relationship between n and k, we observe that 2ⁿ vertices, or words, are available for 2ᵏ data words, and 2ⁿ − 2ᵏ are redundant vertices. How many vertices, or words, can lie within a Hamming sphere of radius t? The number of sequences (of n digits) that differ from a given sequence by j digits is the number of combinations of n things taken j at a time, given by C(n, j) [see Eq. (10.16)]. Hence, the number of ways in which up to t errors can occur is given by Σ_{j=0}^{t} C(n, j). Thus, for each code word, we must leave Σ_{j=0}^{t} C(n, j) words unused. Because we have 2ᵏ code words, we must leave 2ᵏ Σ_{j=0}^{t} C(n, j) words unused in total. But the total number of words, or vertices, available is 2ⁿ. Hence,

2ⁿ ≥ 2ᵏ Σ_{j=0}^{t} C(n, j)

that is,

2^{n−k} ≥ Σ_{j=0}^{t} C(n, j)   (16.2a)

Observe that n − k = m is the number of check digits. Hence, Eq. (16.2a) can be expressed as

2^m ≥ Σ_{j=0}^{t} C(n, j)   (16.2b)

This is known as the Hamming bound. It should also be remembered that the Hamming bound is a necessary but not a sufficient condition for the existence of a t-error-correcting (n, k) code.

[Handwritten worked example: the code word is obtained as the product of the data word and the generator matrix, c = dG; the scanned matrix entries are illegible.]

Q. No. 5

The advantage of cyclic codes over most other types of codes is that they are easy to encode.
Furthermore, cyclic codes possess a well-defined mathematical structure, which has led to the development of very efficient decoding schemes for them.

A binary code is said to be a cyclic code if it exhibits two fundamental properties:
1. Linearity property: the sum of any two code words in the code is also a code word.
2. Cyclic property: any cyclic shift of a code word in the code is also a code word.

[Handwritten worked example: a systematic (7, 4) cyclic code is constructed from a generator polynomial g(x); the generator matrix G and parity-check matrix H are written out, a syndrome decoding table is built for the single-error patterns, and a received word is corrected using its syndrome. The scanned digits are largely illegible.]
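Since the scanned example is illegible, here is a sketch of the same procedure under an assumed generator polynomial g(x) = x³ + x + 1 (a standard choice for a (7, 4) cyclic Hamming code; the document's actual g(x) may differ): systematic encoding by polynomial division, and single-error correction via a syndrome table.

```python
# Systematic (7,4) cyclic code sketch with ASSUMED g(x) = x^3 + x + 1.
# Polynomials are stored as integers: bit i holds the coefficient of x^i.
# Encode: c(x) = x^3*d(x) + remainder(x^3*d(x) / g(x)); decode: syndrome lookup.
G = 0b1011          # g(x) = 1 + x + x^3

def poly_mod(a, g=G):
    """Remainder of a(x) divided by g(x) over GF(2)."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def encode(d):                       # d: 4-bit data word (integer)
    return (d << 3) | poly_mod(d << 3)

# Syndrome table: syndrome of each single-bit error pattern e(x) = x^i.
syndromes = {poly_mod(1 << i): 1 << i for i in range(7)}

def correct(r):                      # r: received 7-bit word
    s = poly_mod(r)
    return r ^ syndromes[s] if s else r

c = encode(0b1011)                   # encode some data word
r = c ^ (1 << 4)                     # the channel flips one bit
print(bin(c), bin(correct(r)), correct(r) == c)   # 0b1011000 0b1011000 True
```

Because g(x) here is primitive, the seven single-bit error patterns produce seven distinct nonzero syndromes, so every single error is correctable.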
