
UNIT I SIGNALS AND ITS REPRESENTATION

Review Of Fourier Transform:


Any periodic function of interest in physics can be expressed as a series in sines and cosines; we have already seen that the quantum wave function of a particle in a box is precisely of this form. The important question in practice is, for an arbitrary wave function, how good an approximation is obtained if we stop summing the series after N terms. We establish here that the partial sum after N terms, $f_N(\theta)$, can be written as a convolution of the original function with the kernel $\delta_N(\theta)$, that is,

$$f_N(\theta) = \int_{-\pi}^{\pi} \delta_N(\theta - \theta')\, f(\theta')\, d\theta', \qquad \delta_N(\theta) = \frac{1}{2\pi}\,\frac{\sin\!\left(\left(N + \tfrac{1}{2}\right)\theta\right)}{\sin\left(\theta/2\right)}.$$

The structure of the function $\delta_N(\theta)$, when put together with the function $f(\theta)$, gives a good intuitive guide to how good an approximation the sum over N terms is going to be for a given function. In particular, it turns out that step discontinuities are never handled perfectly, no matter how many terms are included. Fortunately, true step discontinuities never occur in physics, but this is a warning that it is necessary to sum up to some N where the sines and cosines oscillate substantially more rapidly than any sudden change in the function being represented. We go on to the Fourier transform, in which a function on the infinite line is expressed as an integral over a continuum of sines and cosines (or, equivalently, exponentials $e^{ikx}$). Arguments analogous to those that led to $\delta_N(\theta)$ now give a function $\delta(x)$ such that

$$f(x) = \int_{-\infty}^{\infty} \delta(x - x')\, f(x')\, dx'.$$
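A quick way to see where this kernel comes from (a sketch of the standard calculation, assuming the complex-exponential form of the truncated series):

$$\delta_N(\theta) \;=\; \frac{1}{2\pi}\sum_{n=-N}^{N} e^{in\theta} \;=\; \frac{1}{2\pi}\, e^{-iN\theta}\,\frac{e^{i(2N+1)\theta}-1}{e^{i\theta}-1} \;=\; \frac{1}{2\pi}\,\frac{\sin\!\left(\left(N+\tfrac{1}{2}\right)\theta\right)}{\sin\left(\theta/2\right)},$$

so convolving $f$ with $\delta_N$ simply reassembles the first $2N+1$ exponential (equivalently, sine and cosine) terms of its series.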

Confronted with this, one might well wonder what is the point of a function $\delta(x)$ which, on convolution with $f(x)$, gives back the same function $f(x)$. The relevance of $\delta(x)$ will become evident later in the course, when states of a quantum particle are represented by wave functions on the infinite line, like $f(x)$, and operations on them involve integral operators similar to the convolution above. Working with operations on these functions is the continuum generalization of matrices acting on vectors in a finite-dimensional space, and $\delta(x - x')$ is the infinite-dimensional representation of the unit matrix. Just as in matrix algebra the eigenstates of the unit matrix are a set of vectors that span the space, and the unit matrix elements determine the set of dot products of these basis vectors, the delta function determines the generalized inner product of a continuum basis of states. It plays an essential role in the standard formalism for continuum states, and you need to be familiar with it.

Fourier Series
Any reasonably smooth real function $f(\theta)$ defined in the interval $-\pi < \theta \le \pi$ can be expanded in a Fourier series,

$$f(\theta) = \frac{A_0}{2} + \sum_{n=1}^{\infty}\left(A_n \cos n\theta + B_n \sin n\theta\right),$$

where the coefficients can be found using the orthogonality condition,

$$\int_{-\pi}^{\pi} \cos m\theta \, \cos n\theta \, d\theta = \pi\,\delta_{mn} \quad (m, n \ge 1),$$

and the same condition for the sines, to give:

$$A_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\theta)\cos n\theta \, d\theta, \qquad B_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(\theta)\sin n\theta \, d\theta.$$

Note that for an even function only the $A_n$ are nonzero, and for an odd function only the $B_n$ are nonzero.
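To make the coefficient formulas and the partial sum concrete, here is a small numerical sketch (the function names and the square-wave test function are purely illustrative): it approximates the integrals for $A_n$ and $B_n$ and builds the N-term partial sum, whose overshoot near the step is the Gibbs phenomenon mentioned above.

```python
import numpy as np

def fourier_coefficients(f, n_max, samples=20000):
    """Numerically approximate A_n and B_n for f on (-pi, pi] with a simple Riemann sum."""
    theta = np.linspace(-np.pi, np.pi, samples, endpoint=False) + np.pi / samples
    d_theta = 2 * np.pi / samples
    A = [np.sum(f(theta) * np.cos(n * theta)) * d_theta / np.pi for n in range(n_max + 1)]
    B = [np.sum(f(theta) * np.sin(n * theta)) * d_theta / np.pi for n in range(n_max + 1)]
    return A, B

def partial_sum(A, B, theta, N):
    """f_N(theta) = A_0/2 + sum_{n=1}^N (A_n cos(n theta) + B_n sin(n theta))."""
    total = np.full_like(theta, A[0] / 2, dtype=float)
    for n in range(1, N + 1):
        total += A[n] * np.cos(n * theta) + B[n] * np.sin(n * theta)
    return total

# Odd test function: a square wave, so only the B_n should be non-negligible.
square = lambda t: np.sign(np.sin(t))
A, B = fourier_coefficients(square, 50)
theta = np.linspace(-np.pi, np.pi, 2001)
f_50 = partial_sum(A, B, theta, 50)
print("B_1 (analytically 4/pi ~ 1.273):", B[1])
print("overshoot near the step:", f_50.max())   # ~1.09: the Gibbs phenomenon
```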

Shannon's Theorem

Shannon's Theorem gives an upper bound to the capacity of a link, in bits per second (bps), as a function of the available bandwidth and the signal-to-noise ratio of the link. The theorem can be stated as:

C = B * log2(1 + S/N)

where C is the achievable channel capacity, B is the bandwidth of the line, S is the average signal power and N is the average noise power. The signal-to-noise ratio (S/N) is usually expressed in decibels (dB), given by the formula 10 * log10(S/N); so, for example, a signal-to-noise ratio of 1000 is commonly expressed as 10 * log10(1000) = 30 dB. Plotting C/B against S/N (in dB) shows that the achievable bits per second per hertz grows only logarithmically with the linear ratio S/N, roughly one extra bit/s per Hz for every 3 dB of additional signal-to-noise ratio.
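For instance, a few lines of arithmetic (an illustrative sketch; the bandwidth and S/N values are only example numbers) show how capacity follows from the formula:

```python
import math

def channel_capacity(bandwidth_hz, snr_db):
    """Shannon capacity C = B * log2(1 + S/N), with S/N supplied in dB."""
    snr_linear = 10 ** (snr_db / 10)          # convert dB back to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 3.1 kHz telephone-style channel at 30 dB signal-to-noise ratio.
print(channel_capacity(3100, 30))   # roughly 30,900 bps
```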

Shannon-Hartley Theorem

The Shannon-Hartley Theorem is a theorem in information theory that establishes an upper bound for the carrying capacity of a communications channel, based on the bandwidth of the channel and the noise in the channel. To understand the theorem, you need to understand three things: 1. the effect of noise on the total number of unique signals that can be created; 2. the relationship between the number of unique signals and the number of bits that can be sent per signal; 3. the limiting factors on the number of symbols that can be sent per unit of time (e.g., per second).

Huffman coding

In computer science and information theory, Huffman coding is an entropy encoding algorithm used for lossless data compression. The term refers to the use of a variable-length code table for encoding a source symbol (such as a character in a file) where the variable-length code table has been derived in a particular way based on the estimated probability of occurrence for each possible value of the source symbol. It was developed by David A. Huffman while he was a Ph.D. student at MIT, and published in the 1952 paper "A Method for the Construction of Minimum-Redundancy Codes". Huffman coding uses a specific method for choosing the representation for each symbol, resulting in a prefix code (sometimes called a "prefix-free code": the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol) that expresses the most common source symbols using shorter strings of bits than are used for less common source symbols. Huffman was able to design the most efficient compression method of this type: no other mapping of individual source symbols to unique strings of bits will produce a smaller average output size when the actual symbol frequencies agree with those used to create the code. A method was later found to design a Huffman code in linear time if the input probabilities (also known as weights) are sorted.
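A minimal sketch of the classic greedy construction (the symbol set and probabilities below are illustrative, not from the text): repeatedly merge the two least probable subtrees until a single tree remains, then read each code off the root-to-leaf path.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a Huffman code for {symbol: probability_or_count}; returns {symbol: bitstring}."""
    tie = count()  # tie-breaker so heapq never has to compare the dict payloads
    heap = [(p, next(tie), {sym: ""}) for sym, p in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, codes1 = heapq.heappop(heap)   # the two least probable subtrees
        p2, _, codes2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in codes1.items()}
        merged.update({s: "1" + c for s, c in codes2.items()})
        heapq.heappush(heap, (p1 + p2, next(tie), merged))
    return heap[0][2]

codes = huffman_code({"a": 0.45, "b": 0.25, "c": 0.15, "d": 0.10, "e": 0.05})
print(codes)  # more probable symbols receive shorter bit strings
```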

Joint entropy

Joint entropy is a measure of the uncertainty associated with a set of variables. The joint entropy of two variables $X$ and $Y$ is defined as

$$H(X, Y) = -\sum_{x}\sum_{y} P(x, y)\,\log_2 P(x, y),$$

where $x$ and $y$ are particular values of $X$ and $Y$, respectively, $P(x, y)$ is the probability of these values occurring together, and $P(x, y)\log_2 P(x, y)$ is defined to be 0 if $P(x, y) = 0$.

For more than two variables $X_1, \ldots, X_n$ this expands to

$$H(X_1, \ldots, X_n) = -\sum_{x_1}\cdots\sum_{x_n} P(x_1, \ldots, x_n)\,\log_2 P(x_1, \ldots, x_n),$$

where $x_1, \ldots, x_n$ are particular values of $X_1, \ldots, X_n$, respectively, $P(x_1, \ldots, x_n)$ is the probability of these values occurring together, and $P(x_1, \ldots, x_n)\log_2 P(x_1, \ldots, x_n)$ is defined to be 0 if $P(x_1, \ldots, x_n) = 0$.
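A few lines suffice to evaluate this sum directly from a joint probability table (the table below is a made-up example for illustration):

```python
import math

def joint_entropy(joint):
    """H(X, Y) = -sum p(x, y) * log2 p(x, y) over a dict {(x, y): probability}."""
    return -sum(p * math.log2(p) for p in joint.values() if p > 0)

# Hypothetical joint distribution of two binary variables.
p_xy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}
print(joint_entropy(p_xy))  # 1.75 bits
```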

Conditional entropy

In information theory, the conditional entropy (or equivocation) quantifies the remaining entropy (i.e. uncertainty) of a random variable $Y$ given that the value of another random variable $X$ is known. It is referred to as the entropy of $Y$ conditional on $X$, and is written $H(Y \mid X)$. Like other entropies, the conditional entropy is measured in bits, nats, or bans.

More precisely, if $H(Y \mid X = x)$ is the entropy of the variable $Y$ conditional on the variable $X$ taking a certain value $x$, then $H(Y \mid X)$ is the result of averaging $H(Y \mid X = x)$ over all possible values $x$ that $X$ may take.

Given discrete random variables $X$ with support $\mathcal{X}$ and $Y$ with support $\mathcal{Y}$, the conditional entropy of $Y$ given $X$ is defined as:

$$H(Y \mid X) = \sum_{x \in \mathcal{X}} P(x)\, H(Y \mid X = x) = -\sum_{x \in \mathcal{X}}\sum_{y \in \mathcal{Y}} P(x, y)\,\log_2 P(y \mid x).$$
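One convenient way to compute this is via the standard chain-rule identity H(Y|X) = H(X, Y) − H(X) (not stated above, but equivalent to the definition); the sketch below uses it, again with a made-up joint distribution:

```python
import math
from collections import defaultdict

def entropy(dist):
    """Shannon entropy in bits of a dict {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def conditional_entropy(joint):
    """H(Y | X) = H(X, Y) - H(X) for a dict {(x, y): probability} (chain rule)."""
    marginal_x = defaultdict(float)
    for (x, _y), p in joint.items():
        marginal_x[x] += p
    return entropy(joint) - entropy(marginal_x)

p_xy = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}
print(conditional_entropy(p_xy))  # ~0.9387 bits for this hypothetical table
```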

Convolutional code

In telecommunication, a convolutional code is a type of error-correcting code in which each m-bit information symbol (each m-bit string) to be encoded is transformed into an n-bit symbol, where m/n is the code rate (n ≥ m) and the transformation is a function of the last k information symbols, where k is the constraint length of the code.

Use of convolutional codes: Convolutional codes are used extensively in numerous applications in order to achieve reliable data transfer, including digital video, radio, mobile communication, and satellite communication. These codes are often implemented in concatenation with a hard-decision code, particularly Reed-Solomon. Prior to turbo codes, such constructions were the most efficient, coming closest to the Shannon limit.
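A minimal encoder sketch follows, assuming the common rate-1/2, constraint-length-3 code with octal generators 7 and 5 (no tail bits are appended; this is an illustration, not a complete transmitter):

```python
def convolutional_encode(bits, generators=((1, 1, 1), (1, 0, 1))):
    """Rate 1/(number of generators) convolutional encoder.

    Each generator is a tuple of taps over the current input bit and the
    previous K-1 bits (here K = 3, the classic octal 7,5 pair).
    """
    k = len(generators[0])
    state = [0] * (k - 1)              # shift register holding the last K-1 input bits
    out = []
    for b in bits:
        window = [b] + state           # current bit followed by past bits
        for g in generators:
            out.append(sum(t * w for t, w in zip(g, window)) % 2)
        state = [b] + state[:-1]       # shift the register
    return out

# Rate 1/2: twice as many output bits as input bits.
print(convolutional_encode([1, 0, 1, 1]))   # [1, 1, 1, 0, 0, 0, 0, 1]
```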

Cyclic code

In coding theory, cyclic codes are linear block error-correcting codes that have convenient algebraic structures for efficient error detection and correction.

Definition: Let C be a linear code over a finite field of block length n. C is called a cyclic code if, for every codeword c = (c1, ..., cn) from C, the word (cn, c1, ..., cn-1) obtained by a cyclic right shift of components is again a codeword. The same goes for left shifts. One right shift is equal to n − 1 left shifts, and vice versa. Therefore the linear code is cyclic precisely when it is invariant under all cyclic shifts. Cyclic codes have some additional structural constraint on the codes. They are based on Galois fields, and because of their structural properties they are very useful for error control. Their structure is strongly related to Galois fields, because of which the encoding and decoding algorithms for cyclic codes are computationally efficient.
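As a small illustration of the definition (using the binary length-3 even-weight code as a hypothetical example), the cyclic-shift property can be checked directly:

```python
def is_cyclic(codewords):
    """Check whether a set of codewords is closed under a single cyclic right shift
    (and hence, by repetition, under all cyclic shifts)."""
    code = {tuple(c) for c in codewords}
    return all(tuple(c[-1:] + c[:-1]) in code for c in code)

# The length-3 binary even-weight (single parity check) code is cyclic ...
print(is_cyclic([(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]))   # True
# ... while this arbitrary pair of words is not closed under shifts.
print(is_cyclic([(0, 0, 0), (0, 1, 1)]))                          # False
```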

Channel code

In digital communications, a channel code is a broadly used term mostly referring to the forward error correction code and bit interleaving in communication and storage, where the communication media or storage media is viewed as a channel. The channel code is used to protect data sent over it for storage or retrieval even in the presence of noise (errors).

Block code

In coding theory, block codes refer to the large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples of block codes, many of which have a wide range of practical applications. The main reason why the concept of block codes is so useful is that it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way. Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes are Reed-Solomon codes, Hamming codes, Hadamard codes, Expander codes, Golay codes, and Reed-Muller codes. These examples also belong to the class of linear codes, and hence they are called linear block codes.

Minimum distance

In coding theory, minimum distance is often calculated using the Hamming distance of two codewords. It can also be calculated in other ways. For example, the minimum distance of a linear code can be calculated by finding the smallest number of linearly dependent columns in its parity check matrix.

Hamming distance

The Hamming distance between two strings of equal length is the number of positions at which the corresponding symbols are different. Put another way, it measures the minimum number of substitutions required to change one string into the other, or the number of errors that transformed one string into the other.
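Both notions are easy to compute by brute force for a small code; the four codewords below are made up purely for illustration:

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of positions at which two equal-length strings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def minimum_distance(codewords):
    """Smallest Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(a, b) for a, b in combinations(codewords, 2))

print(hamming_distance("karolin", "kathrin"))                            # 3
print(minimum_distance(["0000000", "1101001", "0110101", "1011100"]))    # 4 for this toy set
```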

Forward error correction(FEC)

In telecommunication, information theory, and coding theory, forward error correction (FEC) or channel coding[1] is a technique used for controlling errors in data transmission over unreliable or noisy communication channels. The central idea is that the sender encodes the message in a redundant way by using an error-correcting code (ECC). The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming (7,4) code. The redundancy allows the receiver to detect a limited number of errors that may occur anywhere in the message, and often to correct these errors without retransmission. FEC gives the receiver the ability to correct errors without needing a reverse channel to request retransmission of data, but at the cost of a fixed, higher forward channel bandwidth. FEC is therefore applied in situations where retransmissions are costly or impossible, such as when broadcasting to multiple receivers in multicast. FEC information is usually added to mass storage devices to enable recovery of corrupted data.

FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC coders can also generate a bit-error rate (BER) signal which can be used as feedback to fine-tune the analog receiving electronics.
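As a concrete illustration of adding redundancy, here is a sketch of a Hamming (7,4) encoder using one common choice of parity equations and bit ordering (conventions vary between presentations):

```python
def hamming_7_4_encode(d):
    """Encode 4 data bits into 7 bits: output order p1 p2 d1 p3 d2 d3 d4,
    with each parity bit chosen so its covered positions have even parity."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

# Any single bit flipped in the 7-bit word can later be located and corrected.
print(hamming_7_4_encode([1, 0, 1, 1]))   # [0, 1, 1, 0, 0, 1, 1]
```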

Automatic repeat request

Automatic Repeat reQuest (ARQ), also known as Automatic Repeat Query, is an error-control method for data transmission that uses acknowledgements (messages sent by the receiver indicating that it has correctly received a data frame or packet) and timeouts (specified periods of time allowed to elapse before an acknowledgment is to be received) to achieve reliable data transmission over an unreliable service. If the sender does not receive an acknowledgment before the timeout, it usually re-transmits the frame/packet until the sender receives an acknowledgment or exceeds a predefined number of re-transmissions. The types of ARQ protocols include

Stop-and-wait ARQ, Go-Back-N ARQ, and Selective Repeat ARQ.

These protocols reside in the Data Link or Transport Layers of the OSI model.
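The following toy simulation (purely illustrative; the loss probability, retry limit, and frame contents are invented) sketches the stop-and-wait variant: send one frame, wait for its acknowledgement, and re-transmit when the timeout would have expired.

```python
import random

def stop_and_wait(frames, loss_probability=0.3, max_retries=5, seed=1):
    """Toy stop-and-wait ARQ over a channel that randomly drops frames and ACKs."""
    rng = random.Random(seed)
    received = []                             # what the receiver has accepted, in order
    for seq, frame in enumerate(frames):
        for _ in range(max_retries + 1):
            frame_lost = rng.random() < loss_probability
            ack_lost = rng.random() < loss_probability
            if not frame_lost:
                if len(received) == seq:      # new frame (re-transmitted copies are duplicates)
                    received.append(frame)
                if not ack_lost:
                    break                     # ACK made it back: move on to the next frame
            # otherwise the sender's timeout fires and the loop re-transmits
        else:
            raise RuntimeError(f"gave up on frame {seq} after {max_retries} retries")
    return received

print(stop_and_wait(["F0", "F1", "F2"]))      # ['F0', 'F1', 'F2'] unless a frame exhausts its retries
```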

Shannon-Fano coding

In the field of data compression, Shannon-Fano coding, named after Claude Shannon and Robert Fano, is a technique for constructing a prefix code based on a set of symbols and their probabilities (estimated or measured). It is suboptimal in the sense that it does not achieve the lowest possible expected code word length like Huffman coding; however, unlike Huffman coding, it does guarantee that all code word lengths are within one bit of their theoretical ideal. The technique was proposed in Shannon's "A Mathematical Theory of Communication", his 1948 article introducing the field of information theory. The method was attributed to Fano, who later published it as a technical report.[1] Shannon-Fano coding should not be confused with Shannon coding, the coding method used to prove Shannon's noiseless coding theorem, or with Shannon-Fano-Elias coding (also known as Elias coding), the precursor to arithmetic coding.

In Shannon-Fano coding, the symbols are arranged in order from most probable to least probable, and then divided into two sets whose total probabilities are as close as possible to being equal. All symbols then have the first digits of their codes assigned; symbols in the first set receive "0" and symbols in the second set receive "1". As long as any sets with more than one member remain, the same process is repeated on those sets, to determine successive digits of their codes. When a set has been reduced to one symbol, of course, this means the symbol's code is complete and will not form the prefix of any other symbol's code.

The algorithm works, and it produces fairly efficient variable-length encodings; when the two smaller sets produced by a partitioning are in fact of equal probability, the one bit of information used to distinguish them is used most efficiently. Unfortunately, Shannon-Fano does not always produce optimal prefix codes; the set of probabilities {0.35, 0.17, 0.17, 0.16, 0.15} is an example of one that will be assigned non-optimal codes by Shannon-Fano coding. For this reason, Shannon-Fano is almost never used; Huffman coding is almost as computationally simple and produces prefix codes that always achieve the lowest expected code word length, under the constraint that each symbol is represented by a code formed of an integral number of bits. This constraint is often unneeded, since the codes will be packed end-to-end in long sequences. If we consider groups of codes at a time, symbol-by-symbol Huffman coding is only optimal if the probabilities of the symbols are independent and each is some power of one half, i.e. of the form 1/2^k. In most situations, arithmetic coding can produce greater overall compression than either Huffman or Shannon-Fano, since it can encode in fractional numbers of bits which more closely approximate the actual information content of the symbol. However, arithmetic coding has not superseded Huffman the way that Huffman supersedes Shannon-Fano, both because arithmetic coding is more computationally expensive and because it is covered by multiple patents.
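A recursive sketch of the top-down splitting procedure just described (the probability list is the counterexample quoted above; function names are illustrative):

```python
def shannon_fano(symbols):
    """Assign Shannon-Fano codes to [(symbol, probability), ...] listed in
    descending order of probability; returns {symbol: bitstring}."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total = sum(p for _, p in symbols)
    # Pick the split point that makes the two groups' total probabilities most nearly equal.
    split, best_diff = 1, float("inf")
    for i in range(1, len(symbols)):
        left = sum(p for _, p in symbols[:i])
        diff = abs(total - 2 * left)
        if diff < best_diff:
            split, best_diff = i, diff
    codes = {s: "0" + c for s, c in shannon_fano(symbols[:split]).items()}
    codes.update({s: "1" + c for s, c in shannon_fano(symbols[split:]).items()})
    return codes

# The probability set from the text for which Shannon-Fano gives a non-optimal code:
probs = [("a", 0.35), ("b", 0.17), ("c", 0.17), ("d", 0.16), ("e", 0.15)]
print(shannon_fano(probs))  # a:00 b:01 c:10 d:110 e:111 -> average 2.31 bits/symbol
```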

Signal-to-noise ratio

The signal-to-noise ratio, the bandwidth, and the channel capacity of a communication channel are connected by the Shannon-Hartley theorem. Signal-to-noise ratio is defined as the power ratio between a signal (meaningful information) and the background noise (unwanted signal):

$$\mathrm{SNR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}},$$

where P is average power. Both signal and noise power must be measured at the same or equivalent points in a system, and within the same system bandwidth. If the signal and the noise are measured across the same impedance, then the SNR can be obtained by calculating the square of the amplitude ratio:

$$\mathrm{SNR} = \frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}} = \left(\frac{A_{\mathrm{signal}}}{A_{\mathrm{noise}}}\right)^{2},$$

where A is root mean square (RMS) amplitude (for example, RMS voltage). Because many signals have a very wide dynamic range, SNRs are often expressed using the logarithmic decibel scale. In decibels, the SNR is defined as

$$\mathrm{SNR_{dB}} = 10\,\log_{10}\!\left(\frac{P_{\mathrm{signal}}}{P_{\mathrm{noise}}}\right),$$

which may equivalently be written using amplitude ratios as

$$\mathrm{SNR_{dB}} = 20\,\log_{10}\!\left(\frac{A_{\mathrm{signal}}}{A_{\mathrm{noise}}}\right).$$
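For example (the ratios below are invented for illustration), converting a measured power or amplitude ratio to decibels shows that the two expressions agree:

```python
import math

def snr_db_from_power(p_signal, p_noise):
    """SNR in dB from average signal and noise power: 10 * log10(Ps / Pn)."""
    return 10 * math.log10(p_signal / p_noise)

def snr_db_from_amplitude(a_signal, a_noise):
    """SNR in dB from RMS amplitudes: 20 * log10(As / An)."""
    return 20 * math.log10(a_signal / a_noise)

print(snr_db_from_power(1000, 1))                 # 30.0 dB
print(snr_db_from_amplitude(math.sqrt(1000), 1))  # 30.0 dB: the same ratio via amplitudes
```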

The concepts of signal-to-noise ratio and dynamic range are closely related. Dynamic range measures the ratio between the strongest un-distorted signal on a channel and the minimum discernible signal, which for most purposes is the noise level. SNR measures the ratio between an arbitrary signal level (not necessarily the most powerful signal possible) and noise. Measuring signal-to-noise ratios requires the selection of a representative or reference signal. In audio engineering, the reference signal is usually a sine wave at a standardized nominal or alignment level, such as 1 kHz at +4 dBu (1.228 V RMS). SNR is usually taken to indicate an average signal-to-noise ratio, as it is possible that (near) instantaneous signal-to-noise ratios will be considerably different. The concept can be understood as normalizing the noise level to 1 (0 dB) and measuring how far the signal 'stands out'.

Prefix code

A prefix code is a type of code system (typically a variable-length code) distinguished by its possession of the "prefix property", which states that there is no valid code word in the system that is a prefix (start) of any other valid code word in the set. For example, a code with code words {9, 59, 55} has the prefix property; a code consisting of {9, 5, 59, 55} does not, because "5" is a prefix of both "59" and "55". With a prefix code, a receiver can identify each word without requiring a special marker between words. Prefix codes are also known as prefix-free codes, prefix condition codes and instantaneous codes. Although Huffman coding is just one of many algorithms for deriving prefix codes, prefix codes are also widely referred to as "Huffman codes", even when the code was not produced by a Huffman algorithm. The term comma-free code is sometimes also applied as a synonym for prefix-free codes,[1][2] but in most mathematical books and articles (e.g. [3][4]) it is used to mean self-synchronizing codes, a subclass of prefix codes.

Using prefix codes, a message can be transmitted as a sequence of concatenated code words, without any out-of-band markers to frame the words in the message. The recipient can decode the message unambiguously, by repeatedly finding and removing prefixes that form valid code words. This is not possible with codes that lack the prefix property, for example {0, 1, 10, 11}: a receiver reading a "1" at the start of a code word would not know whether that was the complete code word "1", or merely the prefix of the code word "10" or "11".
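A small sketch of the greedy decode-by-prefix procedure described above (the codebook is a hypothetical prefix code chosen for illustration):

```python
def decode_prefix_code(bits, code_to_symbol):
    """Decode a concatenated message by repeatedly stripping valid code words
    from the front; this only works when the code has the prefix property."""
    out, buffer = [], ""
    for bit in bits:
        buffer += bit
        if buffer in code_to_symbol:
            out.append(code_to_symbol[buffer])
            buffer = ""
    if buffer:
        raise ValueError("trailing bits do not form a complete code word")
    return out

# Hypothetical prefix code: A -> 0, B -> 10, C -> 11.
codebook = {"0": "A", "10": "B", "11": "C"}
print(decode_prefix_code("0110100", codebook))   # ['A', 'C', 'A', 'B', 'A']
```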
