
A Training report On

Submitted in partial fulfillment of the B.Tech final year

Training Incharge
Mr. Ravinder Naithani

Under the guidance of

Mr. Ravinder Naithani

Submitted by:
Sandeep Verma (0806531059)

Doordarshan, Prasar Bharati Broadcasting Corporation of India (PBBCI), 24 Ashok Marg, Hazrat Ganj, Lucknow - 226001

Contents

1) History
2) Overview / Present Status
3) Base Band Communication
   3.1) Modulation
4) Studio Centre
5) Mixing
6) Amplifier
7) Distortion
8) Headphones / Microphone
9) FM Transmitter
10) Hard Disk Based Recording System
11) Resolution
12) CCVS
13) TV Transmitter
14) Antenna System
15) Satellite Communication
16) DTH
17) Earthing
18) Co-Channel Interference
19) Layer Communication
20) Protocol Design

At the very beginning I would like to thank DOORDARSHAN KENDRA, LUCKNOW and all the employees of DOORDARSHAN KENDRA, LUCKNOW, by whose active support I was able to complete this summer vocational training. I am highly grateful to SHRI R. NAITHANI for rendering his valuable guidance and helping me to know more about the facilities at DOORDARSHAN KENDRA, LUCKNOW, and also in preparing this report.

I express my sincere thanks to Mr. Ravinder Naithani, Assistant Station Engineer, DOORDARSHAN KENDRA LUCKNOW, for his valuable guidance, without which it would have been difficult for me to complete my training. I also express my gratitude to Mr. Rajeh Kumar, who helped me a lot in understanding the various processes and concepts involved. It was a great experience working in the DD Kendra and learning from such experienced engineers with hands-on knowledge of the subject.

I am also very thankful to Sh. V.B. Patel, Sh. S.N. Yadav, Sh. Tarun Saxena, Sh. Manoj Gupta, Sh. D.P. Singh, Sh. Soman Ghosh, Sh. Brijesh Srivastava, Sh. Harsh Ojha, Smt. Anju Srivastava, Sh. G.D. Mishra, Sh. N.K. Bhatt, Sh. A.K. Dubey, Smt. Kaveri Basu, Sh. Rohit Bhatt, Sh. S.P. Kanchan, Sh. Kamal Patel, Sh. Sumanto Gupta, Sh. Raju Bhatnagar, Smt. Ira Upreti, Smt. Meena Verma, Sh. Sudhir Nigam and Sh. P.P. Singh for providing their immense help and suggestions.


This is to certify that SANDEEP VERMA, S/O Shri SRI RAM VERMA, B.Tech 3rd year (ELECTRONICS AND COMMUNICATION) of B.S.A. COLLEGE OF ENGINEERING AND TECHNOLOGY, MATHURA (U.P.), has completed his FOUR weeks of summer training under my guidance from JUNE 23rd 2011 TO JULY 21st 2011 at DOORDARSHAN KENDRA, 24 ASHOK MARG, LUCKNOW (U.P.). His performance and conduct during the above period were found disciplined and well mannered. This training is under the curriculum of the college/institute of study and is further evaluated by the college/institute for awarding grades/marks.

Date: ...../......./........ Place: LUCKNOW


HISTORY

Lucknow Doordarshan started functioning on 27th Nov. 1975 with an interim setup at 22, Ashok Marg, Lucknow. The colour transmission service of the National Channel (transmitter only) started from 15-8-82, while the regular colour transmission service from the studio started in 1984 with ENG gadgets. During the Reliance Cup, an OB Van came to the Kendra for outdoor telecast, having a chain of four colour cameras, recording equipment and a portable microwave link. In March 1989 the new studio complex started functioning. An EFP Van came to DDK Lucknow in 1989 with a complement of a chain of three colour cameras and a recording setup for outdoor telecast. The entire recording setup of the studio/van has been replaced with Beta-format High Band edit VCRs, which are still in use as the old recordings are on High Band.

The UP Regional Service telecast with uplinking facility from the studio (DDK, Lucknow) started in January 1998 on INSAT-2B. This service was changed to INSAT-2D (T) ARABSAT on 14-7-98. News feeds are up-linked to Delhi occasionally from the Lucknow Earth Station. The studio programme is transmitted from a 10 kW TV transmitter installed at Hardoi Road through a studio-transmitter microwave link. Besides this, one 16-feet PDA is installed at the TV transmitter site to receive the downlink signal of the Regional Service telecast from the studio via ARABSAT on INSAT-2D (T).

The site at 22 Ashok Marg, Lucknow is being utilised by the Doordarshan Training Institute (for staff training), having one studio (12 m x 6 m) and a colour camera chain. The DTI Lucknow was inaugurated in September 1995.

An Overview
Doordarshan Kendra Jaipur is part of DD India, the largest television network in the world. The first television programme was viewed by the people of Rajasthan on 1st August 1975, under the Satellite Instructional Television Experiment, in the districts of Kota, Sawai Madhopur and Jaipur. Special educational programmes were then produced at Delhi; on 1st March 1977, the Upgrah Doordarshan Kendra (UDK) was set up at Delhi, and the programmes produced at UDK for Jaipur were relayed. On 1st June 1987 Jaipur Doordarshan Kendra was set up at Jhalana Doongri, and transmission started on 6th July 1987. Initially the Kendra produced only 30 min of programming, and this was gradually increased to about 4 hrs. From 2nd Oct 1993 the LPTs located at Ajmer, Udaipur and Bikaner and the HPT at Bundi were connected with DDK Jaipur via satellite. Doordarshan introduced commercial service at Jaipur Kendra on 11th Dec 1993.

Doordarshan Jaipur is the only programme production centre in Rajasthan. The studios are housed at Jhalana Doongri, Jaipur, and the transmitter is located at Nahargarh Fort. As per the census figures of 2001, the channel covers 79% of Rajasthan by population and 72% by area. On 1 May 1995 telecast of DD2 programmes commenced from Jaipur using a 100 W LPT. DD2, now converted to DD News, is telecast from a 10 kW HPT set up in 2000. The reach of the News channel is 11% by area and 32% by population. Urban Rajasthan has 47.58% TV homes and 35.83% cable homes, while rural Rajasthan has 25.69% TV homes and 7% cable homes. Presently this Kendra originates over 4 hrs of daily programming (25 hrs 30 min weekly) in Hindi and Rajasthani; programmes are also telecast in Sindhi, Urdu, English and Sanskrit. The Kendra originates two news bulletins daily, one in Hindi and one in Rajasthani, and feeds important stories for the national bulletins, including regular contributions to Rajyon Se Samachar at 1740 hrs daily on DD News.

TECHNICAL INFORMATION OF TRANSMITTING FACILITIES AT DDK, JAIPUR

Doordarshan Kendra, Jaipur is equipped with a studio, two terrestrial transmitters and one digital up-link station. The two terrestrial transmitters are of 10 kW power each: one is for DD National and the other is for DD News telecasting.


DD-NEWS: CH #31 (VHF Band-III); Picture IF: 551.25 MHz, Sound IF: 556.75 MHz

DOWNLINK PARAMETERS OF DD-NEWS SATELLITE PROGRAMMES
Satellite: INSAT-3A
Satellite position: 93.5 degrees
Transponder:
Uplink frequency: 6165.5 MHz (horizontal)
Downlink frequency: 3940.5 MHz (vertical)
Modulation: QPSK
FEC:
Symbol rate: 6.25 Msps
Azimuth angle: 144.32 degrees
Elevation angle: 52.33 degrees
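The azimuth and elevation figures above follow from standard geostationary look-angle geometry. The sketch below checks them, assuming approximate Jaipur coordinates (26.9 N, 75.8 E), which is why the result differs slightly from the listed 144.32 / 52.33 degrees.

```python
import math

def look_angles(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Look angles from a northern-hemisphere site to a geostationary satellite."""
    lat = math.radians(site_lat_deg)
    dlon = math.radians(sat_lon_deg - site_lon_deg)
    ratio = 6378.0 / 42164.0   # Earth radius / geostationary orbit radius

    cos_g = math.cos(dlon) * math.cos(lat)
    el = math.degrees(math.atan2(cos_g - ratio, math.sqrt(1 - cos_g**2)))

    # Satellite lies to the south; east of the site puts the azimuth
    # between 90 and 180 degrees, west puts it between 180 and 270.
    a = math.degrees(math.atan2(math.tan(abs(dlon)), math.sin(lat)))
    az = 180.0 - a if sat_lon_deg >= site_lon_deg else 180.0 + a
    return az, el

# Assumed site coordinates for Jaipur; INSAT-3A at 93.5 degrees E.
az, el = look_angles(26.9, 75.8, 93.5)
print(round(az, 1), round(el, 1))   # close to the quoted 144.32 / 52.33
```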

Technical Overview
DDK Jaipur has the following main departments, which manage the production, storage, transmission and maintenance of the two DD National channels and the DD Rajasthan channel:

1. STUDIO
2. PRODUCTION CONTROL ROOM (PCR)
3. VIDEO STORAGE AND TRANSMISSION ROOM (VTR)
4. MAIN SWITCHING ROOM (MSR)
5. DIGITAL EARTH LINK STATION
6. TRANSMITTER

The studio houses the cameras, lights and other equipment required for production of a feed, along with the camera control unit (CCU). It is in the studio that all aspects related to the production of a video take place. The DDK has two large studios and a small studio for news production.

The PCR is where post-production activities, such as minor editing and management of the feed during a live programme, take place. The production manager sits in the PCR and directs the cameramen, selecting the angles, sound parameters, etc. during the production stage. From the PCR all the studio lights, microphones and other aspects can be controlled. The PCR has a vision mixer and an audio mixer, whose working is discussed in detail in the following pages. The phone-in console and other systems are also kept in the PCR.

The VTR is the next section, where copies of all programmes are stored. All the programmes shot in the camera are simultaneously recorded in the VTR, and the VTR plays back all the videos as and when required. Videos of pre-recorded events are queued up in the VTR and played back without a break. Videos of famous people and important events are stored in the central film pool.

The MSR houses all the circuitry of the DDK: all the camera base units, vision mixer base units and audio processor base units are kept there. The audio chain and video chain of the MSR are explained in detail later. The monitoring and control of all activities takes place in the MSR, and it is the MSR which decides what is to go on air. The MSR also performs some additional functions, such as logo addition.

The next station is the earth station, which has an uplink chain, simulcast transmitters, audio processors, video processors, up-converters, modulators, etc. The earth station operates entirely in the digital domain. The last stage is the transmitter, which has the antenna and facilities for terrestrial transmission.

Baseband vs. Passband Communication Systems: MODULATION

The process of shifting the baseband signal to the passband range for transmission is known as MODULATION, and the process of shifting the passband signal back to the baseband frequency range at the receiver is known as DEMODULATION. In modulation, one or more characteristics of a signal (generally a sinusoidal wave) known as the carrier are changed based on the information signal that we wish to transmit. The characteristics of the carrier signal that can be changed are the amplitude, phase, or frequency, which result in amplitude modulation, phase modulation, or frequency modulation respectively.

Types of Amplitude Modulation (AM)

AM is itself divided into different types:

1. Double Sideband with carrier (we will call it AM): This is the most widely
used type of AM modulation. In fact, all radio channels in the AM band use this type of modulation.

2. Double Sideband Suppressed Carrier (DSBSC): This is the same as the AM modulation above but without the carrier.

3. Single Sideband (SSB): In this modulation, only half of the signal of the DSBSC
is used.

4. Vestigial Sideband (VSB): This is a modification of the SSB to ease the

generation and reception of the signal.
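The difference between DSB with carrier and DSBSC can be seen directly in the spectrum. The sketch below is illustrative only: the 100 Hz message, 1 kHz carrier, modulation index and sample rate are arbitrary choices, picked so every frequency lands on an exact FFT bin.

```python
import numpy as np

fs, n = 8000, 8000                       # 1 s of samples, 1 Hz FFT bins
t = np.arange(n) / fs
msg = np.cos(2 * np.pi * 100 * t)        # baseband message tone
carrier = np.cos(2 * np.pi * 1000 * t)   # carrier

m = 0.5                                  # modulation index
am = (1 + m * msg) * carrier             # double sideband WITH carrier ("AM")
dsbsc = msg * carrier                    # double sideband SUPPRESSED carrier

# Normalise FFT magnitudes so a unit-amplitude tone reads as 1.0.
spec_am = np.abs(np.fft.rfft(am)) / (n / 2)
spec_sc = np.abs(np.fft.rfft(dsbsc)) / (n / 2)

# AM keeps a strong component at the carrier frequency; DSB-SC does not.
print(spec_am[1000], spec_sc[1000])      # ~1.0 vs ~0.0
# Both place sidebands at carrier +/- message frequency (900 and 1100 Hz).
print(spec_am[1100], spec_sc[1100])      # ~0.25 vs ~0.5
```

SSB would then keep only one of the two sidebands, halving the bandwidth.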

Time Base Modulation

Many techniques exist for the time-scale modification of audio. This refers to changing the time duration of an audio sample without changing the pitch or other spectral characteristics. Because the pitch is not changed, small amounts of time-scale modification are typically not noticeable. This is the basis for the watermarking approach described here: by modulating the time base of an audio signal, information can be undetectably encoded in it. As shown in Figure 1, short time regions of the signal are either compressed or expanded by an imperceptible amount (exaggerated in the figure for illustration). We call this method time base modulation, as the underlying time basis is modulated by the watermark function. The sequence and degree of compression or expansion encode the watermark information.

The watermark is detected by comparing the watermarked copy with the reference (unmarked) audio. Time-alignment of the watermarked and reference audio produces a tempo map that indicates how the time base of the watermarked audio has been altered. In regions of compression or expansion, the tempo map will deviate from a straight line, and the embedded watermark data may be recovered from these deviations. Though this method will not work on audio with no detectable spectral change, such as silence, there are few compelling reasons to watermark such content.

The watermark can encode copyright information, a cryptographic signature, or information that specifically identifies a particular copy of the source audio. This is highly useful, for example, to hide encryption keys or for digital rights management. If each legitimate user of a copyrighted work is given a file with a unique watermark, the watermark found in illicitly distributed copies can identify the source.
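The per-segment compress/expand idea can be sketched as below. Note the caveat: a real time-base watermark uses pitch-preserving time-scale modification (e.g. SOLA or a phase vocoder), whereas the plain linear resampling here does alter pitch; the sketch (function name and parameters are illustrative) only shows how a bit sequence becomes a tempo map.

```python
import numpy as np

def embed(audio, bits, seg_len=4096, delta=0.01):
    """Stretch or shrink successive segments by +/- delta to encode bits.

    NOTE: linear resampling changes pitch; production systems would use a
    pitch-preserving time-scale algorithm instead. This only illustrates
    the tempo-map encoding described in the text.
    """
    pieces = []
    for i, bit in enumerate(bits):
        seg = audio[i * seg_len:(i + 1) * seg_len]
        factor = 1 + delta if bit else 1 - delta   # expand for 1, compress for 0
        n_new = int(round(len(seg) * factor))
        t_old = np.linspace(0.0, 1.0, len(seg))
        t_new = np.linspace(0.0, 1.0, n_new)
        pieces.append(np.interp(t_new, t_old, seg))
    pieces.append(audio[len(bits) * seg_len:])     # untouched remainder
    return np.concatenate(pieces)
```

Detection would time-align the output against the unmarked reference and read the local tempo deviations back out as bits.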


The broadcast chain comprises microphones, the announcer console, the switching console, telephone lines / STL and the transmitter. Normally the programmes originate from a studio centre located inside the city/town for the convenience of artists. The programme could be either live or recorded. In some cases, the programme can be from an OB spot, such as the commentary of a cricket match. Programmes that are to be relayed from other radio stations are received in a receiving centre and then sent to the studio centre, or directly received at the studio centre through an RN terminal/telephone line.

All these programmes are then selected and routed from the studio to the transmitting centre through broadcast-quality telephone lines or studio-transmitter microwave/VHF links.

Studio Centre
The Studio Centre comprises one or more studios, a recording and dubbing room, a control room and other ancillary rooms like the battery room, a.c. rooms, switch gear room, DG room, R/C room, service room, waiting room, tape library, etc. The size of such a centre and the number of studios provided depend on the programme activities of the station. The studio centres in AIR are categorised as Type I, II, III and IV. The number of studios and facilities provided in each type are different. For example, a Type I studio centre has a transmission studio, a music studio with announcer booth, a talks studio with announcer booth, one recording/dubbing room and a Read Over Room. Type II has one additional drama studio. The other types have more studios progressively.

Broadcast Studio

A broadcast studio is an acoustically treated room. It is necessary that the place where a programme for broadcast purposes is being produced should be free of extraneous noise. This is possible only if the room is insulated from outside sound. Further, the microphone, which is the first equipment that picks up the sound, is not able to distinguish between wanted and unwanted signals, and will pick up not only the sound from the artists and the instruments but also reflections from the walls, marring the quality and clarity of the programme. So the studios have to be specially treated to give an optimum reverberation time and minimum noise level. The entry to the studios is generally through a sound-isolating lobby called a sound lock. Outside every studio entrance there is a warning lamp, which glows red when the studio is ON-AIR. The studios have separate announcer booths attached to them, where first-level fading, mixing and cueing facilities are provided.

Studio Operational Requirements

Many technical requirements of studios, like minimum noise level, optimum reverberation time, etc., are normally met at the time of installation of the studio.
However, for operational purposes, certain basic minimum technical facilities are required for smooth transmission of programmes and for proper control. These are as follows:

- A programme in a studio may originate from a microphone, a tape deck, a turntable, a compact disc or an R-DAT. A facility for selecting the output of any of these equipments at any moment is therefore necessary; the announcer console performs this function.
- A facility to fade in/fade out the programme smoothly and to control the programme level within prescribed limits.
- A facility for aural monitoring to check the quality of sound production, and sound meters (VU meters) to indicate the intensity.
- For routing of programmes from various studios/OB spots to a central control room, a facility to further mix/select the programmes. The control console in the control room performs this function; it is also called the switching console.
- Before feeding the programmes to the transmitter, the response of the programme should be made flat by compensating HF and LF losses using equalised line amplifiers (applicable in the case of telephone lines only).
- A visual signalling facility between the studio announcer booth and the control room.
- If the programmes from various studios are to be fed to more than one transmitter, a master switching facility is also required.

As already mentioned, various equipments are available in a studio to generate programmes, as given below: a microphone, which normally provides a level of -70 dBm; a turntable, which provides an output of 0 dBm; tape decks, which may provide a level of 0 dBm; and CD and R-DAT, which also provide a level of 0 dBm. The first and foremost requirement is that we should be able to select the output of any of these equipments at any moment, and at the same time be able to mix the output of two or more equipments. However, the level from the microphone is quite low and needs to be amplified so as to bring it to the level of the tape recorder/tape decks.

Audio mixing is done in one of two ways: low level mixing or high level mixing. Low level mixing is economical, since it requires one single pre-amplifier for all low level inputs, but the quality of sound suffers in this system as far as S/N ratio is concerned. The noise level at the input of the best designed pre-amplifier is of the order of -120 dBm, and the output level from low level equipment is -70 dBm. In low level mixing, there is a signal loss of about 10 to 15 dB in the mixing circuits. Therefore, the S/N ratio achieved in low level mixing is 35 to 40 dB only. The high level mixing system requires one pre-amplifier in each of the low level channels, but ensures an S/N of better than 50 dB. All India Radio employs high level mixing.

Announcer Console

Most of the studios have an attached booth, which is called the transmission booth, Announcer booth or playback studio. This is also acoustically treated and contains a mixing console called the Announcer Console. The Announcer Console is used for mixing and controlling the programmes that are being produced in the studio using artist microphones, tape playback decks and turntables/CD players. It is also used for transmission of programmes, either live or recorded.
The technical facilities provided in a typical announcer booth, besides an Announcer Console, are one or two microphones for making announcements, two turntables for playing gramophone records and two playback decks or tape recorders for programmes recorded on tapes. Recently CD players and Rotary Head Digital Audio Tape Recorders (R-DAT) have also been included in the transmission studio. An audio block schematic of the transmission studio is shown in Fig. 4.
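The low level vs. high level mixing trade-off quoted earlier (microphone at -70 dBm, pre-amplifier noise floor around -120 dBm, 10 to 15 dB mixing loss) can be checked with simple dB arithmetic; the figures below are taken from the text.

```python
# All levels in dBm, gains/losses in dB (figures from the text).
mic_level = -70.0      # microphone output
noise_floor = -120.0   # noise at the input of a well-designed pre-amplifier
mix_loss = 15.0        # 10-15 dB lost in a low-level mixing network

# Low-level mixing: the signal is attenuated BEFORE amplification,
# so it moves closer to the pre-amplifier noise floor.
snr_low = (mic_level - mix_loss) - noise_floor     # 35 dB

# High-level mixing: each channel is amplified first; signal and noise
# are then attenuated together, so the input S/N ratio is preserved.
snr_high = mic_level - noise_floor                 # 50 dB

print(snr_low, snr_high)   # 35.0 50.0
```

This is exactly why AIR accepts the cost of one pre-amplifier per low level channel.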



The amplifier is one of the basic building blocks of modern electronics; present-day electronics would not exist without it. Amplification is necessary because the desired signal is usually too weak to be directly useful. Present-day amplifiers used in studios mostly employ ICs and transistors.

Terms Used With Reference To Amplifiers

If you look at the technical specifications of any amplifier used in a studio, you will come across a number of terms, such as:

- Input Impedance
- Input Level
- Output Impedance
- Output Level
- Gain
- Noise and Equivalent Input Noise
- Frequency Response
- Distortion

Some of these terms are explained briefly in the following paragraphs.

Input Impedance
It is defined as the impedance seen while looking into the input terminals of an amplifier. The input impedance of a pre-amplifier determines the amount of a.c. voltage the pre-amplifier will get from a microphone. The input impedance also decides the noise performance of the amplifier: for best noise performance, the input impedance of a pre-amplifier should exceed ten times the source impedance. For this reason the input impedance of a pre-amplifier is always 2000 ohms or more. In some amplifiers a bridging input is provided. This implies that the input impedance is 10,000 ohms or greater, using a special input transformer. A bridging input permits several amplifiers to be connected across a line without upsetting the impedance match of other equipment.
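The voltage-divider arithmetic behind the bridging input can be sketched as follows. A 600 ohm source is an assumed example (the standard line impedance mentioned later in this report); the point is that a 10,000 ohm bridging input draws so little signal that the line barely notices.

```python
import math

def loading_loss_db(z_source, z_input):
    """Signal loss caused by the divider Z_in / (Z_source + Z_in)."""
    ratio = z_input / (z_source + z_input)
    return 20 * math.log10(ratio)

# Matched 600-ohm load: half the voltage is dropped in the source.
print(round(loading_loss_db(600, 600), 2))     # -6.02 dB

# Bridging 10 kilo-ohm input: almost the full line voltage appears.
print(round(loading_loss_db(600, 10000), 2))   # about -0.5 dB
```

Because each bridging input costs only a fraction of a dB, several such amplifiers can hang on the same line without upsetting the match seen by the rest of the equipment.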

Output Impedance
The actual impedance seen when looking into the output terminals of an amplifier is called its output impedance. This term should not be confused with load impedance. Load impedance is defined as a specified impedance into which a device is designed to work. Many times the load impedance is higher than the output impedance. For example the output impedance of equalised line amplifier type lab 568 is less than 50 ohm while the specified load impedance is 600 ohm.

Distortion in amplifiers
The amplification of a sinusoidal signal at the input of an ideal class-A amplifier will result in a sinusoidal output wave. Generally, however, the output waveform is not an exact replica of the input signal waveform, because of various types of distortion that may arise either from the inherent non-linearity in the characteristics of the active device or from the influence of the associated circuit. The types of distortion that may exist, either separately or simultaneously, are called non-linear distortion, frequency distortion and delay or phase-shift distortion.

Non-linear distortion: This type of distortion results from the production of new frequencies in the output which are not present in the input signal. These new frequencies, or harmonics, result from the existence of a non-linear dynamic curve for the active device. This distortion is sometimes referred to as amplitude distortion or harmonic distortion. It is more prominent when the signal levels are quite large, so that the dynamic operation spreads over a wide range of the characteristics.


Frequency Distortion :
This type of distortion exists when the signal components of different frequencies are amplified differently. In a transistor amplifier, this type of distortion may be caused either by the internal device capacitances or by the associated circuit, such as the coupling components. If the frequency response characteristic is not a straight line over the range of frequencies under consideration, the circuit is said to exhibit frequency distortion over this range.

Phase shift or delay distortion :

Phase-shift distortion results from unequal phase shifts of signals of different frequencies. This type of distortion is not important in audio frequency amplifiers, since the human ear is incapable of distinguishing the relative phases of different frequency components. But it is very objectionable in systems that depend on the wave shape of the signal for their operation, e.g. in television.
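Non-linear (harmonic) distortion, described above, can be made concrete with a small numerical sketch: a mildly curved transfer characteristic (the polynomial coefficients are arbitrary illustration values) turns a pure tone into a tone plus harmonics at 2x and 3x the input frequency, from which a THD figure follows.

```python
import numpy as np

fs, n, f0 = 8192, 8192, 100           # 1 Hz FFT bins; 100 Hz test tone
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

# Mildly non-linear "amplifier": small square and cube terms create
# 2nd and 3rd harmonics that were absent from the input.
y = x + 0.02 * x**2 + 0.004 * x**3

spec = np.abs(np.fft.rfft(y)) / (n / 2)
fund = spec[f0]                       # fundamental amplitude
harmonics = np.sqrt(spec[2 * f0]**2 + spec[3 * f0]**2)
thd_percent = 100 * harmonics / fund
print(round(thd_percent, 2))          # about 1 percent
```

A THD of about 1% at rated output is exactly the figure quoted later for the AIR studio amplifiers.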

Noise and Equivalent Input Noise

The term noise is used broadly to describe any spurious electrical disturbance that causes an output when the signal is zero. Noise may be produced by causes external or internal to the system. Regardless of where it originates in the amplifier, the noise is conveniently expressed as an equivalent noise voltage at the input that would cause the actual noise output. This noise is amplified along with the signal and tends to mask the signal at the output. If, in an amplifier with a gain of 70 dB, the noise at the output is 50 dB below the output signal level, then the equivalent noise at the input of the amplifier will be -120 dBm.
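The -120 dBm figure above follows from referring the output noise back through the gain. A 0 dBm output signal level is assumed here, since the text does not state it explicitly.

```python
# Referring output noise to the amplifier input (all levels in dBm).
output_signal = 0.0                    # dBm output level (assumed)
gain = 70.0                            # dB amplifier gain (from the text)

output_noise = output_signal - 50.0    # noise is 50 dB below the signal
equiv_input_noise = output_noise - gain

print(equiv_input_noise)               # -120.0 dBm, matching the text
```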

Audio Amplifiers Used In All India Radio

The following are some of the audio amplifiers used in AIR. All these amplifiers are designed to have a frequency response within 1 dB from about 30 Hz to 10 kHz with respect to 1 kHz, and a total harmonic distortion (THD) of less than 1% at maximum rated output power.

Pre-Amplifier

The pre-amplifier is the first amplifier in the broadcast chain. The output from a microphone or a pick-up, which is at a very low level (-70 dBm), is fed to its input. The amplified signal obtained from this amplifier is given to the programme amplifier through a fader box or through a mixing console. The normal gain of this amplifier is about 50 dB; in some pre-amplifiers a variable gain between 40 and 50 dB is provided. A special feature of this amplifier is that the noise it contributes is very low. Usually, an input transformer is provided at the input of the pre-amplifier. This input transformer has tappings for 50, 200 and 500 ohm input impedances; the tapping is selected so as to match the output impedance of the microphone or pick-up. It may be noted that in the Keltron Announcer Consoles, the input impedance of the pre-amplifier is of a higher value (more than 1.5 kilo-ohm).

Programme Amplifier

The normal input level to this amplifier varies from -45 to -20 dBm. This amplifier gives a maximum output of +27 dBm. It has a gain of 70 dB, which is variable from 0 to 70 dB. The input and output impedances are usually 600 ohms. The output obtained from the programme amplifier is of a sufficiently high level and can be handled without the risk of picking up electrical noise.

Monitoring Amplifier

The output available from the programme amplifier is, however, not enough to drive a loudspeaker. Therefore, monitoring amplifiers are provided to boost these signals further. A part of the output signal from the programme amplifier is given to the monitoring amplifier.



Introduction

In addition to the control room and studios, dubbing/recording rooms are also provided in a studio complex. The following equipment is generally provided in a recording/dubbing room:

i) Console tape recorders (CTRs)
ii) Console tape decks
iii) A recording/dubbing panel having switches, jacks, keys, etc.

The above equipment can be used for the following purposes: recording of programmes originating from any studio; recording of programmes available in the switching consoles in the control room; dubbing of programmes available on cassette tape; editing of programmes; and mixing and recording of programmes.

Recording Room

A block schematic of a typical recording room is shown in figure 12. Two CTRs and two push-button (PB) switches are shown. Outputs from various studios and switching consoles are given to multiple pads 1, 2, 3 and 4. Outputs from the multiple pads are wired to the PB switches. Three receptacles for cassette outputs have been provided. Transformers T1 and T2 transform the output impedance of the cassette recorder to 600 ohms. The output of CTR #1 is wired to PB switch #2 through MP #6; with this arrangement the output of CTR #1 can be recorded on CTR #2. Please carefully note the impedances and levels at various points. Red and green lamps are provided on the control panel for indications from and to the control room and studios.

Dubbing Room

A block schematic of a typical dubbing room is shown in figure 11. The arrangement is similar to the recording room, except that an additional tape deck and a mixer unit have been provided. This arrangement allows mixing of programmes.

Headphones basically work on the same principles as loudspeakers. However, with headphones the acoustical loading is achieved by the intimacy of the ear units to the ears; thus even very small units are capable of providing very good bass performance. Most headphones used for high-quality applications are either moving coil or electrostatic. Headphone impedances range from 4 to 1000 ohms. Specifications of a stereo headphone type EM 6201 (Philips) are given below:

Frequency range: 20 Hz to 20 kHz
Matching impedance: 4 to 32 ohms
Maximum input: 0.1 watt

For checking levels on a studio chain, headphones with higher impedance should be used. Headphones are classified into mono, stereo and four-channel headphones according to the number of channels.
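The link between impedance and drive voltage follows from P = V^2 / R. Using the 0.1 W maximum input quoted above, the sketch below compares the voltage needed for full rated power at a few impedances; the 600 ohm entry is an assumed example of a higher-impedance headphone, not a figure from the EM 6201 datasheet.

```python
import math

def volts_for_power(power_w, impedance_ohm):
    """Drive voltage for a given power into a given impedance (P = V^2/R)."""
    return math.sqrt(power_w * impedance_ohm)

# 4 and 32 ohm come from the EM 6201 specification above;
# 600 ohm is an assumed example of a high-impedance headphone.
for z in (4, 32, 600):
    print(z, "ohm:", round(volts_for_power(0.1, z), 2), "V")
```

Higher-impedance headphones need more voltage for the same power, which suits the line-level signals found on a studio chain, as the text recommends.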

It is important that all studios be maintained in the best condition at all times. The maintenance schedule suggested below should therefore be carried out very regularly.

i) Where marblex flooring is provided, it must be swept to remove dirt and soil. If any liquids are spilled, they should be mopped up immediately. Every fortnight, the floor must be washed with soap and water, then wiped with a damp cloth or mopped with clean water. Detergents, harsh soaps and chemical cleaning agents should be avoided; soft soaps will give the best results. To remove stubborn marks, scrub with a soft coir brush or fine plastic wire brush to restore the bright look. The following precautions should be observed: marblex floors should be protected from heavy point loads; furniture and other heavy articles should not be dragged on the marblex flooring; kerosene, petrol, turpentine or any polish containing spirit should not be used for cleaning marblex flooring; hard scrubbing must be avoided; a door mat should invariably be used near the entrance to keep the dust away; and the exposed edges of marblex must be protected by fixing aluminium or wooden strips or angles.

ii) Where linoleum is provided, it should be mopped with a cloth soaked in a soft soap solution and thereafter polished with a floor polish.

It is generally not possible to clean all studio floors every day, and therefore a schedule should be drawn up indicating the studios that are to be attended to each day so that, at least over the period of one week, all the studio floorings are attended to.

The microphone plays a very important role in the art of sound broadcasting. It is a device which converts acoustical energy into electrical energy. In the professional broadcasting field, microphones primarily have to be capable of giving the highest fidelity of reproduction over the audio bandwidth.

Microphone Classification
Depending on the relationship between the output voltage from a microphone and the sound pressure on it, the microphones can be divided into two basic groups.

Pressure Operated Type

In such microphones only one side of the diaphragm is exposed to the sound wave. The output voltage is proportional to the sound pressure on the exposed face of the diaphragm with respect to the constant pressure on the other face. Moving coil, carbon, crystal and condenser microphones are mostly of this type. In their basic forms, the pressure operated microphones are omni-directional.

Velocity or Pressure Gradient Type

In these microphones both sides of the diaphragm are exposed to the sound wave. Thus the output voltage is proportional to the instantaneous difference in pressure on the two sides of the diaphragm. The ribbon microphone belongs to this category, and its polar diagram is a figure of eight.

Types of Microphones
There are many types of microphones, but only the most common types used in broadcasting are described here.

Dynamic or Moving Coil Microphone

This is a common broadcast-quality microphone which is rugged and can be carried to outside broadcasts/recordings. It consists of a strong permanent magnet whose pole extensions form a radial field within a round narrow gap. A moving coil is supported within this gap, and a dome-shaped diaphragm, usually of aluminium foil, is attached to the coil. The coil is connected to a microphone transformer whose secondary sometimes has tappings to select the proper impedance for matching. With sound pressure changes, the diaphragm and coil move in the magnetic field; therefore an emf is induced in the speech coil.

Ribbon / Velocity Microphone

Corrugated aluminium foil about 0.1 mm thick forms a ribbon which is suspended within two insulated supports. The ribbon is placed within the extended poles of a strong horseshoe magnet. The ribbon moves due to the difference in pressure (at right angles to its surface), i.e. from the front or rear of the mike. The maximum pressure difference exists between the front and rear of the ribbon because of the maximum path difference. The sound does not develop any pressure gradient when it comes from the sides of the microphone, because there is no path difference: it reaches the front and rear of the ribbon at the same time, hence no movement of the ribbon. Thus, this microphone is bi-directional and follows a figure-of-eight directivity pattern with no pick-up from the sides.

Such a microphone has a clarity filter. This is a series resonant circuit at low frequencies across the primary of the microphone transformer. When switched to the Talk or Voice position, the response is modified, cutting down low frequencies by about 8 dB at 50 Hz. This filter should therefore not be in circuit during music performances. All the other types of microphones are pressure operated, whereas the ribbon mike operates on the pressure gradient, which results in a change in the velocity of the ribbon; thus it is also called the velocity microphone.
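The two basic directivity patterns described above, omnidirectional for pressure-operated microphones and figure-of-eight for pressure-gradient (ribbon) microphones, can be captured in an idealised sketch: the pressure response is the same in every direction, while the gradient response is proportional to the cosine of the incidence angle.

```python
import math

def pressure_response(theta_deg):
    """Idealised pressure-operated (omnidirectional) microphone."""
    return 1.0                                  # same in every direction

def gradient_response(theta_deg):
    """Idealised pressure-gradient (figure-of-eight) microphone."""
    return math.cos(math.radians(theta_deg))    # +1 front, 0 side, -1 rear

for angle in (0, 90, 180):
    print(angle, pressure_response(angle), round(gradient_response(angle), 3))
```

The gradient microphone's null at 90 degrees is the cos term going to zero: sound from the side reaches both faces of the ribbon together, so there is no pressure difference and no output, exactly as described above.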

There is too much over-crowding in the AM broadcast bands and shrinkage of the night-time service area due to fading, interference, etc. FM broadcasting offers several advantages over AM, such as uniform day and night coverage, good quality listening and suppression of noise and interference. All India Radio has gone in for FM broadcasting using modern FM transmitters incorporating state-of-the-art technology. The configurations of the transmitters being used in the network are:

3 kW Transmitter
2 x 3 kW Transmitter
5 kW Transmitter
2 x 5 kW Transmitter

Salient Features of BEL/GCEL FM Transmitters

1. Completely solid state.
2. Forced air cooled with the help of rack-integrated blowers.
3. Parallel operation of two transmitters in passive exciter standby mode.
4. Mono or stereo broadcasting.
5. Additional information such as SCA signals and radio traffic signals (RDS) can also be transmitted.
6. Local/remote operation.
7. Each transmitter has been provided with a separate power supply.
8. Transmitter frequency is crystal controlled and can be set in steps of 10 kHz using a synthesizer.

Modern FM Transmitter

A simplified block diagram of a modern FM transmitter is given in Fig. 1. The left and right channels of the audio signal are fed to the stereo coder for stereo encoding. This stereo-encoded signal, or a mono signal (either left or right channel audio), is fed to the VHF oscillator and modulator. The FM-modulated output is amplified by a wide-band power amplifier and then fed to the antenna for transmission. A voltage controlled oscillator (VCO) is used as the VHF oscillator and modulator. To stabilize its frequency, a portion of the FM-modulated signal is fed to a programmable divider, which divides the frequency by a factor N to get a 10 kHz frequency at the input of a phase and frequency comparator (phase detector). The factor N is automatically selected when the station carrier frequency is set.
The other input of the phase detector is a 10 kHz reference signal, generated by a 10 MHz crystal oscillator followed by a divider (1/1000). The output of the phase detector is an error voltage, which is fed to the VCO through a rectifier and low pass filter to correct its frequency.
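The division factor N follows directly from the arithmetic above: when the loop is locked, carrier / N must equal the 10 kHz reference. A minimal sketch (the 100.1 MHz station frequency is a hypothetical example, not from the report):

```python
REFERENCE_HZ = 10_000      # 10 MHz crystal divided by 1000
CHANNEL_STEP_HZ = 10_000   # carrier is settable in 10 kHz steps

def divider_factor(carrier_hz):
    """Division factor N so that carrier / N equals the 10 kHz
    reference at the phase detector input."""
    if carrier_hz % CHANNEL_STEP_HZ:
        raise ValueError("carrier must be a multiple of 10 kHz")
    return carrier_hz // REFERENCE_HZ

# Hypothetical station at 100.1 MHz:
n = divider_factor(100_100_000)
print(n)                    # 10010
print(n * REFERENCE_HZ)     # locked VCO frequency: 100100000 Hz
```

This also shows why the channel step is 10 kHz: changing N by one moves the locked carrier by exactly one reference interval.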

(Fig. 1 Block Diagram of Modern FM Transmitter)

A simplified block diagram of a 2 x 3 kW FM transmitter is shown in Fig. 2. The 2 x 3 kW transmitter set-up, which is more common, consists of two 3 kW transmitters, designated A and B, whose output powers are combined with the help of a combining unit. A maximum of two transmitters can be housed in a single rack along with two exciter units. Transmitter A is provided with a switch-on control unit (GS 033A1) which, with the help of the adapter plug-in unit (KA 033A1), also ensures the parallel operation of transmitter B. The combining unit is housed in a separate rack.

(Fig. 2 Block Diagram of 2 x 3 kW FM Transmitter)

Low-level modulation of the VHF oscillator is carried out at the carrier frequency in the exciter type SU 115. The carrier frequency can be selected in 10 kHz steps with the help of BCD switches in the synthesizer. The exciter drives four 1.5 kW VHF amplifiers, the basic module of the transmitter; two such amplifiers are connected in parallel to get 3 kW of power. The transmitter is forced air-cooled with the help of a blower. A standby blower has also been provided, which is automatically selected when the pre-selected blower fails. Both the

blowers can be run if the ambient temperature exceeds 40°C. Power stages are protected against mismatch (VSWR > 1.5) or excessive heat-sink temperature by automatic reduction of power with the help of a control circuit. An electronic voltage regulator has not been provided for the DC supplies of the power amplifiers; instead, a more efficient system of stabilization on the AC side, known as AC switch-over, has been provided. The transmitter operates in the passive exciter standby mode with the help of the switch-on control unit: when the pre-selected exciter fails, the standby exciter is automatically selected. Reverse switch-over, however, is not possible. A simplified block diagram of a 2 x 5 kW FM transmitter is given in Fig. 3.

(Fig. 3 RF Block Schematic of 2 x 5 kW FM Transmitter)

The exciter (SU 115) is basically a self-contained, full-fledged low power FM transmitter. It is capable of transmitting mono or stereo signals as well as additional information such as traffic radio, SCA (Subsidiary Channel Authorisation) and RDS (Radio Data System) signals. It can give three output powers of 30 mW, 1 W or 10 W by means of internal links and switches. The output power is stabilized and is not affected by mismatch (VSWR > 1.5), temperature or AC supply fluctuations. The power of the transmitter is automatically reduced in the event of mismatch. The 10 W output stage is a separate module that can be inserted between the 1 W stage and the low pass harmonics filter. This stage is fed from a switching power supply which also handles part of the RF output power control and the AC supply stabilization. In the AIR set-up this 10 W unit is included as an integral part of the exciter. The exciter processes the incoming audio signals for both mono and stereo transmission. In the case of stereo transmission, the incoming L and R channel signals are processed in the stereo coder circuit to yield a stereo baseband signal with a 19 kHz pilot tone for modulating the carrier signal. It also has a multiplexer wherein the coded RDS and SCA signals are multiplexed with the normal stereo signal on the modulating baseband. The encoders for RDS and SCA applications are external to the transmitter and have to be provided separately as and when needed.

Frequency Generation, Control and Modulation

The transmitter frequency is generated and the carrier is modulated in the synthesiser module within the exciter. The carrier frequency is stabilized with reference to the 10 MHz frequency from a crystal oscillator using a PLL and programmable dividers. The operating frequency of the transmitter can be selected internally by means of BCD switches or externally by remote control. The output of these

switches generates the desired number by which the programmable divider should divide the VCO frequency (which lies between 87.5 and 108 MHz) to get a 10 kHz signal to be compared with the reference frequency. The stabilised carrier frequency is modulated with the modulating baseband consisting of the audio (mono and stereo), RDS and SCA signals. Varactor diodes are used in the synthesizer both to generate and to modulate the carrier frequency.

Switch-On Control Unit (Type GS 033 A1)

The switch-on control unit can be termed the brain of the transmitter and controls the working of transmitter A. It performs the following main functions:

1. It controls the switching ON and OFF sequence of the RF power amplifiers, the rack blower and the RF carrier enable in the exciter.
2. It indicates the switching and operating status of the system through LEDs.
3. It provides automatic switch-over operation of the exciter in the passive exciter standby mode, in which either of the two exciters can be selected to operate as the main unit.
4. It provides a reference voltage source for the output regulators in the RF amplifiers.
5. It is used for adjusting the output power of the transmitter.
6. It evaluates the fault signals provided by individual units and generates an overall sum fault signal, which is indicated by an LED on the front panel. The fault is also stored in the defective unit and displayed on its front panel.

Adapter Unit (KA 033A1)

The adapter unit is a passive unit which controls transmitter B for its parallel operation with transmitter A in active standby mode. The control signals from the switch-on control unit are extended to transmitter B via this adapter unit. If this unit is not in position, transmitter B cannot be energized.

1.5 kW VHF Amplifier (VU 315)

This amplifier is the basic power module of the transmitter. It has a broad-band design, so no tuning is required for operation over the entire FM broadcast band.
The RF power transistors of its output stages are of plug-in type: they are easy to replace and no adjustments are required after replacement. Each power amplifier gives an output of 1.5 kW. Depending on the required configuration of the transmitter, the outputs of several such amplifiers are combined to get the desired output power. For instance, a 3 kW set-up uses two power amplifiers, whereas a 2 x 3 kW set-up needs four. The simplified block diagram of the 1.5 kW power amplifier is given in Fig. 4 (Fig. 4 Block Diagram of 1.5 kW Amplifier VU 315, Ref. Drg. No. STI(T)444(DC196)). This amplifier requires an input power of 2.5 to 3 W and consists of a driver stage (output 30 W) followed by a pre-amplifier stage (120 W). The amplification from 120 W to 1500 W in the final stage is achieved with the help of eight 200 W stages. Each 200 W stage consists of two output transistors (TP 9383, SD1460 or FM 150) operating in parallel. These RF transistors operate in wide-band Class C mode and are fitted to the PCB by means of large gold-plated spring contacts to obviate the need for soldering. The outputs of all these stages are combined via coupling networks to give the final output of 1.5 kW. A monitor in each amplifier controls the power of the driver stage depending on the reference voltage produced by the switch-on control unit. Since this reference voltage is the same for all the VHF amplifiers in use, all of them will have the same output power. Each amplifier has a meter for indicating the forward power.
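The power budget described above can be tallied in a few lines. This is an illustrative sketch only: the report does not state combining losses, so the gap between 8 x 200 W raw and the 1.5 kW rating is presumably absorbed by combiner losses and headroom.

```python
# Power budget of the VU 315 amplifier chain as described in the text.
STAGE_W = 200      # one output stage (two transistors in parallel)
STAGES = 8         # eight such stages per 1.5 kW module

raw_w = STAGE_W * STAGES        # 1600 W raw; module is rated 1500 W
print(raw_w)

def transmitter_output(n_modules, module_rating_w=1500):
    """Nominal transmitter power from n parallel 1.5 kW modules,
    ignoring combining losses."""
    return n_modules * module_rating_w

print(transmitter_output(2))    # 3 kW configuration (two modules)
print(transmitter_output(4))    # 2 x 3 kW configuration (four modules)
```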


system deals mainly with texts and digits. These texts and digits are in digital form and are recorded on the hard disk of the computer. So, if analog audio or video is converted into digital form, it too can be recorded on the hard disk of the computer. The computer can then manipulate the audio/video in much the same way as it manipulates texts and digits. This is what

is known as hard disk based recording system. Hard disk based recording system used for audio is also known as Audio Work Station. Before we discuss the hard disk based recording system, let us see the constraints of analogue system and benefits of digital system.

Analogue System - The Constraints

- Inflexible: insert/drop is not easy.
- Quality degrades with usage.
- Editing is cumbersome and time consuming.
- Complete automation is difficult.
- Limited signal-to-noise ratio and dynamic range.
- Compression techniques are not possible.
- Signal quality degrades due to tape hiss, modulation noise, dropouts, wow and flutter, cross talk, distortion, etc.
- Progress of analogue equipment is slow.

Digital System - The benefits

- It is highly flexible and versatile.
- No quality degradation with multiple usage.
- Highly reliable, efficient and non-destructive editing.
- Complete automation is possible.
- Excellent signal-to-noise ratio: better than 90 dB. Dynamic range: better than 90 dB. Cross talk: better than 90 dB. Distortion: 0.004%. Wow and flutter: nil.
- Quality depends only on the quality of the conversion process.
- It is possible to use compression techniques, which results in bandwidth saving.
- Tremendous benefits in the areas of post-production, archiving and multi-generation work.
- Since the signal is in binary form, it is immune to noise.
- Hard disk recording facilitates random access.
- A centralised storage system can be provided.
- Progress of digital equipment is fast.

There are two types of hard disk based (audio) recording system: i) dedicated system and ii) networked hard disk based system.

Dedicated System

In a dedicated system there is only one computer terminal for recording or play. This is mainly used for editing, with a special keyboard (edit controller).

Networked Hard Disk Based System

In a networked hard disk based system a number of work stations (computer terminals) are connected to a main (central) server. They work in a LAN environment. This system has been shown in figure 1 (a) and (b). It facilitates the following:

- Integrated studio automation system
- Simplified operational tasks
- Reduced handling cost
- Elimination of monotonous repeat work
- Instant and random access to all audio clips
- Detailed logging of on-air events
- Access restriction and security


A picture can be considered to contain a number of small elementary areas of light or shade, which are called PICTURE ELEMENTS. These elements together contain the visual image of the scene. In a TV camera the scene is focused on the photosensitive surface of the pick-up device and an optical image is formed. The photoelectric properties of the pick-up device convert the optical image into an electric charge image corresponding to the light and shade of the scene (picture elements). It is then necessary to pick up this information and transmit it; for this purpose scanning is employed. An electron beam scans the charge image and converts it into an electrical signal. The beam scans the image line by line and field by field to provide signal variations in a successive order. The scanning is in both the horizontal and vertical directions simultaneously. The horizontal scanning frequency is 15,625 Hz and the vertical scanning frequency is 50 Hz. The frame is divided into two fields: odd lines are scanned first and then the even lines, and the odd and even lines are interlaced. Since the frame is divided into

two fields, flicker is reduced. The field rate is 50 Hz and the frame rate is 25 Hz (the field rate is the same as the power supply frequency).

The scanning spot (beam) scans from left to right. The beam starts at the left hand edge of the screen and moves to the right hand edge in a slightly slanting path, as the beam is progressively pulled down by the vertical deflection (top to bottom scanning takes place simultaneously). When the beam reaches the right hand edge of the screen its direction is reversed and it returns at a faster rate to the left hand edge (below the line just scanned). Once again the beam direction is reversed and scanning of the next line starts. This goes on till the beam completes scanning 312½ lines, reaching the bottom of the screen. At this moment the beam flies back to the top and starts scanning from a half line to complete the next 312½ lines of the frame. To avoid distortions in the picture whenever the beam changes its direction, it is blanked out for a certain duration. The horizontal blanking period is 12 microseconds. Since each line takes 64 microseconds, the active period of a line is 64 - 12 = 52 microseconds. (Since 625 lines are scanned at the rate of 25 frames per second, the number of lines scanned in one second is 625 multiplied by 25, which yields 15,625. So the horizontal frequency is 15,625 Hz and each line takes 64 microseconds.) Similarly, there is a vertical blanking period, during which 25 TV lines per field are blanked out. So in one frame 50 TV lines are blanked out, and the effective number of lines is 625 minus 50, i.e. 575. The vertical resolution depends on the number of scanning lines and the resolution factor (also known as the Kell factor). Assuming a reasonable Kell factor of 0.69, the vertical resolution is 575 multiplied by 0.69, which gives nearly 400 lines. The capability of the system to resolve the maximum number of picture elements along the scanning lines determines the horizontal resolution, i.e. how many alternate black and white elements there can be in a line. Let us also take another factor into account.
It is realistic to aim at equal vertical and horizontal resolution. We have seen that the vertical resolution is limited by the number of active lines, of which there are 575. So, for the same resolution in both directions, the number of alternate black and white elements on a line is 575 multiplied by the Kell factor and the aspect ratio: 575 x 0.69 x 4/3, which is approximately 529.
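The timing and resolution arithmetic above can be reproduced in a few lines. This is an illustrative sketch; the Kell factor of 0.69 is the value assumed in the text.

```python
# Scanning arithmetic for the 625-line, 25 frame/s system described above.
TOTAL_LINES = 625
FRAME_RATE = 25
KELL = 0.69
ASPECT = 4 / 3

line_freq = TOTAL_LINES * FRAME_RATE        # 15625 Hz
line_period_us = 1e6 / line_freq            # 64 us per line
active_us = line_period_us - 12             # 52 us after H-blanking

active_lines = TOTAL_LINES - 50             # 575 after V-blanking
vertical_res = active_lines * KELL          # ~397 "lines" of resolution
horizontal_elements = active_lines * KELL * ASPECT  # ~529 elements

print(line_freq, line_period_us, active_us)
print(round(vertical_res), round(horizontal_elements))
```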

Grey Scale
In a black and white (monochrome) TV system all colours appear as grey on a 10-step grey scale chart. TV white corresponds to a reflectance of 60% and TV black to 3%, giving a contrast ratio of 20:1 (film can handle more than 30:1, and the eye's capability is much greater). In black and white TV the concept of the grey scale is applied to costumes, scenery, etc. If the foreground and background are identical on the grey scale, they may merge and the separation may not be noticed clearly on the screen.

Brightness reveals the average illumination of the reproduced image on the TV screen. The brightness control in a TV set adjusts the voltage between the grid and cathode of the picture tube.

Contrast is the relative difference between the black and white parts of the reproduced picture. In a TV set the contrast control adjusts the level of the video signal fed to the picture tube. Brightness and contrast controls are adjusted in a TV set to reproduce faithfully as many grey scale steps as possible. Ultimately the adjustment depends on individual viewing habits.

Viewing Distance
The optimum viewing distance from a TV set is about 4 to 8 times the height of the TV screen. While viewing the TV screen one has to ensure that no direct light falls on it.

Colour Composite Video Signal (CCVS)

The colour composite video signal is formed from the video, sync and blanking signals. The level is standardized at 1.0 V peak to peak (0.7 V of video and 0.3 V of sync pulse). TV signals have varying frequency content. The lowest frequency is zero (when a white window is transmitted for the entire active period of 52 microseconds, the frequency is zero). In CCIR system B the highest frequency that can be transmitted is 5 MHz, even though the TV signal can contain much higher frequency components. (In film the reproduction of detail corresponds to frequencies much higher than 5 MHz, hence its clarity is superior to the TV system.) Long shots carry higher frequency components than mid close-ups and close-ups; hence in TV productions long shots are kept to a minimum. In fact TV is a medium of close-ups and mid close-ups.
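The 5 MHz figure ties directly back to the horizontal resolution: the finest detail a line can carry is alternating black and white picture elements, and one black-plus-white pair forms one cycle of the video waveform. An illustrative check, taking roughly 529 elements per line (575 active lines x Kell factor 0.69 x aspect 4/3):

```python
# Why ~5 MHz of video bandwidth suffices for system B.
ELEMENTS_PER_LINE = 575 * 0.69 * (4 / 3)   # ~529 alternating elements
ACTIVE_LINE_US = 52                        # active line period

cycles_per_line = ELEMENTS_PER_LINE / 2    # one cycle per black/white pair
highest_freq_mhz = cycles_per_line / ACTIVE_LINE_US  # cycles/us == MHz
print(round(highest_freq_mhz, 2))          # close to the 5 MHz limit
```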


In TV broadcasting both the sound signal and the video signal are to be conveyed to the viewer using radio frequency. These two signals have very distinct features. The audio signal is a symmetrical signal without a DC component, and its frequency does not exceed 20 kHz. The video signal consists of a logical component (the line and field syncs) and an analogue part corresponding to the line-by-line picture scanning. This unsymmetrical signal thus has a DC component, and its bandwidth extends from 0 to 5 MHz. The two signals modulate carrier waves whose frequencies and types of modulation are as per established standards. These modulated carriers are further amplified and then diplexed for transmission on the same feeder line and antenna. This technique is used with high power TV transmitters. However, for LPTs, i.e. transmitters operating at a sync peak power of less than 1 kW, the two signals (video and audio) are modulated separately (in most present-day TV transmitters the picture signal is amplitude modulated while the audio signal is frequency modulated) but amplified jointly using common vision and aural amplifiers. Both of these systems have merits and demerits. In the first case (separate amplification) a special group delay equalisation circuit is needed because of errors caused by the diplexer, while in the second case intermodulation products are more prominent and special filters are required to suppress them. Hence the technique of joint amplification is suitable only for LPTs and not for HPTs. Though frequency modulation has certain advantages over amplitude modulation, its use for picture transmission is not permitted due to the large bandwidth requirement, which cannot be met in the very limited channel space available in the VHF/UHF bands.
Secondly, as the power of the carrier and sideband components varies with modulation in the case of FM, a frequency-modulated signal reflected from nearby structures at the receiving end would cause variable multiple ghosts, which would be very disturbing. Hence the use of FM for terrestrial transmission of the picture signal is not permitted. Thus amplitude modulation is invariably used for picture transmission, while frequency modulation is generally used for sound transmission due to its inherent advantages over amplitude modulation. As the picture signal is unsymmetrical, two types of modulation are possible.

i) Positive modulation, wherein an increase in picture brightness causes an increase in the amplitude of the modulation envelope.

ii) Negative modulation

An increase in picture brightness causes a reduction in carrier amplitude, i.e. the carrier amplitude is maximum at the sync tip and minimum at peak white. Though positive modulation was adopted in the initial stages of television, negative modulation is generally adopted nowadays (PAL-B uses negative modulation), as it has certain advantages over positive modulation.

Advantages of Negative Modulation


Another feature of present day TV transmitters is vestigial sideband (VSB) transmission. If the normal amplitude modulation technique were used for picture transmission, the minimum transmission channel bandwidth would be around 11 MHz, taking into account the space for the sound carrier and a small guard band of around 0.25 MHz. Using such a large transmission bandwidth would limit the number of channels in the spectrum allotted for TV transmission. To accommodate a large number of channels in the allotted spectrum, a reduction in transmission bandwidth was considered necessary. The transmission bandwidth could be reduced to around 5.75 MHz by using the single sideband (SSB) AM technique, because in principle one sideband of the double sideband (DSB) AM signal can be suppressed, since the two sidebands carry the same signal content. It was, however, not considered feasible to suppress one complete sideband, owing to the difficulty of designing an ideal filter for the TV signal: most of the energy is contained in the lower frequencies, and these frequencies carry the most important information of the picture. Removing them causes objectionable phase distortion at these frequencies, which affects picture quality. Thus, as a compromise, only a part of the lower sideband is suppressed. The radiated signal thus contains the full upper sideband together with the carrier and the vestige (remaining part) of the partially suppressed LSB. The lower sideband contains frequencies up to 0.75 MHz, with a slope of 0.5 MHz so that the final cut-off is at 1.25 MHz.

RECEPTION OF VESTIGIAL SIDE BAND SIGNALS

Corresponding to the VSB characteristics used in transmission, an amplitude versus frequency response results. When the radiated signal is demodulated with an idealized detector, the response is not flat: the resulting signal amplitude during the double sideband portion of the VSB is exactly twice the amplitude during the SSB portion.
In order to equalize the amplitude, the receiver response is designed to have an attenuation characteristic over the double sideband region appropriate to compensate for this two-to-one relationship. This attenuation characteristic, the so-called Nyquist slope, takes the form of a linear slope over the ±750 kHz (DSB) region, with the visual carrier located at the mid point (the -6 dB point) relative to the SSB portion of the band. Such a characteristic exactly compensates the amplitude response asymmetry due to VSB. Modern practice, for purposes of circuit and IF filter design simplification, also provides attenuation at the upper end of the channel such that the colour subcarrier is also attenuated. Receivers have Nyquist characteristics for reception, which introduce group delay errors in the low frequency region. Notch filters are used in receivers as aural traps in the vision IF and video amplifier stages. These filters introduce GD errors in the high frequency region of the


video band. These GD errors are pre-corrected in the TV transmitter (using an RX pre-corrector) so that an economical receiver filter design is possible.
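The Nyquist slope described above can be modelled as a simple piecewise-linear response. This is an idealised sketch (amplitudes, not dB): the point is that in the DSB region both sidebands contribute, and their demodulated sum equals the flat SSB response.

```python
def nyquist_response(offset_khz):
    """Idealised receiver amplitude response around the vision carrier:
    a linear slope from 0 at -750 kHz to 1.0 at +750 kHz, i.e. 0.5
    (-6 dB) at the carrier itself; flat above +750 kHz."""
    if offset_khz <= -750:
        return 0.0
    if offset_khz >= 750:
        return 1.0
    return (offset_khz + 750) / 1500

# For any video frequency f in the DSB region, the lower sideband
# contribution response(-f) plus the upper sideband contribution
# response(+f) sums to 1.0 -- the same as the SSB region, so the
# overall demodulated video response is flat.
for f in (100, 400, 700):
    print(f, nyquist_response(-f) + nyquist_response(f))
```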


The TV antenna system is that part of the broadcasting network which accepts RF energy from the transmitter and launches electromagnetic waves into space. The polarization of the radiation, as adopted by Doordarshan, is linear horizontal. The system is installed on a supporting tower and consists of antenna panels, power dividers, baluns, branch feeder cables, junction boxes and main feeder cables. Dipole antenna elements, in one form or another, are common at VHF frequencies, whereas slot antennas are mostly used at UHF frequencies. An omnidirectional radiation pattern is obtained by arranging the dipoles in the form of a turnstile and exciting them in quadrature phase. The desired gain is obtained by stacking the dipoles in the vertical plane. As a result of stacking, most of the RF energy is directed in the horizontal plane and radiation in the vertical plane is minimized. The installed antenna system should fulfil the following requirements:

a) It should have the required gain and provide the desired field strength at the point of reception.

b) It should have the desired horizontal radiation pattern and directivity for serving the planned area of interest. The radiation pattern should be omnidirectional if the transmitting station is at the centre of the service area, and directional otherwise.

c) It should offer the proper impedance to the main feeder cable, and thereby to the transmitter, so that optimum RF energy is transferred into space. Impedance mismatch results in reflection of power and the formation of standing waves. The standard RF impedance at VHF/UHF is 50 ohms.
Fig. 15 Turnstile Antenna and its Horizontal Pattern Ref. Drg. No. STI(T)754(DC506)

High Power TV Transmitting Antenna System

In the high power TV transmitting antenna system, half wave dipole elements are mounted on the four faces of a square tower of suitable dimensions to obtain an approximately omnidirectional horizontal radiation pattern. If radiation in any particular direction is not desired,

the panels are left out in that direction. The dipole elements, supported by a quarter wave line, are backed by a screened reflector to keep the radiation away from the tower. The positions of the panels are slightly offset from the centre, either clockwise or anti-clockwise as shown in Fig. 16, to achieve a wide-band impedance match. The required number of panels are stacked vertically at a spacing of nearly half a wavelength to provide the desired gain. The panels thus stacked are divided into two groups: the upper half is called the upper bay and the lower half the lower bay. The constitution of the antenna panels and the feeding arrangements are described in the following paragraphs.

Branch feeder cables

Two sets of branch feeder cables connect the antenna panels. One set has length L and the other set's length is L plus a quarter wavelength. The number of such cables in each set is half the total number of antenna panels. This holds when an equal number of panels is mounted on each face of the tower. The impedance of the branch feeder cable is 50 ohms for Band III and 72 ohms for Band I.

Feeding Arrangement

For obtaining an omnidirectional horizontal radiation pattern, the quadrature feeding technique is employed. Each panel, as shown in Fig. 18, is excited with equal current amplitude but with a 90 degree phase difference with respect to the adjacent panel, in a particular sequence, i.e. clockwise or anti-clockwise. This is realized by connecting branch feeder cables of length L and L plus a quarter wavelength (the wavelength corresponding to the centre frequency of the operating channel) in the correct sequence. Branch feeder cables from one junction box feed all the antenna panels constituting the upper bay, and those from the second junction box feed the panels constituting the lower bay. The input ports of both junction boxes, as shown in the schematic, are connected by two separate main feeders of equal electrical length, which carry equal RF energy from the transmitter. In case of a fault in one of the junction boxes, in the antenna panels of one bay, or in one of the main feeder cables itself, half of the power can be dissipated into a dummy load situated in the transmitter building. Full power may also be radiated by connecting the transmitter output to one main feeder only, and thereby through the antenna panels of one bay. The necessary provision for routing the RF energy as desired is made at the U-link panels in the transmitter hall.
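The 90 degree offset obtained from the extra quarter-wavelength of cable can be verified with a phase calculation. This is an illustrative sketch; the 200 MHz carrier is an assumed Band III value, and a free-space velocity factor of 1.0 is used (real coaxial cable has a velocity factor below 1, which shortens the physical quarter-wave length but not the phase result).

```python
import math

C = 299_792_458      # speed of light in free space, m/s

def extra_phase_deg(extra_len_m, freq_hz, velocity_factor=1.0):
    """Phase lag (degrees) added by an extra length of feeder cable."""
    wavelength = C * velocity_factor / freq_hz
    return (extra_len_m / wavelength) * 360 % 360

# An extra quarter-wavelength of cable gives the 90 degree offset
# used for quadrature feeding of the turnstile panels:
f = 200e6                       # assumed Band III carrier, 200 MHz
quarter_wave = C / f / 4        # physical length of lambda/4
print(round(quarter_wave, 3))   # metres of extra cable
print(round(extra_phase_deg(quarter_wave, f)))
```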


Satellite communication is the outcome of man's desire to realize the concept of the global village. The penetration of frequencies above 30 MHz through the ionosphere led people to think that if an object (reflector) could be placed in space above the ionosphere, it would become possible to use the complete spectrum for communication purposes. History In an article published in Wireless World in 1945, Arthur C. Clarke foresaw that it would be possible to provide complete coverage of the world from just three satellites, provided they could be placed in geostationary orbit, i.e. an orbit about 35,786 km above the equator. To remain at this height the satellite must travel at 3.074 km/sec, or about 11,000 km/hour. This was indeed the pioneering concept of satellite communication, even though the article was meant as science fiction. However, the technology required to put a satellite in space was not available at that time, so scientists and engineers did not take Clarke's article seriously. The USSR mastered rocket technology and put Sputnik I, the world's first satellite, into space in 1957 from the Baikonur cosmodrome in Kazakhstan. This was the beginning of the satellite era. Sputnik I was a low orbit satellite weighing 1100 lbs. This was a remarkable achievement by any standard, especially lifting a payload of 1100 lbs in the maiden attempt itself. Sputnik I broadcast radio signals to earth on 31.5 MHz. It

orbited 16 revolutions per day and its life was 90 days. In 1960 the USA launched Echo I and Echo II from Cape Canaveral. These were passive satellites which could only reflect the signal mechanically; they could not receive, amplify or change the frequency before re-transmission. In 1962 the world's first active satellite, Telstar, was launched from Cape Canaveral by the USA; it made history by relaying a live broadcast to Europe on 10 July 1962. Telstar was also a low orbit satellite, but an active one. In 1964 Syncom, the world's first geosynchronous satellite, was launched by the USA from Cape Canaveral. Syncom relayed the live television coverage of the Tokyo Olympics to the USA, a first for the world. Around the same time, the Molniya satellite was launched by the USSR; it was a low orbit active satellite with an inclination of 66°. This orbit was found best suited to cover the northern part of the USSR effectively, and so it is preferred by the Russians even now. Intelsat I (nicknamed Early Bird) was launched on 2 April 1965. It was parked in geosynchronous orbit over the Atlantic Ocean and provided telecommunication and television service between the USA and Europe. It had capacity for 240 one-way telephone channels or one television channel. Subsequently, Intelsat II generation satellites were launched and parked over the Atlantic and Pacific Oceans. With the Intelsat III generation, not only the Atlantic and Pacific Oceans but also, for the first time, the Indian Ocean got a satellite. Arthur C. Clarke's vision of providing global communication using three satellites about 120 degrees apart thus became a reality. So far Intelsat has launched 7 generations of geosynchronous satellites in all three regions, namely the Atlantic Ocean, Pacific Ocean and Indian Ocean.
For national as well as neighbouring-country coverage, some of the following satellites are used:

ANIK : Canadian satellite system
INSAT : Indian satellites
AUSSAT : Australian satellites
BRAZILSAT : Brazilian satellites
FRENCH TELECOM : French satellites
ITALSAT : Italian satellites
CHINASAT : Chinese satellites
STATSIONAR, GORIZONT, EKRAN : Russian satellites

There are also satellites operated by private organisations, some of which are given below:

GALAXY : Owned by Hughes Corporation
SATCOM : Owned by RCA
SBS : Satellite Business Systems
PAS : Owned by PANAMSAT
ASIASAT : Owned by Asia Satellite Telecommunications Co.

Space prophet Arthur C. Clarke was the first person to predict modern day satellite communication. In his science fiction article in Wireless World in 1945 he foresaw that it would be possible to provide complete radio coverage using three satellites placed in geosynchronous orbit at a height of about 35,786 km above the earth, or about 42,164 km from the centre of the earth. It was a remarkable prediction at a time when it was felt it would be sheer wastage of time to think of placing a satellite in space. He calculated that a satellite at this height should revolve with a velocity of 3.074 km/second, so that the centrifugal force neutralises the earth's gravitational attraction. With this velocity the satellite makes one revolution in 23 hours 56 minutes and 4 seconds, which is the actual period of rotation of the earth on its own axis. This is known as the sidereal day (the reference of rotation for the sidereal day is a distant star; for the solar day the reference is the sun). The solar day has 24 hours, as its reference of rotation is the sun. The earth does not remain in the same location every day but revolves around the sun, advancing about one degree per day.
In a year it would have advanced about 365 degrees (nearly one full turn), producing one additional apparent rotation of the earth on its own axis. Thus there are 366.25 sidereal days in a year as compared to 365.25 solar days. This correspondingly reduces the sidereal day from 24 hours (the solar day) to 23 hours, 56 minutes and 4.1 seconds. Since the earth actually makes one rotation on its own axis in 23 hours, 56 minutes and 4.1 seconds, a geosynchronous satellite should also make one revolution in this period, so as to remain at the same point when viewed from the earth. Two technologies are responsible for the birth of the satellite communication system:
1. Rocket technology.
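The figures above can be checked from first principles: setting the orbital period equal to one sidereal day and applying Kepler's third law gives the geosynchronous radius and velocity. A minimal sketch in Python (the gravitational parameter and mean earth radius are standard textbook constants, not values from this report; the slightly different altitude quoted in the text reflects older rounded figures):

```python
import math

MU = 3.986004418e14      # earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean earth radius, m
T_SIDEREAL = 86_164.1    # sidereal day, s (23 h 56 min 4.1 s)

# Kepler's third law: T^2 = 4*pi^2 * r^3 / mu  =>  r = (mu*T^2 / (4*pi^2))^(1/3)
r = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
velocity_kms = 2 * math.pi * r / T_SIDEREAL / 1000

print(f"orbital radius : {r/1000:8.0f} km")
print(f"altitude       : {altitude_km:8.0f} km")
print(f"velocity       : {velocity_kms:8.3f} km/s")
```

The computed velocity comes out close to the 3.074 km/s quoted in the text, and the altitude close to the 35,779 km figure.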

2. Microwave technology.
The Second World War favoured the expansion of these two technologies. A satellite is basically a reflector in the sky.
Advantages of Satellite Communication
The following are the advantages of satellite communication:
- It is the only means which can provide multi-access two-way communication. Within the coverage area, it is possible to establish one-way or two-way communication between any two points.
- The cost of transmitting information through a satellite is independent of the distance involved.
- A satellite can be used for two-way communication or broadcast purposes within the coverage area.
- Satellites are capable of handling very high bandwidth. Normally a satellite can accommodate about 500 MHz in C band. For example, the bandwidth of INSAT-I is 480 MHz in C band and 80 MHz in S band; INSAT-II has a bandwidth of 720 MHz in C band and 80 MHz in S band.
- It is possible to provide large coverage using a satellite. For example, a geostationary satellite can cover about 42% of the earth's surface using a global beam.
- A satellite can provide signal to pockets not covered terrestrially, such as valleys and mountainous regions.
- Satellites provide uniform signal to urban and rural areas alike, unlike terrestrial service, which delivers stronger signal to urban areas (where the transmitters are located) than to rural areas.
- It is easier and quicker to establish a new satellite link, using an SNG terminal or a VSAT terminal, from any point to any other point than by any other means.

Architecture of a Satellite Communication System

Figure 1 shows the various components of a satellite communication system. Basically it comprises two elements:
a. Ground Segment
b. Space Segment
The Space Segment
The space segment contains the satellite and all terrestrial facilities for the control and monitoring of the satellite. This includes the tracking, telemetry and command (TT&C) stations, together with the satellite control centre where all the operations associated with station-keeping and checking the vital functions of the satellite are performed. In our case it is the Master Control Facility (MCF) at Hassan. The radio waves transmitted by the earth stations are received by the satellite; this is called the up link. The satellite in turn transmits to the receiving earth stations; this is called the down link.


Fig. 1 Architecture of a satellite communication system
The Ground Segment

The ground segment consists of all the earth stations; these are most often connected to the end-user's equipment by a terrestrial network or, in the case of small stations (Very Small Aperture Terminal, VSAT), directly connected to the end-user's equipment. Stations are distinguished by their size, which varies according to the volume of traffic to be carried on the space link and the type of traffic (telephone, television or data). The largest are equipped with antennas of 30 m diameter (Standard A of the INTELSAT network). The smallest have 0.6 m antennas (direct television receiving stations). Fixed, transportable and mobile stations can also be distinguished. Some stations are both transmitters and receivers; others are only receivers, as is the case, for example, with receiving stations of a satellite broadcast system or of a distribution system for television or data signals.
Space Geometry

Types of Orbit
The orbit is the trajectory followed by the satellite in equilibrium between two opposing forces: the force of attraction due to the earth's gravitation, directed towards the centre of the earth, and the centrifugal force associated with the curvature of the satellite's trajectory. The trajectory lies in a plane and is shaped as an ellipse, with its maximum extension at the apogee and its minimum at the perigee. The satellite moves more slowly in its trajectory as its distance from the earth increases.
Radio Networking Terminal
The Radio Networking Terminal (RNT) located at AIR stations receives S-band or C-band transmissions. The programmes thus received are, after processing, fed to the transmitter for broadcast purposes. Thus the RNT acts as the ground terminal for satellite signal reception. The block diagram of the S-band RN terminal is shown in figure 10.


Block Diagram of Front End Convertor


Direct-to-Home Satellite Broadcasting (DTH)

INTRODUCTION
There has always been a persistent quest to increase the coverage area of broadcasting. Before the advent of satellite broadcasting, terrestrial broadcasting, which is basically localized, was the main provider of audio and video services. Terrestrial broadcasting has the major disadvantage of being localized, and it requires a large number of transmitters to cover a big country like India; running and maintaining such a large number of transmitters is a gigantic and expensive affair. Satellite broadcasting, which came into existence in the mid-sixties, was seen as a way to provide one-third global coverage simply with an up-link and down-link set-up. In the beginning of satellite broadcasting, up-linking stations (earth stations) and satellite receiving centres could only be afforded by government organizations. The main physical constraint was the enormous size of the transmitting and receiving parabolic dish antennas (PDA). In the late eighties satellite broadcasting technology underwent fair improvement, resulting in the birth of cable TV. Cable TV operators set up their networks to provide services to individual homes in local areas. Cable TV grew rapidly in an unregulated manner and posed a threat to terrestrial broadcasting; people now depend mainly on cable TV operators. Since cable TV services are unregulated and unreliable in countries like India, and since satellite broadcasting technology has now ripened to a level where an individual can think of having direct access to satellite services, viewers have the opportunity to get rid of cable TV. Direct-to-Home satellite broadcasting (DTH), or Direct Broadcast Satellite (DBS), is the distribution of television signals from high-powered geostationary satellites to a small dish antenna and satellite receiver in homes across the country. The cost of DTH receiving equipment is gradually declining and can now be afforded by the common man.
Since DTH services are fully digital, they can offer value-added services such as video-on-demand, Internet, e-mail and a lot more in addition to entertainment. DTH reception requires a small dish antenna (about 60 cm diameter), easily mounted on the rooftop, a feed along with a Low Noise Block Converter (LNBC), and a set-top box (Integrated Receiver Decoder, IRD) with CAS (Conditional Access System). A bouquet of 40 to 50 video programmes can be received simultaneously in DTH mode.

DTH broadcasting is basically satellite broadcasting in Ku band (14/12 GHz). The main advantage of Ku-band satellite broadcasting is that it requires a physically manageable, smaller dish antenna than C-band satellite broadcasting. C-band broadcasting requires about a 3.6 m diameter PDA (41 dB gain at 4 GHz), while Ku band requires a 0.6 m diameter PDA (35 dB gain at 12 GHz). This 6 dB shortfall is compensated using Forward Error Correction (FEC), which can offer 8 to 9 dB of coding gain in digital broadcasting. The transmitter power required (about 25 to 50 watts) is less than that of analog C-band broadcasting. The major drawback of Ku-band transmission is that the RF signals typically suffer 8 to 9 dB of rain attenuation under heavy rainfall, while rain attenuation is very low at C band. Fading due to rain can break the satellite connectivity, and therefore a rain margin has to be kept for reliable connectivity. The rain margin is provided by operating the transmitter at higher power and by using a larger dish antenna (7.2 m PDA).
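The gain figures quoted above follow from the standard parabolic-antenna gain formula G = η(πD/λ)². A short Python check (the 55% aperture efficiency is a typical assumed value, not stated in the text):

```python
import math

def dish_gain_db(diameter_m: float, freq_ghz: float, efficiency: float = 0.55) -> float:
    """Gain of a parabolic dish over isotropic, in dB."""
    wavelength = 0.3 / freq_ghz                      # c/f, in metres
    gain = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(gain)

print(f"C band : {dish_gain_db(3.6, 4.0):.1f} dB")   # 3.6 m dish at 4 GHz -> ~41 dB
print(f"Ku band: {dish_gain_db(0.6, 12.0):.1f} dB")  # 0.6 m dish at 12 GHz -> ~35 dB
```

With these assumptions the formula reproduces the 41 dB and 35 dB figures in the text, and hence the 6 dB shortfall.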

DOWN-LINK CHAIN
The down-link or receiving chain of the DTH signal is depicted in Fig. 2.

There are mainly three sizes of receiving antenna: 0.6 m, 0.9 m and 1.2 m. Any of these can easily be mounted on the rooftop of a building or house. RF waves from the satellite (12.534 GHz, 12.647 GHz, 12.729 GHz) are picked up by a feed, which converts them into an electrical signal. The electrical signal is amplified and further down-converted to an L-band (950-1450 MHz) signal. The feed

and LNBC are now combined in a single unit called the LNBF. The L-band signal goes through coaxial cable to the indoor unit, consisting of a set-top box and a television. The set-top box or Integrated Receiver Decoder (IRD) down-converts the L-band first IF signal (950-1450 MHz) to a 70 MHz second IF signal, performs digital demodulation, de-multiplexing and decoding, and finally gives audio/video output to the TV for viewing.


The term earthing means connecting the neutral point of a supply system, or the non-current-carrying parts of electrical apparatus, to the general mass of earth in such a manner that at all times an immediate discharge of electrical energy takes place without danger. The function of earthing is twofold:
1. It ensures that no current-carrying conductor rises to a potential, with respect to the general mass of earth, higher than its designed insulation allows.
2. It protects human beings from electric shock.

Methods of earthing
There are two popular methods of earthing:
i) Pipe earthing
ii) Plate earthing
Measurement of Earth Resistance
The determination of the resistance between the earthing electrode and the surrounding ground is of utmost importance. The resistance measurement is made by the potential-fall method. The resistance area of an earth electrode is the area of soil around the electrode within which a voltage gradient measurable with a commercial instrument exists. In Fig. 3(a), E is the earth electrode under test and A is an auxiliary earth electrode positioned so that the two resistance areas do not overlap. B is a second auxiliary electrode placed halfway between E and A. An alternating current of steady value is passed through the earth path from E to A and the voltage drop between E and B is measured. Then the earth resistance is R = V/I, where
V = voltage drop between E and B
I = current through the earth path.
To ensure that the resistance areas do not overlap, the auxiliary electrode B is moved to positions B1 and B2 respectively. If the resistance values determined are of approximately the same magnitude in all three cases, the mean of the three readings can be taken as the earth resistance of the electrode. Otherwise the auxiliary electrode A must be driven in at a point further away from E and the test repeated until a group of three readings in good agreement is obtained. The use of an alternating current source is necessary to eliminate electrolysis effects. The test can be performed with current at power frequency from a double-wound transformer, by means of a voltmeter and an ammeter, or by means of an earth tester. The earth tester is a special type of megger which sends AC through the earth and DC through the measuring instruments. It has four terminals, P1, C1, P2 and C2, brought outside. The terminals P1 and C1 are shorted to form a common terminal, which is connected to the earth electrode under test.
The other two terminals, C2 and P2, are connected to the auxiliary electrodes A and B respectively {Fig. 3(b)}. For measurement of earth resistance, the two electrodes A and B are

driven into the ground at distances of 25 metres and 12.5 metres respectively from the earth electrode E under test. The megger is placed on a firm horizontal stand free from surrounding magnetic fields. The range switch is set to a suitable scale. The handle is then turned in the proper direction at slightly higher than the rated speed, and the reading on the scale is noted. Three readings are taken for different distances. If they are practically the same, the result is accepted; otherwise the average of the readings is taken as the earth resistance.
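The acceptance rule described above (take the mean only if the readings agree) can be sketched as a small helper. The 5% agreement tolerance is an assumption for illustration; the text does not specify one:

```python
def earth_resistance(readings, tolerance=0.05):
    """Fall-of-potential method: each reading is a (voltage, current) pair
    taken with electrode B at a different position. Returns the mean
    resistance if all readings agree within `tolerance`, else None
    (meaning electrode A must be moved further out and the test repeated)."""
    resistances = [v / i for v, i in readings]
    mean_r = sum(resistances) / len(resistances)
    if all(abs(r - mean_r) <= tolerance * mean_r for r in resistances):
        return mean_r
    return None

# Three readings in good agreement -> the mean is accepted
print(f"{earth_resistance([(2.0, 0.40), (2.1, 0.42), (2.05, 0.41)]):.2f} ohms")
```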

Propagation of radio waves takes place by different modes, the mechanism being different in each case. On this basis, propagation can be classified as:
1. Ground (surface) waves
2. Space (tropospheric) waves
3. Sky (ionospheric) waves
For the purpose of allocation of the frequency spectrum, the entire globe has been divided into three regions by the International Telecommunication Union. India falls in Region 3 of this classification.

Before we discuss the different modes of wave propagation, let us see the allocation of frequencies and the desired field strength for the various bands.
Allocation of frequencies for broadcasting
Long Wave Band - not used in India.
Medium Wave (MW) Band - MF, 300 to 3000 kHz; broadcasting uses 531 kHz to 1602 kHz.

Short Wave (SW) Band

International bands:
3900 to 4000 kHz (75 m)
5950 to 6200 kHz (49 m)
7100 to 7300 kHz (41 m)
9500 to 9900 kHz (31 m)
11650 to 12050 kHz (25 m)
13600 to 13800 kHz (19 m)
21450 to 21850 kHz (13 m)
25670 to 26000 kHz (11 m)

Tropical Band
2300 to 2490 kHz (120 m)
3200 to 3400 kHz (90 m)
4750 to 4995 kHz (60 m)
5005 to 5060 kHz (60 m)

Channel spacing - 5 kHz
VHF - 30 to 300 MHz
UHF - 300 to 3000 MHz

Short Wave
The signal delivered at any receiving point by HF propagation must, in the absence of interference from other transmissions, be above the noise field at that place by a specified RF signal-to-noise ratio. This value of signal strength is termed the minimum usable field strength (Emin). The noise floor used to calculate Emin is taken as the greatest among the values of atmospheric noise, man-made noise and intrinsic receiver noise. The minimum usable field strength Emin is the level which is 34 dB higher than the noise floor. The floor value of Emin is specified as 38 dB (34 dB S/N ratio + 3.5 dB intrinsic receiver noise level) by WARC HFBC-87.

Ground (Surface) Waves

Medium Wave (MW) signals propagate along the surface of the earth. The wave is normally vertically polarized, to avoid short-circuiting of the electric component by the ground. A medium wave induces current in the ground over which it passes, and thus loses some energy by absorption. This is made up, to some extent, by energy diffracted downward from the upper portions of the wave front. Because of diffraction, the wave front gradually tilts over. As the wave propagates over the earth it tilts more and more, and the increasing tilt causes greater short-circuiting of the electric field component of the wave; hence the field strength reduces, and the wave eventually vanishes after some distance. The range of such coverage depends on the frequency, the power of the transmitter, and ground conditions such as soil conductivity. In the daytime the field strength is steady, since the sky wave is completely absorbed by the D layer of the atmosphere. During the night, however, the D layer disappears; reflections from the E layer then affect MW transmission and lead to an increased range of coverage, but at the same time the possibility of interference also increases. Where both ground-wave and sky-wave signals are received and are of comparable strength, fading occurs; this area is called the fading zone. The fading zone should be kept as far away as possible from the transmitter. The optimum antenna that achieves this objective has a height of 0.55λ, where λ is the wavelength at the operating frequency.

Space (Tropospheric) Waves

They travel more or less in straight lines. As they depend on line-of-sight conditions, their propagation is limited by the curvature of the earth, except in very unusual circumstances. The space wave has two components: the direct wave and the wave reflected from

the surface of the earth. The direct wave will be steady and strong. It should be noted that not only the transmitting antenna height but also the receiving antenna height is equally important. Radio waves normally propagate in a curved path due to refraction in the troposphere. This results in the signal reaching a distance greater than the line of sight. To account for this additional reach caused by the variation of refractive index in the atmosphere, the radius (a) of the earth is multiplied by a factor k, taken as 4/3.
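With the 4/3-earth model, the radio horizon of an antenna at height h is d = √(2·k·a·h), roughly 4.12·√h km with h in metres; the maximum path is the sum of the horizons of the two antennas, which is why both antenna heights matter. A small sketch:

```python
import math

K = 4 / 3                  # effective-earth-radius factor
A = 6_371_000.0            # mean earth radius, m

def radio_horizon_km(height_m: float) -> float:
    """Distance to the radio horizon for an antenna at height_m metres."""
    return math.sqrt(2 * K * A * height_m) / 1000

def max_path_km(tx_height_m: float, rx_height_m: float) -> float:
    """Maximum line-of-sight path: sum of the two radio horizons."""
    return radio_horizon_km(tx_height_m) + radio_horizon_km(rx_height_m)

# 100 m transmitting tower, 10 m receiving antenna (illustrative heights)
print(f"{max_path_km(100, 10):.1f} km")
```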

Fresnel Zone
Propagation is not by a single thread-like ray. A certain volume around the line of sight, called the First Fresnel Zone, is significant for propagation. This volume should be devoid of any surface, building, etc. causing reflections. Therefore mere availability of line of sight is not sufficient; the First Fresnel Zone must also be clear.
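The radius of the first Fresnel zone at a point along the path is r = √(λ·d1·d2/(d1+d2)), where d1 and d2 are the distances to the two ends of the link; this gives a concrete clearance figure for the "volume around the line of sight". A quick sketch with illustrative values:

```python
import math

def fresnel_radius_m(freq_mhz: float, d1_km: float, d2_km: float) -> float:
    """First Fresnel zone radius (metres) at distances d1/d2 from the ends."""
    wavelength = 300.0 / freq_mhz            # metres
    d1, d2 = d1_km * 1000, d2_km * 1000
    return math.sqrt(wavelength * d1 * d2 / (d1 + d2))

# Mid-point of a 20 km VHF link at 100 MHz
print(f"{fresnel_radius_m(100, 10, 10):.0f} m")   # ~122 m must be obstruction-free
```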

Environment Effects
A built-up area has little effect at low frequencies (a few MHz). But above 30 MHz, obstruction loss and shadow loss become important. The attenuation by brick walls may be 2 - 5 dB at 30 MHz, increasing to 10 - 40 dB at 3000 MHz.
Effects of trees and vegetation
The effect of thick vegetation is to absorb RF energy, and it is markedly more severe for vertical polarization than for horizontal polarization. This is one of the reasons why TV broadcasting mostly uses horizontal polarization.

Clutter losses
The loss due to natural and man-made obstructions can only be statistically evaluated, and a certain allowance for it is made in field-strength calculations. Such losses are, in general, grouped together and referred to as clutter losses. This loss depends on the frequency of operation and the area surrounding the transmitter.

Effective Radiated Power (ERP)
ERP is the product of the transmitter output power and the gain of the transmitting antenna relative to a dipole. Alternatively, it is the sum of these quantities when they are expressed in decibels:
ERP (kW) = transmitter power (kW) x antenna gain (as a ratio)
ERP (dBm) = transmitter power (dBm) + antenna gain (dB)


Effective Isotropic Radiated Power (EIRP)

It is similar to ERP, except that the gain is expressed relative to an isotropic antenna. The gain of a dipole over an isotropic antenna is 1.64 times, or 2.15 dB:
EIRP (dBW) = ERP (dBW) + 2.15 dB

Field Strength
Received field strength (dBµV/m) = 134.8 + 10 log P - 20 log d - F
where
P = EIRP in watts
d = distance of the receiving point in metres
F = loss experienced in propagation, in dB.
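This formula is the free-space field strength of an isotropic radiator (E = √(30P)/d, converted to dBµV/m) with a propagation-loss term subtracted. A quick numerical check with illustrative values:

```python
import math

def field_strength_dbuv(p_eirp_w: float, d_m: float, loss_db: float = 0.0) -> float:
    """Received field strength in dBµV/m for EIRP p_eirp_w at distance d_m."""
    return 134.8 + 10 * math.log10(p_eirp_w) - 20 * math.log10(d_m) - loss_db

# 1 kW EIRP received 10 km away with no excess propagation loss
print(f"{field_strength_dbuv(1000, 10_000):.1f} dBµV/m")  # 84.8 dBµV/m
```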

Sporadic E layer (Es) interference

The sporadic E layer is, as its name implies, sporadic in its existence. It is an ionospheric layer whose ionization density is comparable to that of the F layer, but it usually occurs at the E-layer height of 100 to 120 km. By virtue of its high ionization density, the layer is able to reflect VHF frequencies, i.e., the band 40 to 68 MHz, over long distances. Russian television signals received in the plains of northern India have been caused by this mode.
Super refraction or ducting modes of interference
The refractivity N of the troposphere, under normal weather conditions, falls gradually with height above the earth, at a rate of about 40 to 80 N units per km. When the refractivity gradient reaches 157 N units/km or more, a ducting mode exists. During ducting, VHF/UHF radio waves are refracted (bent) down fast enough to strike the ground and be reflected again. If this condition persists over a distance, the VHF/UHF waves are trapped in the duct and propagate without much loss, rather like microwave signals through waveguides. This is called duct propagation. Such ducts can be close to the ground or elevated. Interference through super refraction or ducting is possible for very long periods in coastal areas and in places criss-crossed by rivers or adjoining the sea. The interference observed in the coastal areas between Kolkata and Chennai, and from Bangladesh TV in the North East and West Bengal, is due to super refraction/ducting modes.

Co-Channel Interference
If the wanted TV signal exceeds the interfering signal by 55 dB or more, no interference will be noticed. When the desired signal becomes weaker, Venetian-blind interference occurs. This is seen as horizontal black and white bars superimposed on the picture and moving up or down. As the interfering signal strength increases the bars become more prominent, until at a signal-to-interference ratio of 45 dB or less the interference becomes intolerable. The horizontal bars are the visible indication of the beat frequency between the interfering carriers. The beat frequency is of the order of a few hundred Hz. The offset

method is used for reduction of co-channel interference. The offset frequency is 2/3 of the line scan frequency, i.e., 15625 x 2/3 = 10416.67 Hz, or some odd multiple thereof, since the averaging process is then optimum. In the case of three stations, one station can have its carrier offset above the second and the other below it. The offset method requires only the quartz crystal of a station to be replaced by a crystal having the offset.

Adjacent Channel Interference

Stations occupying adjacent channels present a different problem. Adjacent-channel interference may occur as the result of beats between any two of the carriers. The difference of 1.5 MHz (as shown in the figure) produces a coarse beat pattern.

D Layer
It is the lowest layer of the ionosphere. Its average height is 70 km and its average thickness 10 km. The degree of ionisation depends on the altitude of the sun above the horizon. It disappears at night. It absorbs MF and HF waves to some extent and reflects some VLF and LF waves.

E Layer
This layer lies above the D layer. Its average height is 100 km, with a thickness of 25 km. It also disappears at night, as the ions recombine into molecules in the absence of solar radiation. It aids MF surface-wave propagation a little and reflects some HF waves in the daytime.
F1 Layer
It exists at a height of 180 km in the daytime and combines with the F2 layer at night. In the daytime its thickness is about 200 km. Although some HF waves are reflected from it, most pass through it to be reflected by the F2 layer. Thus the main effect of the F1 layer is to provide additional absorption for HF waves. Note that the absorption effect of the F1 layer (and of any other layer) is doubled, because HF waves are absorbed on the way up and also on the way down.
F2 Layer
It is by far the most important reflecting medium for HF waves. Its approximate thickness can be up to 200 km, and its height ranges from 290 to 400 km in the daytime. At night it falls to about 300 km, where it combines with the F1 layer. Its height and ionisation density vary tremendously, depending on the time of day, the average ambient temperature and the sunspot cycle. The F2 layer persists at night, unlike the other layers, for the following reasons:
i) Since this is the topmost layer, it is also the most highly ionised, and hence there is some chance for the ionisation to remain at night, at least to some extent.
ii) Although the ionisation density is high in this layer, the actual air density is not, and thus most of the molecules in it are ionised. Low air density gives the molecules a larger mean free path (the statistical average distance a molecule travels before colliding with another

molecule). Hence the ionisation does not disappear as soon as the sun sets. Better reception of HF waves at night is due to the combination of the F1 and F2 layers into a single F2 layer, removing the F1 layer that causes noticeable absorption during the day. The electromagnetic waves returned to earth by one of the layers of the ionosphere appear to have been reflected, but actually the bending is due to refraction.

Virtual Height
As the electromagnetic wave is refracted, it is bent down gradually rather than sharply. However, below the ionised layer, the path of the incident and returned rays is exactly the same as if reflection had taken place from a surface located at a greater height, called the virtual height of the layer. Once the virtual height is known, the angle of incidence required for the wave to return to the ground at a selected spot can be calculated easily. For any given layer, the critical frequency is the highest frequency that will be returned to earth by that layer after having been beamed straight up at it. The higher the frequency (the shorter the wavelength), the less likely it is that the change in ionisation density will be sufficient for refraction; likewise, the closer to vertical a given incident ray is, the less likely it is to be returned to the ground. A maximum frequency therefore exists above which rays pass through the ionosphere. For normal (vertical) incidence, this maximum frequency is called the critical frequency. Its value in practice ranges from 5 to 12 MHz for the F2 layer.

Maximum Usable Frequency

The MUF is also a limiting frequency, but for some specific angle of incidence other than the normal. If the angle of incidence (between the ray and the normal) is θ, it follows that
MUF = critical frequency x sec θ = f_c / cos θ
This is called the Secant Law and is useful in making preliminary calculations for a specific MUF. The MUF is defined as the highest frequency that can be used for sky-wave communication between two given points on earth; hence there is a different value of MUF for each pair of points on the globe. Normally the MUF ranges from 8 to 35 MHz, but after unusual solar activity it may rise as high as 50 MHz. The highest working frequency between a given pair of points is naturally made slightly less than the MUF.
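The Secant Law above is a one-liner in code; a quick check with illustrative values:

```python
import math

def muf_mhz(critical_freq_mhz: float, incidence_deg: float) -> float:
    """Secant law: MUF = f_c / cos(theta), theta measured from the normal."""
    return critical_freq_mhz / math.cos(math.radians(incidence_deg))

# F2-layer critical frequency of 7 MHz, ray incident at 60 degrees
print(f"{muf_mhz(7.0, 60):.1f} MHz")   # 14.0 MHz, since sec 60 deg = 2
```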

Skip Distance
As the angle of incidence is slowly reduced, the waves return closer to the transmitter. If the angle of incidence is made too small, the ray will be too close to the normal to be returned to earth; the bending will be insufficient unless the frequency in use is less than the critical frequency. The transmission path is limited by the skip distance at one end and the curvature of the earth at the other. The longest single-hop distance is obtained when the ray is transmitted tangentially to the surface of the earth. For the F2 layer, the maximum practical single-hop distance is about 4000 km. To cover greater distances, multiple hops are used. For North-South propagation no problem exists; for East-West paths, care must be taken over day/night conditions at the different locations.

Fading
Fading is the fluctuation in signal strength at a receiver; it may be rapid or slow, general or frequency-selective. Fading is due to interference between two waves which left the same source but arrived at the destination by different paths. Because the signal received at any instant is the vector sum of all the waves received, alternate cancellation and reinforcement result if there is a path difference as large as half a wavelength. Fluctuation is therefore more likely with smaller wavelengths, i.e., at higher frequencies. Fading can be due to interference between the lower and upper rays of a sky wave, between sky waves arriving by a different number of hops or by different paths, or even between a ground wave and a sky wave, especially at the lowest end of the HF band. It may also occur when a single sky wave is being received, because of fluctuations of height or density in the layer reflecting the wave. Fading is frequency-selective: components of a signal can fade independently even though their frequency separation is only a few dozen hertz, especially at the highest frequencies for which sky waves are used. This plays havoc with the reception of AM signals, which are seriously distorted by such frequency-selective fading. Fading is countered by Automatic Gain Control (AGC) in the receiver and, further, by adopting either space-diversity or frequency-diversity reception.

While there is no generally accepted formal definition of "protocol" in computer science, an informal definition, based on the above, could be "a set of procedures to be followed when communicating". In everyday usage the word is a synonym for "procedure", so a protocol is to communications what an algorithm is to computations. Communicating systems use well-defined formats for exchanging messages. Each message has an exact meaning, intended to provoke a defined response from the receiver. A protocol therefore describes the syntax, semantics and synchronization of communication. A programming language describes the same things for computations, so there is a close analogy between protocols and programming languages: protocols are to communications what programming languages are to computations. The communications protocols in use on the Internet are designed to function in very complex and diverse settings. To ease design, communications protocols are structured using a layering scheme as a basis. Instead of a single universal protocol handling all transmission tasks, a set of cooperating protocols fitting the layering scheme is used.

Figure 2. The TCP/IP model or Internet layering scheme and its relation to some common protocols.

The layering scheme in use on the Internet is called the TCP/IP model. The actual protocols are collectively called the Internet protocol suite. The group responsible for this design is the Internet Engineering Task Force (IETF). Obviously, the number of layers of a layering scheme, and the way the layers are defined, can have a drastic impact on the protocols involved. This is where the analogies come into play for the TCP/IP model, because the designers of TCP/IP employed the same techniques used to conquer the complexity of programming-language compilers (design by analogy) in the implementation of its protocols and its layering scheme. Communications protocols have to be agreed upon by the parties involved; to reach agreement, a protocol is developed into a technical standard.

Communicating systems
The information exchanged between devices on a network or other communications medium is governed by rules or conventions that can be set out in a technical specification called a communication protocol standard. The nature of the communication, the actual data exchanged and any state-dependent behaviours are defined by the specification. In digital computing systems, the rules can be expressed by algorithms and data structures; expressing the algorithms in a portable programming language makes the protocol software operating-system independent. Operating systems are usually conceived of as consisting of a set of cooperating processes that manipulate a shared store (on the system itself) to communicate with each other. This communication is governed by well-understood protocols, which can be embedded in the process code itself as small additional code fragments. In contrast, communicating systems have to communicate with each other over shared transmission media, because there is no common memory. Transmission is not necessarily reliable and can involve different hardware and operating systems on different machines. To implement a networking protocol, the protocol software modules are interfaced with a framework implemented on the machine's operating system; this framework implements the networking functionality of the operating system. The best known frameworks are the TCP/IP model and the OSI model. At the time the Internet was developed, layering had proven to be a successful design approach for both compiler and operating-system design and, given the similarities between programming languages and communication protocols, layering was applied to the protocols as well. This gave rise to the concept of layered protocols, which nowadays forms the basis of protocol design. Systems typically do not use a single protocol to handle a transmission; instead they use a set of cooperating protocols, sometimes called a protocol family or protocol suite.
Some of the best known protocol suites include IPX/SPX, X.25, AX.25, AppleTalk and TCP/IP. The protocols can be arranged in groups based on functionality; for instance, there is a group of transport protocols. The functionalities are mapped onto the layers, each layer solving a distinct class of problems relating to, for instance, application, transport, internet and network-interface functions. To transmit a message, a protocol has to be selected from each layer, so some sort of multiplexing/demultiplexing takes place. The selection of the next protocol is accomplished by extending the message with a protocol selector for each layer.
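The multiplexing described above can be illustrated by a toy encapsulation: each layer prepends a header containing a selector for the next protocol up, and the receiver demultiplexes by reading those selectors. All names and numeric codes here are invented for illustration (loosely modelled on real selector fields such as Ethernet's EtherType):

```python
# Toy protocol selector table: one-byte code -> protocol name.
SELECTORS = {0x01: "ip", 0x06: "tcp", 0x11: "udp"}

def encapsulate(payload: bytes, selector: int) -> bytes:
    """Prepend a one-byte header identifying the next protocol up."""
    return bytes([selector]) + payload

def demultiplex(frame: bytes) -> tuple[str, bytes]:
    """Read the selector and hand the rest to the named protocol."""
    proto = SELECTORS.get(frame[0], "unknown")
    return proto, frame[1:]

# Sender: wrap an application message for "tcp", then for "ip".
frame = encapsulate(encapsulate(b"hello", 0x06), 0x01)

# Receiver: peel the layers off in order.
proto, rest = demultiplex(frame)      # outer layer: 'ip'
proto2, data = demultiplex(rest)      # inner layer: 'tcp', payload b'hello'
print(proto, proto2, data)
```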


Basic requirements of protocols

Messages are sent and received on communicating systems to establish communications. Protocols should therefore specify rules governing the transmission. In general, much of the following should be addressed:

Data formats for data exchange. Digital messages are exchanged as bitstrings. The bitstrings are divided into fields, and each field carries information relevant to the protocol. Conceptually the bitstring is divided into two parts, called the header area and the data area. The actual message is stored in the data area, while the header area contains the fields with relevance to the protocol itself. Bitstrings longer than the maximum transmission unit (MTU) are divided into pieces of appropriate size.

Address formats for data exchange. Addresses are used to identify both the sender and the intended receiver(s). The addresses are stored in the header area of the bitstrings, allowing the receivers to determine whether a bitstring is intended for them and should be processed, or should be ignored. A connection between a sender and a receiver can be identified using an address pair (sender address, receiver address). Usually some address values have special meanings: an all-1s address could be taken to mean all stations on the network, so sending to this address results in a broadcast on the local network. The rules describing the meanings of the address values are collectively called an addressing scheme.

Address mapping. Sometimes protocols need to map addresses of one scheme onto addresses of another scheme, for instance to translate a logical IP address specified by the application into an Ethernet hardware address. This is referred to as address mapping.

Routing. When systems are not directly connected, intermediary systems along the route to the intended receiver(s) need to forward messages on behalf of the sender. On the Internet, the networks are connected using routers; this way of connecting networks is called internetworking.

Detection of transmission errors is necessary on networks which cannot guarantee error-free operation.
In a common approach, a CRC of the data area is added to the end of each packet, making it possible for the receiver to detect differences caused by errors. The receiver rejects a packet on a CRC mismatch and arranges somehow for retransmission.

Acknowledgement of correct reception of packets is required for connection-oriented communication. Acknowledgements are sent from receivers back to their respective senders.

Loss of information: timeouts and retries. Packets may be lost on the network or suffer long delays. To cope with this, under some protocols a sender may expect an acknowledgement of correct reception from the receiver within a certain amount of time. On a timeout, the sender must assume the packet was not received and retransmit it. In case of a permanently broken link, retransmission has no effect, so the number of retransmissions is limited; exceeding the retry limit is considered an error.[18]

Direction of information flow needs to be addressed if transmissions can only occur in one direction at a time, as on half-duplex links. This is known as media access control. Arrangements have to be made for the case when two parties want to gain control at the same time.

Sequence control. We have seen that long bitstrings are divided into pieces which are then sent on the network individually. The pieces may get lost, be delayed, or take different routes to their destination on some types of networks, so pieces may arrive out of sequence, and retransmissions can result in duplicate pieces. By marking the pieces with sequence information at the sender, the receiver can determine what was lost or duplicated, ask for the necessary retransmissions and reassemble the original message.

Flow control is needed when the sender transmits faster than the receiver or intermediate network equipment can process the transmissions. Flow control can be implemented by messaging from receiver to sender.
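Several of the requirements above (a header area in front of a data area, sender and receiver addresses, a broadcast address, sequence numbers, and a CRC for error detection) can be combined in a small framing sketch. The field widths and address values below are made up for illustration; they do not follow any real protocol.

```python
import struct
import zlib

# Hypothetical frame layout: 1-byte source address, 1-byte destination
# address, 2-byte sequence number, then the data area, then a CRC-32 of
# everything before it. All sizes are illustrative.
HEADER = struct.Struct("!BBH")   # src addr, dst addr, sequence number
BROADCAST = 0xFF                 # all-1s address: every station accepts it

def make_packet(src: int, dst: int, seq: int, data: bytes) -> bytes:
    body = HEADER.pack(src, dst, seq) + data
    return body + struct.pack("!I", zlib.crc32(body))

def parse_packet(packet: bytes, my_addr: int):
    body, (crc,) = packet[:-4], struct.unpack("!I", packet[-4:])
    if zlib.crc32(body) != crc:
        return None              # CRC mismatch: reject, arrange retransmission
    src, dst, seq = HEADER.unpack(body[:HEADER.size])
    if dst not in (my_addr, BROADCAST):
        return None              # not addressed to us: ignore
    return src, seq, body[HEADER.size:]

pkt = make_packet(src=1, dst=2, seq=7, data=b"hello")
print(parse_packet(pkt, my_addr=2))   # (1, 7, b'hello')
bad = pkt[:5] + bytes([pkt[5] ^ 0xFF]) + pkt[6:]
print(parse_packet(bad, my_addr=2))   # None (detected corruption)
```

The receiver's two early returns mirror the rules in the text: a CRC mismatch leads to rejection, and a destination address that is neither the station's own nor the broadcast address leads to the frame being ignored.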

Getting the data across a network is only part of the problem for a protocol. The data received has to be evaluated in the context of the progress of the conversation, so a protocol has to specify rules describing the context. These kinds of rules are said to express the syntax of the communications. Other rules determine whether the data is meaningful for the context in which the exchange takes place; these kinds of rules are said to express the semantics of the communications. Both intuitive descriptions and more formal specifications in the form of finite state machine models are used to describe the expected interactions of the protocol.[22] Formal ways of describing the syntax of the communications are Abstract Syntax Notation One (an ISO standard) and Augmented Backus-Naur Form.
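A finite state machine model of the kind mentioned above can be made concrete with a small table of transitions. The sketch below models one side of a hypothetical connection-oriented protocol; the state and event names are illustrative (loosely inspired by a TCP-like open/close handshake), not a real specification.

```python
# Hypothetical protocol FSM: (current state, event) -> next state.
TRANSITIONS = {
    ("CLOSED",      "open"):    "SYN_SENT",
    ("SYN_SENT",    "syn_ack"): "ESTABLISHED",
    ("ESTABLISHED", "close"):   "FIN_SENT",
    ("FIN_SENT",    "fin_ack"): "CLOSED",
}

def step(state: str, event: str) -> str:
    """Events with no transition in the current state are protocol errors."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not valid in state {state}")

state = "CLOSED"
for event in ("open", "syn_ack", "close", "fin_ack"):
    state = step(state, event)
print(state)  # CLOSED
```

The table makes the semantics explicit: any message that arrives in a state where it has no transition is, by definition, not meaningful in that context.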

Protocols and programming languages

Protocols are to communications what algorithms or programming languages are to computations. This analogy has important consequences for both the design and the development of protocols. One has to consider the fact that algorithms, programs and protocols are just different ways of describing the expected behaviour of interacting objects. A familiar example of a protocolling language is HTML, the language used to describe the web pages that are exchanged by the actual web protocols.

In programming languages, the association of an identifier with a value is termed a definition. Program text is structured using block constructs, and definitions can be local to a block. The localized association of an identifier with a value established by a definition is termed a binding, and the region of program text in which a binding is effective is known as its scope. The computational state is kept using two components: the environment, used as a record of identifier bindings, and the store, used as a record of the effects of assignments.

In communications, message values are transferred using transmission media. By analogy, the equivalent of a store would be a collection of transmission media instead of a collection of memory locations. A valid assignment in such a protocolling language could be ethernet := 'message', meaning a message is to be broadcast on the local Ethernet. On a transmission medium there can be many receivers; for instance, a MAC address identifies a network card on the transmission medium (the 'ether'). In this imaginary protocolling language, the assignment ethernet[mac-address] := message could therefore make sense. By extending the assignment statement of an existing programming language with these semantics, a protocolling language could easily be imagined.

Operating systems provide reliable communication and synchronization facilities for communicating objects confined to the same system by means of system libraries. A programmer using a general-purpose programming language (like C or Ada) can use the routines in the libraries to implement a protocol, instead of using a dedicated protocolling language.
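The store/medium analogy in the text can be made tangible as a toy model: treat the collection of transmission media as a dictionary, so the imaginary assignment ethernet[mac-address] := message becomes an ordinary indexed assignment. This is purely an illustration of the analogy; real transmission media are of course not random-access memory.

```python
# Toy model of "transmission media as a store": every medium is a
# mapping from receiver address to the message currently in transit.
media: dict[str, dict[str, bytes]] = {"ethernet": {}}

def send(medium: str, address: str, message: bytes) -> None:
    # The analogue of ethernet[mac-address] := message in the text.
    media[medium][address] = message

def receive(medium: str, address: str) -> bytes:
    # Receiving consumes the message from the medium.
    return media[medium].pop(address)

send("ethernet", "aa:bb:cc:dd:ee:ff", b"hello")
print(receive("ethernet", "aa:bb:cc:dd:ee:ff"))  # b'hello'
```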

Protocol design
Communicating systems operate in parallel. The programming tools and techniques for dealing with parallel processes are collectively called concurrent programming. Concurrent programming only deals with the synchronization of the communication. The syntax and semantics of the communication governed by a low-level protocol usually have modest complexity, so they can be coded with relative ease. High-level protocols with relatively large complexity could, however, merit the implementation of language interpreters. Concurrent programming has traditionally been a topic in operating systems theory texts. Formal verification seems indispensable, because concurrent programs are notorious for the hidden and sophisticated bugs they contain.

A mathematical approach to the study of concurrency and communication is referred to as Communicating Sequential Processes (CSP).[31] Concurrency can also be modelled using finite state machines such as Mealy and Moore machines, which are in use as design tools in the digital electronics systems we encounter in the form of hardware used in telecommunications or electronic devices in general. This kind of design can be a challenge, to say the least, so it is important to keep things simple. For the Internet protocols in particular, and in retrospect, this meant that a basis for protocol design was needed to allow the decomposition of protocols into much simpler, cooperating protocols.
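A Mealy machine (where the output depends on both the current state and the input) can be sketched as a transition table. The toy machine below is a hypothetical receiver with an alternating sequence bit, in the style of stop-and-wait: it acknowledges each frame, and re-acknowledges duplicates without changing state. The state encoding and outputs are invented for illustration.

```python
# Mealy machine: (state, input) -> (next state, output).
# State = the sequence bit the receiver expects next.
MEALY = {
    (0, 0): (1, "ack0"),  # expected frame 0 arrived: ack, expect 1 next
    (0, 1): (0, "ack1"),  # duplicate of previous frame 1: re-ack, stay
    (1, 1): (0, "ack1"),
    (1, 0): (1, "ack0"),
}

def run(machine, state, inputs):
    """Feed a sequence of inputs through the machine, collecting outputs."""
    outputs = []
    for symbol in inputs:
        state, out = machine[(state, symbol)]
        outputs.append(out)
    return state, outputs

# Frame 1 arrives twice (a retransmitted duplicate) and is re-acked.
print(run(MEALY, 0, [0, 1, 1, 0]))  # (1, ['ack0', 'ack1', 'ack1', 'ack0'])
```

Writing the behaviour as a table like this is exactly what makes such designs amenable to the formal verification mentioned above: every (state, input) pair is either defined or provably unreachable.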