
GSM & WCDMA

What is 0 dBm?
What is VSWR?
Parameter Timing Advance (TA)
What is RTWP?
What is HSN and MAIO in GSM?
What is Rake Receiver?
What is Frequency Hopping - FHSS?
What is Modulation?
What is RF Drive Test (Testing)?
OSI 7 Layers Model
What is Antenna?
What is Cellular Field Test Mode?
What is Ec/Io (and Eb/No)?
Goodbye IPv4... Hello IPv6!
What is MIMO?
How to Run a RF Site Survey (Tips and Best Practices)
IP Packet switching in Telecom - Part 1
IP Packet switching in Telecom - Part 2
IP Packet switching in Telecom - Part 3
What is Antenna Electrical and Mechanical Tilt (and How to use it)?
IP Packet switching in Telecom - Part 4
What is Retransmission, ARQ and HARQ?










Watt (W) and milliwatt (mW)
First of all, to understand what something like 0 dBm means, we at least have to know the
basic unit of power, the Watt. By definition, 1 Watt is 1 Ampere (A) of current under 1 Volt
(V) of voltage, or in mathematical terms P = V x I. It is interesting to note that the amount of
power radiated by an antenna is very small in terms of Watts, but it is enough to reach
several miles.

And since these signals are very small, it is more common to refer to them using a prefix, such
as the milliwatt (mW), which is 1/1000 (one thousandth) of a Watt.

Mathematics

Besides being rather small, these signals - like other physical quantities such as
electricity, heat or sound - propagate nonlinearly. It is more or less like compound
interest on a loan.
Or, bringing it into our world of engineers, imagine a cable carrying 100 Watts, with a
loss of 10% per meter. If the loss were linear, after 10 meters there would be no
power left!


But that is not how it happens. After the first meter, we have 10% less power, that is, 90 Watts.
And this is the value that 'enters' the cable for the next meter. Thus, at the second meter,
we have 10% less of that power, or 81 Watts (= 90 - 9). Repeating this calculation,
you see that the power in fact never reaches zero, as it would if the loss were linear. (At
the end of the cable we actually have 34.86 Watts.)
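As a quick check, here is a minimal Python sketch of this compound loss calculation, using the same illustrative values (100 W input, 10% loss per meter, 10 m of cable):

# Minimal sketch: compound (non-linear) loss of 10% per meter along a 10 m cable.
# Values are the illustrative ones from the example above.
power_w = 100.0          # power entering the cable, in Watts
loss_per_meter = 0.10    # 10% lost in each meter

for meter in range(1, 11):
    power_w *= (1 - loss_per_meter)   # what 'enters' the next meter
    print(f"After {meter:2d} m: {power_w:6.2f} W")

# The final value printed is about 34.87 W (the ~34.86 W quoted above), not zero -
# the loss compounds, it does not simply subtract 10 W per meter.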


To deal with this - and make our lives easier - we need to know about
logarithms. We saw them in school, but there are people who do not even like to hear the word. The
good news is that we do not need to know much about them, just understand what they are.
Just understand that if we transform the quantities into logarithms, the calculations
become additions and subtractions rather than multiplications and divisions.

Of course, in order to do the calculations by adding and subtracting, we must make the
necessary conversions. But with the help of a calculator or Excel, it is not that complicated.

Decibels (dB)
By definition, we have:

dB = 10 x log10 ( P1 / P2 ), a ratio between two powers;
dBm = 10 x log10 ( P / 1 mW ), a power referenced to 1 milliwatt.

Sure, we say that working with logarithms (or decibels) is much easier - and for the common
good. But from the formulas above alone, it is still hard to see why. So the best way to understand
why we use dB (decibels) is to see how they help us, through a practical example.
Consider a standard wireless link, where we have a transmitter (1) and a receiver (5),
Antennas (3), Cables, Jumpers and Connectors (2) and Free Space (4).


Using real values, and without the help of dB, let's do the math and see, starting from the
transmitted power, how much power we have at the receiver. With dummy values, but close
to reality, we have:
Transmitter Power = 40 Watts
Cables and connectors loss = x 0.5 (half the power)
Antenna Gain = x 20 (20 times the power)
Free Space Loss = x 0.000 000 000 000 001 (a tiny fraction of the power)
Note: This amount of free space loss is quite big. It is obtained based on the distance
and other factors. For now, just accept that it is a practical value of RF loss for the given
distance of our link.
The link with the absolute values in Watt would then be as below.


We can work this way, of course. But you must agree that it is not very friendly.
Now, if we use the proper conversion of power, gain and loss to dB, we can simply add and
subtract.

It is so much easier, isn't it?
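For reference, here is a minimal Python sketch of this link budget. The free-space loss of 150 dB used below is an assumed figure, chosen only so that the result matches the -84 dBm received level mentioned at the end of this section:

import math

def watt_to_dbm(p_watt):
    """Convert power in Watts to dBm (the same formula used in the Excel example below)."""
    return 10 * math.log10(p_watt * 1000)

# Illustrative link budget, based on the values above.
# The 150 dB free-space loss is an assumption for this example only.
tx_power_dbm = watt_to_dbm(40)    # 40 W  -> about +46 dBm
cable_loss_db = 3                 # x 0.5 -> about -3 dB (at each end)
antenna_gain_db = 13              # x 20  -> about +13 dB (at each end)
free_space_loss_db = 150          # assumed path loss for this example

rx_power_dbm = (tx_power_dbm - cable_loss_db + antenna_gain_db
                - free_space_loss_db
                + antenna_gain_db - cable_loss_db)

print(f"Received power: {rx_power_dbm:.1f} dBm")   # about -84 dBm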
Now we just need to know the formulas to do the conversions.

Converting with Formulas in Excel
Considering that the power in Watts is in cell B3, the formula for converting W to dBm is:
= 10 * ( LOG10 ( 1000 * B3 ) )
And the formula for the reverse - converting dBm to Watts, considering that our power in dBm is
in cell B6 - is:
= ( 10 ^ ( B6 / 10 ) ) / 1000
And as a result, we have the calculated values.


Note that the factor of 1000 appears in these formulas because we are working in Watts, but the
dBm reference is 1 milliwatt.
To calculate (convert) dB to a ratio, or a ratio to dB, the formulas do not include the factor of
1000.
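The same conversions can be written as small Python functions, using exactly the formulas above:

import math

def watt_to_dbm(p_watt):
    # dBm = 10 * log10(1000 * P[W])  -> the factor of 1000 takes Watts to milliwatts
    return 10 * math.log10(1000 * p_watt)

def dbm_to_watt(p_dbm):
    # Inverse: W = 10^(dBm/10) / 1000
    return (10 ** (p_dbm / 10)) / 1000

def ratio_to_db(ratio):
    # Plain ratio <-> dB: no factor of 1000 here
    return 10 * math.log10(ratio)

def db_to_ratio(db):
    return 10 ** (db / 10)

print(watt_to_dbm(1))      # 30.0  -> 1 W = +30 dBm
print(dbm_to_watt(0))      # 0.001 -> 0 dBm = 1 mW
print(ratio_to_db(2))      # ~3.01 -> doubling the power is about +3 dB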



Calculations without using a calculator
Of course, in projects we will use calculators and programs such as Excel. But we should also
know how to do these calculations (conversions) without a calculator. If someone tells you
that the power is +46 dBm, you need to know what that means in terms of Watts.
For this, there are a few tricks that can be used to arrive at least at an approximate value.
A good way is to memorize the multiplying factors equivalent to some dB values, as in the
table below (at least those in bold).


With the corresponding values of dB and multiplying factor, let us convert, for example, +46 dBm to mW.
Answer: First, we express 46 as a sum of values that we already know by heart.
So 46 = 10 + 10 + 10 + 10 + 3 + 3
That is, we multiply the reference value (1 mW) four times by the factor of 10 and twice by the
factor of 2.
Which gives us
1 mW x 10 x 10 x 10 x 10 = 10 000 mW
10 000 mW x 2 x 2 = 40 000 mW = 40 W
That is, +46 dBm is equal to 40 Watts.

Conclusion
Well, I think by now you can see that when we do the calculations in dB, everything is
easier. Moreover, the vast majority of telecom equipment has the specifications of its units in
dB (Power, Gain, Loss, etc.).
In short, just use basic math to understand the values and reach the final figures.
When we say that a signal is attenuated by 3 dB, it means that the final power is half the
initial power. Likewise, if a given power is increased by 3 dB, it means twice the power.
A good practice, irrespective of how you will work with the calculations, is to memorize at least
some values, such as 0 dBm = 1 mWatt (our initial question), 30 dBm = 1 Watt, and, from our
example, 46 dBm = 40 Watts.
With these you can quickly derive other equivalents.
For example, 43 dBm = 46 dBm - 3 dB. That is, half the power of 46 dBm. So, 43 dBm =
20 Watts!
And finally, in our example, the received power was -84 dBm, remember?

In this case, there is no need to memorize it. Just know that it is equivalent to a very low
power, but enough, for example, for a good GSM call.



VSWR
To understand what VSWR is, we need to talk a little bit about signal propagation in radio
frequency systems.
Simply put, radio frequency signals are carried by cables between the transmitters /
receivers and their respective antennas.


By definition, VSWR, or Voltage Standing Wave Ratio, is the ratio of the maximum voltage
amplitude to the minimum voltage amplitude of the standing wave. That does not help much, does it?
Okay, let's try to see how it all relates...
In radio frequency systems, the characteristic impedance is one of the most important
factors to consider. In our case this factor is typically 50 Ohms. This is a constructive
parameter, ie a characteristic determined by the component's construction. In the case of a cable,
for example, it depends on the size of the inner and outer conductors, and on the type of
insulation between them. All components of a link - cables, connectors, antennas - are
constructed to have the same impedance.
When we insert an element into our system, we have what we call the Insertion Loss, which
can be understood as what is lost, taking into account what actually went in
and what came out.
And this loss occurs in two ways - by Attenuation - especially on cables - and by Reflection.
As for the attenuation along the cables, there is not much we can do. Part of the signal is
lost along the cable through the generation of heat, and also through unwanted radiation off the cable.
This loss is a characteristic of the cable itself, and is defined in terms of dB per unit of length - the
longer the cable, the greater the loss. This attenuation also increases with increasing
temperature and frequency. Unfortunately, these factors are not really within our control,
since the frequency is already set by the system we use, and the cable will be
exposed to the climatic variations of wherever it has to pass.
The most we can do is try to use a cable with less attenuation, ie a cable with high quality
materials used in the construction of its inner and outer conductors and of the insulating
dielectric. As a rule, the larger the diameter of the cable, the lower its attenuation.
Typical diameters are 1/2", 7/8" and 1 5/8".
The choice of the coaxial cable for the system is a process that requires a very comprehensive
analysis, taking into account the characteristics (flexibility, etc.) and costs of the several
existing cable options, the necessary cable length - and the consequent loss that it will
introduce - and the loading of the tower or brackets where the cables will be mounted, among others.


But the other form of loss that we have in our system, and which can be controlled a bit more, is
the loss by reflection, ie the part of the signal that simply comes back, lost out of the end where it
was injected. For this reason we call it Return Loss.
If there is any problem in the path between the transmitter / receiver and the antennas -
such as a kink or water infiltration - the path ends up with an impedance mismatch. So part of
the signal that ideally should leave through the antenna comes back, reflected!
Speaking in terms of impedance matching, if the values of X, Y and Z are equal, we
have the following.

Now, with values close to a real unmatched impedance scenario, we have the following.


If we consider an ideal transmission line, the VSWR would be 1:1, ie all the power reaches
its destination, with no reflection (nothing lost).


And for the worst transmission medium in the world, we would have an infinite VSWR, ie all the
power would be reflected (lost).



In practice
It is clear that there is no ideal system, nor one that is the worst in the world. What
happens is that there is a maximum VSWR that each application can accept. The typical
value in our case is 1.5:1.
So what problems can a bad (very large) VSWR cause? Besides the effectively radiated power
being much smaller than it should be, electronic components that have no
protection against that unwanted reflected signal may even burn out.
So, as basic recommendations:
Avoid bending the cables as much as possible - making curves as smooth as possible - and tighten the
connectors, sealing the system so that it does not suffer problems such as water seepage or dust.
In addition, connectors and cables must be assembled by professionals, using
professional equipment. There is no point in tightening a badly made connector.
Always use components of the best possible quality: no equipment is perfect, and even in production
processes flaws arise. The quality of the material and of the manufacturing process of the elements
is paramount in order to achieve a better signal quality.
Check that all elements of the system have the same impedance.


Tables and Graphs
It is not the goal here to explain what standing waves are, because understanding them
requires significant wave theory. But a simple and very interesting way for you to see - and
understand - how these waves are formed is shown on the site bessernet.com. Be sure to
visit the link below. Enter a return loss value, hit enter, and check!
http://www.bessernet.com/Ereflecto/tutorialFrameset.htm

The quantities VSWR, Return Loss (dB) and Reflected Power (%) are related, and
can be converted into one another using the formulas or tables below.
For the standing wave (please visit the link above to understand it first):




And for the power transmitted and reflected:




With some values tabulated, we have the table below.


Here comes a good tip: understand the Return Loss as 'how much weaker, in dB, is the
reflected unwanted signal, compared with the transmitted signal?'
In the case of 1.5:1, the reflected power is 14 dB below the original value, ie about 4% of the power
is lost. Note that with a VSWR of 1.9:1, almost 10% of the energy is lost!
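These relations can also be computed directly. Here is a minimal Python sketch using the standard reflection-coefficient formulas, which reproduces the values quoted in the tip above:

import math

def vswr_relations(vswr):
    """Return (return loss in dB, reflected power in %) for a given VSWR."""
    gamma = (vswr - 1) / (vswr + 1)          # reflection coefficient magnitude
    return_loss_db = -20 * math.log10(gamma)
    reflected_pct = 100 * gamma ** 2
    return return_loss_db, reflected_pct

for vswr in (1.1, 1.5, 1.9, 2.0):
    rl, refl = vswr_relations(vswr)
    print(f"VSWR {vswr}:1  ->  Return Loss {rl:4.1f} dB, Reflected {refl:4.1f} %")

# VSWR 1.5:1 gives about 14 dB return loss and about 4 % reflected power;
# VSWR 1.9:1 gives almost 10 % reflected power, as mentioned above.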

Conclusion
To conclude, we can understand the VSWR as an indicator of the signal reflected back to
the radio frequency transmitter, always with the value 1 in the denominator. And the
lower this index, the better!
Thus, a radio frequency system with a 1.4:1 VSWR is better than one with 1.5:1!
And another with a 1:1 VSWR would have a perfect impedance matching. In other words,
something that only occurs in theory.
Finally, the VSWR of a radio frequency system can be measured by specialized equipment.
One of them, and a well known one, is the Site Master. With its "Distance-To-Fault" mode you can
identify the location of problems in a damaged system.
What is Parameter Timing Advance (TA) in GSM?
The Timing Advance is a parameter that allows the GSM BTS to control the signal delays in
its communication with the mobiles.
More specifically, it is calculated from the delay of the information bits in the Access Burst received
by the BTS.

Recalling a little: GSM uses TDMA, with sequentially assigned timeslots, to allow different
users to share the same frequency.
A burst represents the physical content of a timeslot and can be of 5 types: Normal,
Frequency Correction, Synchronization, Access or Dummy.
Each burst can carry bits of different types: Information, Tail, Training Sequence.
We have eight timeslots; each user transmits within 1/8 of that time, periodically. The
arrival time of each slot is therefore known.
Users are randomly located around the station, some closer and others more distant, yet we can
consider the propagation environment as being the same for everyone.
So if we know the time and the speed at which the signal travels, we can calculate the distance!
And how can this parameter be used, beyond simply checking how far we are from the BTS?

Applications
A major application of this parameter is to control the time at which each mobile can
transmit a traffic burst within a timeslot, in order to avoid collisions with the transmissions of
the other adjacent users.


The TA is signaled on the SACCH as a number between 0 and 63, in units of bit
periods (3.69 microseconds). Since the signal travels at about 300 meters per microsecond, each TA
step corresponds to a distance of approximately 1100 meters. Because this is the round-trip distance,
each increase in the value of TA corresponds to a distance of 550 meters between the mobile and the BTS.
For example, TA = 0 means that the mobile is up to 550 meters from the station, TA = 1
means it is between 550 and 1100 meters, TA = 2, from 1100 to 1650 meters, and so on.


The maximum distance allowed by the TA between the MS and the BTS is 35 km (GSM 850 /
900), ie 63 x 550 meters.
So, for example during a drive test, we can estimate how far we are from the BTS through
the value of TA. It does not give us the exact position, but it gives a range with a granularity of 550
meters.
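As a small illustration, here is a minimal Python sketch of this TA-to-distance mapping, using the 550-meter step described above:

# Minimal sketch: approximate distance range from the GSM Timing Advance value.
# Each TA step corresponds to roughly 550 m between the MS and the BTS (see text above).
TA_STEP_M = 550

def ta_to_distance_range(ta):
    """Return (min, max) distance in meters for a TA value between 0 and 63."""
    if not 0 <= ta <= 63:
        raise ValueError("TA must be between 0 and 63 in standard GSM")
    return ta * TA_STEP_M, (ta + 1) * TA_STEP_M

for ta in (0, 1, 2, 63):
    low, high = ta_to_distance_range(ta)
    print(f"TA = {ta:2d}  ->  between {low} and {high} meters")
# TA = 63 gives the ~35 km limit of GSM 850/900.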
By controlling interference with continuous adjustment of the TA, we have less data loss, and
we improve the quality of our signal.
As this is a parameter directly related to distance, it is natural that the TA is also used
in locating applications.
Another good application is handover control.
Imagine you have a cell that uses two concentric bands. You can set the TA as a condition to allow
the handover from one band to another.
More specifically: if you have a cell with 850/1900, you can set the 850 band as BCCH, and
1900 only for traffic. A TA threshold then controls the terminal so that it does not hand over to
the 1900 band if it is far from the BTS.


Extended Range
Despite the 35 km limitation of the GSM standard, as we said, you can enable a feature
that allows the TA to be greater than 63. For this, the station receives the uplink signal in two
adjacent timeslots, instead of just one.

Conclusion
This was a brief explanation of the parameter TA in GSM.



What is RTWP?
If you work with UMTS, you've probably heard someone talk about RTWP. Its definition can be
found in a dictionary of acronyms, such as http://acronyms.thefreedictionary.com/RTWP:
Received Total Wideband Power.
It represents a measurement in UMTS technology: the total level of noise within the UMTS
frequency band of a given cell.
RTWP is related to uplink interference, and its monitoring helps control call drops -
mainly CS. It is also important in capacity management, as it provides information
to Congestion Control regarding Uplink Interference.
In UMTS, the uplink interference may vary due to several factors, such as the number of
users in the cell, the service, connection types, radio conditions, etc.
As our goal is always to be as simple as possible, we will not delve into the formulas or
concepts involved. We will instead learn the typical values, and what must be done in
case of problems.

Typical Values
Ok, we know that RTWP can help us check the uplink interference; now we need to
know its typical values.
In a normal, unloaded network, an acceptable average RTWP value is generally between
-104.5 and -105.5 dBm.


Values around -95 dBm indicate that the cell has some uplink interferers.
If the value is around -85 dBm, the situation is ugly, with strong uplink interferers.
Usually we have Maximum, Minimum and Mean RTWP measures. However, the maximum and
minimum values are recommended only as auxiliary references, since they may have been
caused by a peak of access, or may even have been momentarily forced by some
algorithm.
Thus, the value that helps us most, and carries the most accurate information, is the Mean
RTWP!
For cells that have two carriers, the RTWP difference between them should not
exceed 6 dB.
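As a rough illustration of these typical values, here is a minimal Python sketch; the exact thresholds below are only illustrative choices around the figures quoted above, not any vendor's alarm logic:

def classify_rtwp(mean_rtwp_dbm):
    """Rough classification of the Mean RTWP, using the typical values above.
    The thresholds are illustrative only, not any vendor's alarm logic."""
    if mean_rtwp_dbm <= -104.5:
        return "normal (around the unloaded-network level)"
    if mean_rtwp_dbm <= -98:
        return "slightly elevated - worth monitoring"
    if mean_rtwp_dbm <= -90:
        return "some uplink interferers (around -95 dBm)"
    return "strong uplink interferers (around -85 dBm or higher)"

def carriers_ok(rtwp_carrier1_dbm, rtwp_carrier2_dbm):
    """For a two-carrier cell, the RTWP difference should not exceed 6 dB."""
    return abs(rtwp_carrier1_dbm - rtwp_carrier2_dbm) <= 6

print(classify_rtwp(-105.0))       # normal (around the unloaded-network level)
print(classify_rtwp(-95.0))        # some uplink interferers
print(classify_rtwp(-85.0))        # strong uplink interferers
print(carriers_ok(-104.8, -97.0))  # False: more than 6 dB apart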



Based on these typical values, most vendors have an alarm: "RTWP Very High".

What to do in case of problems?
We have seen that bad RTWP goes along with performance degradation, mainly CS call drops. Note:
actually, it is not the RTWP that causes the performance degradation. What happens is that when its
value is 'bad', it is actually indicating the presence of interference - the latter being
responsible for the degradation.
But what can we do when we find bad values?
If the RTWP is not at acceptable levels, some actions should be taken.
The first thing to do is check whether there is a configuration issue at the RNC or NodeB. This is the
most common case, especially in new activations.
Once the parameter settings are verified, the next step is the physical inspection, especially of jumpers
and cables, which are often partially swapped. It should also be checked whether there are faulty
transmitters, or any other problem that could generate intermodulation between the NodeB and the antenna.
If the parameter settings and hardware are OK, the chance is very high that we have external
interference, such as an interfering repeater.
In cases where there may be external interference, we must act following a
prioritization based on how much it is affecting the cell KPIs across the network, whether the cell
carries high traffic, major subscribers, etc.
Note: There are many forms of uplink interference, both internal and external. Only a
few are listed above. Going deeper into all the possibilities is beyond our goal of simply
teaching the concepts, but it is a suggestion for whoever wants to deepen the study,
identification and elimination of interference.

In practice
Finding - and eliminating - interference problems is one of the biggest challenges in our
area. Because it is such a complex problem, we recommend that enough data be collected for
each investigation. Insufficient data can lead to erroneous conclusions, further
worsening the problem.
The uplink interference may appear only in specific periods. Thus, it is recommended that
data be collected for at least one week (7 days), 24 hours a day. Usually this amount of
data is sufficient. In the figure below, we see different days and times - colored - in a fictional
example where the interference occurred.


Data should be collected not only for the suspicious cell, but also for its adjacent cells, allowing a
triangulation that increases the chances of locating the source of interference.
Another way to locate the source of interference is to do a test in the field. A rigger
gradually changes the azimuth of the antenna, while another professional makes RTWP
measurements. That is, with the antenna direction information and the respective
RTWP values, you can draw very good conclusions.
Obviously, changing the live system may not be a good practice, so the tests can instead be
made with a Yagi antenna and a Spectrum Analyzer.
Vendors offer several ways to measure RTWP, using the OSS, performance counters and
logs.

Conclusion
In this brief tutorial, we learned what RTWP is, and that the ideal typical value is between -104.5
dBm and -105.5 dBm.
As the RTWP is directly related to Uplink Interference - and we know that interference is the
main cause of performance degradation - we conclude that improving the RTWP, ie bringing
it as close as possible to -105 dBm, improves the Call Drop Rate!
IMPORTANT: Taking the opportunity, see what was used at the start of this tutorial - an acronym
dictionary - to describe RTWP. Remember that this site has been the subject of a very
interesting tutorial in the Tips section. If you have not visited that section of the portal yet,
I strongly recommend it, because it has many topics that help our growth in the telecom and IT
area.
What is HSN and MAIO in GSM?
Today let's understand the MAIO and HSN parameters of a GSM network.
The terms MAIO and HSN are often used, but many people are confused about their
planning. That's right, HSN and MAIO are used in the frequency planning of a GSM network, and
knowing them well naturally leads us to better results.



Quickly: the HSN is used to define the hopping sequence over a frequency list, and the MAIO
is used to set the initial frequency in this list.
That did not help? So come along and let's try to understand it better...
Note: The goal here is not to teach HSN and MAIO planning, since that task involves many
possible configurations and scenarios, which would escape the scope of our tutorial. The
main goal today is to understand, in a plan already deployed, what the assigned HSN and
MAIO values mean.

Frequency Hopping and the MA List
To understand how HSN and MAIO are used in planning, we first need to review some brief
concepts.
Frequency Hopping, or FH: one of the great advantages of the GSM system, in the constant
search to reduce interference. More on FH in a future tutorial.

MA List: the set of frequencies (channels) assigned to a particular sector, ie the channels that
can be used to serve user calls.
To illustrate, let's consider a sector with 4 TRXs, where the first TRX is used for BCCH and
the others are TCH TRXs.


The MA List with the traffic channels would then be:


HSN and MAIO
Sure, with the example in mind, let us return to our parameters.
First, the definition of HSN: Hopping Sequence Number. It is a number that defines the
frequency hopping algorithm, and it can vary from 0 to 63, ie there are 64 hopping algorithms
that can be used in GSM.


If the HSN is zero, the frequency hopping sequence is cyclic, ie the frequencies are used in order.
If the HSN is greater than zero, the frequencies vary pseudo-randomly.
When hopping is enabled - our case - all TRXs in the SAME SECTOR have the SAME HSN.
And if we have 1x1 SFH, it is recommended to have the SAME HSN for ALL SECTORS of
the BTS.

In our example, the MA List is small - just three frequencies. The size of the MA List should
be taken into account in the HSN planning: the HSN should be assigned so as to
minimize the average probability of collision, according to the assigned MAIOs.
And how are MAIOs assigned?
Well, first defining MAIO: Mobile Allocation Index Offset. It is the MAIO that designates the initial
frequency position - among the frequencies available in the MA List, that is, the list over which the
frequency hopping occurs. It is the frequency with which the TRX starts hopping.
MAIO planning is straightforward if the number of TRXs is small compared to the length of
the hopping sequence.
For example, MAIO 0 means that the TRX should start with the first frequency, f1.
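As a small illustration of the cyclic case (HSN = 0), here is a Python sketch of how the frame number and the MAIO select a frequency from the MA List; the pseudo-random algorithm used when the HSN is greater than zero is defined in the GSM specification and is not reproduced here:

# Minimal sketch of cyclic frequency hopping (HSN = 0) over an MA List.
# For HSN > 0, GSM uses a pseudo-random hashing algorithm (not shown here).
ma_list = ["f1", "f2", "f3"]   # traffic frequencies of our 4-TRX example sector

def cyclic_hop(frame_number, maio, ma_list):
    """With HSN = 0 the index simply cycles through the MA List,
    starting at the position given by the MAIO."""
    index = (frame_number + maio) % len(ma_list)
    return ma_list[index]

# Two TRXs with the same MA List and HSN, but different MAIOs, never use
# the same frequency at the same time:
for fn in range(6):
    f_trx2 = cyclic_hop(fn, maio=0, ma_list=ma_list)
    f_trx3 = cyclic_hop(fn, maio=1, ma_list=ma_list)
    print(f"FN {fn}: TRX2 -> {f_trx2}, TRX3 -> {f_trx3}")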


GSM Automatic Frequency Planning Tools
The concepts of HSN and MAIO are important, and when the number of TRXs and frequencies is
small, we can even do the planning 'by hand'.
However, the best way - and always recommended - is to use network planning tools
suitable for this purpose, such as AFP, from Optimi, or Ultima Forte, from Schema.
These tools can be fed with measurements collected from the network (via BSS and
/ or Drive Test), and with built-in predictions (calculations) that allow the creation of an
Interference Matrix. Based on this matrix, along with other algorithms, they allow a better
design of the parameters, based on critical conditions such as traffic load and access. According
to the characteristics of each sector, they then provide the final plan, including the
possibility of simulations.

Conclusion
Knowing the concepts of HSN and MAIO, we can use them correctly in our plans, and/or
audit our existing networks. For example, for two hopping sequences, if we have the
same HSN and different MAIOs, we guarantee that they never overlap, or in other words, that they are
orthogonal.
Another conclusion is that two channels with different HSNs, but with the same MA List and
in the same timeslot, will interfere in 1/n of the bursts, where n equals the number of
different frequencies in the hopping sequence. This conclusion is somewhat more complex
to see, and is due to the pseudo-random nature of the HSN. So if you are interested, deepen
your studies of MAIO and HSN. Otherwise, just understand that this is why we say that
Frequency Hopping somehow averages the interference across the network.















What is Rake Receiver?
Have you ever heard of a "Rake Receiver"? Surely you have heard of a receiver, and you
probably know what a rake is - the garden tool.
With the pictures below, can you imagine what a Rake Receiver might be?


Ok, if the analogy does not help much, let's go.


In a wireless communication system, the signal can reach the receiver via multiple distinct
paths.


In each path, the signal can be blocked, reflected, diffracted and refracted. The signals from
these many routes reach the receiver faded. The Rake receiver is used to correct this effect,
selecting and combining the correct / strongest signals, bringing great benefit to CDMA and WCDMA systems.
Okay, but what exactly is the Rake Receiver, and how does it do this?

Definition
The Rake Receiver is nothing more than a radio whose goal is to minimize the effects
of the fading the signal suffers, due to multipath, as it travels. In fact, we can understand the Rake
Receiver as a set of sub-radios, each slightly delayed, so that the individual components of
the multipath can be tuned properly.
Each of these components is decoded completely independently, but they are combined at the
end. It is as if we took the original signal and added other copies of the original
signal, reaching the receiver with different amplitudes and arrival times. If the receiver
knows the amplitude and arrival time of each of these components, it is possible to estimate
the channel, allowing the components to be added together.
Each of these sub-radios of the Rake Receiver is called a Finger. Each finger is responsible for
collecting the energy of a bit or symbol, hence the analogy with the rake that we use in
the garden, where each tine of the rake collects twigs and leaves!
To ease the understanding a little, imagine two signal components arriving at the mobile
unit as seen in the previous figure, with a delay Δt between them.


Notice how each Finger works:
The first with component g1 and time reference t;
The second with component g2, but with time reference t - Δt.



The Fingers are thus receivers that work independently, with the function of demodulating
the signal, ie receiving it and removing the RF carrier from the information.
The big idea behind combining multiple copies of the transmitted signal
to obtain a better signal is that, if we have multiple copies, probably at least one will be in
good condition, and we have a better chance of correct decoding!
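As a toy illustration of this combining idea only (not the actual CDMA/WCDMA despreading chain), here is a Python sketch that aligns two delayed copies of the same symbols and adds them:

# Toy sketch of the Rake idea: align delayed copies of the same signal and add them.
# This shows only the combining concept, not the real CDMA/WCDMA despreading chain.
import random

symbols = [1, -1, 1, 1, -1, -1, 1, -1]          # transmitted symbols (+1 / -1)

def multipath_copy(symbols, delay, gain, noise=0.3):
    """One finger's input: a delayed, attenuated, noisy copy of the signal."""
    padded = [0.0] * delay + [gain * s for s in symbols]
    return [x + random.uniform(-noise, noise) for x in padded]

copy1 = multipath_copy(symbols, delay=0, gain=1.0)   # direct path (g1)
copy2 = multipath_copy(symbols, delay=2, gain=0.6)   # reflected path (g2), delayed

# Each finger re-aligns its copy by its known delay and weights it by its gain,
# then the outputs are summed (a simple maximal-ratio-style combination).
combined = [1.0 * copy1[i] + 0.6 * copy2[i + 2] for i in range(len(symbols))]

decoded = [1 if x > 0 else -1 for x in combined]
print("sent    :", symbols)
print("decoded :", decoded)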

Key Benefits
The main advantage of the Rake Receiver is that it improves the SNR (or Eb/No). Naturally,
this improvement is larger in environments with many multipaths than in
environments without obstruction.
In simplified form: we have a better signal than we would have without using the Rake
Receiver! That is already a sufficient argument, isn't it?

Disadvantages and Limitations
The main disadvantage of the Rake Receiver is not necessarily technical, and it is not that
problematic. The disadvantage is primarily the cost of the receivers. When we
insert one more radio receiver, we need more space and we also increase complexity.
Consequently, we increase costs.
The greater the number of multipath components supported by the receiver, the more
complex the algorithm. As we always do here, we will not be deriving the formulas involved,
but the complexity increases almost exponentially.
And in the real world, the number of multipath components that arrive at the receiver is
quite large; there is no 'limit'. Everything depends on the environment.


The maximum number of fingers in a mobile unit is determined by each technology
standard, which for example in CDMA is 6, corresponding to the maximum number of
forward traffic channels that can be processed by the mobile unit at once (Active Set).
However, in cellular environments, most CDMA mobile units actually need only 3
demodulators (WCDMA uses 4). More than that would be a waste of resources, and an
additional cost to manufacture the phone.

Searcher
An important detail in CDMA and WCDMA systems is the use of one finger of the Rake
Receiver as a 'Searcher'. It is so called because of its function of searching for pilot signals being
transmitted by any base station (BS) in the system. These pilot signals can be understood as
beacons used to alert the mobile to the presence of a BS.
Thus, in the UMTS UE (User Equipment), we have a simplified view of the configuration of
the Rake Receiver, as below.



Fingers on BS and UE
To conclude, the number of Rake fingers used in the BS and in the UE is generally different.
That's because, as we saw, having more fingers increases the physical size of the receiver,
as well as its power requirements. This can be a problem for the UE, but not a problem
for the BS, since it is able to offer more space and power for new fingers. Only the
criterion of cost has to be taken into account in the BS.
Anyway, the only critical issue is with the UE. But the current three/four fingers ensure excellent
gains, proven in practice (CDMA/WCDMA).

Conclusion
We saw today that the Rake receiver is used in CDMA and WCDMA as an efficient way of
receiving multipath signals, where several receivers are able to reconstruct the signal from
components with different timing, amplitude and phase.





What is Frequency Hopping - FHSS?
We can define Frequency Hopping as a communication scheme between a transmitter and a
receiver. Several concepts are involved, such as spread spectrum modulation and switching
frequencies according to a known pattern.


Frequency Hopping (FH) is widely used, for example, in GSM networks, so let's
understand this subject a little better today.


First, a brief history
But before we talk about FH: who invented it?
Several people compete for this title, such as the German Johannes Zenneck in 1908, through
his company Telefunken. He shares the title with a Polish inventor, who also exposed the
idea.
But I particularly like the exotic version, where the beautiful actress (yes) Hedy Lamarr, along
with her neighbor, the composer George Antheil, were also responsible, during the Second World War.
Hedy was married to a German arms manufacturer. Issues like security, especially how to
send messages using a signal that could not be jammed by the enemy, naturally
came to her mind.
If you send a torpedo controlled by a continuous signal, the signal can be identified by the
enemy, who in turn can insert strong noise and shoot it down.
Once, at the piano, the composer George played a note, and Hedy repeated it on another scale.
It was then that she realized it was possible to establish a communication by changing the
communication channel; for this, they only had to make the change at the same time, ie
following a pattern known to both.
Bringing this idea to the torpedo, it sufficed for the transmitter in the vessel and the receiver in
the torpedo to change (or hop) from one frequency to another in a synchronized manner.
That is, the receiver in the torpedo only needs to know the positions to which the
transmitter frequency will jump!
And if any of these frequencies suffers interference? Well, we still have the other channels
in the sequence of jumps, from which the information can be retrieved!
Like any great invention, note that the idea is simple.

Definition
Well, after this brief history, I hope you have understood the idea behind FH.
After the initial contribution of the inventors, the idea was perfected, and it is now used in various
systems, such as GSM, as already mentioned.
FH mainly has the purpose of avoiding interference, and we'll see how it achieves this.
In FH, the information is spread over a bandwidth much larger than is required for its
transmission. For this, the band is divided into several channels of lower bandwidth.
Knowing the sequence of hops that must be followed, the receiver and transmitter jump
through these channels.


This is a pseudo-random sequence, and that is what also makes FH secure, since
unwanted receivers cannot intercept the signal, because they do not know the sequence.
The only thing they see is noise of short duration.
For each application, with its full frequency range, the bandwidth, the number of hops,
and the maximum average time that each frequency may be occupied are defined.
We should also stress that FH does not need contiguous bands. In scenarios where the
available bandwidth is limited and not contiguous, the spectrum can be better used.
(Actually, this is more a feature of the comparison between narrowband and wideband systems.)



FH in GSM
Speaking specifically about GSM now, we'll finally understand how the interference is
avoided.
To do so, as always, let us take the example of a network with 10 MHz of bandwidth. As the
GSM channel is 200 kHz wide, we have 50 channels available. Remember that each GSM
channel has 8 timeslots, so considering Full Rate we have 8 users per channel.
It may seem like many channels but, believe me, a major planning challenge is to spread
these channels over a GSM network while avoiding interference problems. To illustrate, suppose a
network with 100 BTSs of 3 sectors each: we have 300 sectors, and only 50 frequencies.
Naturally, the channels must be reused, which inevitably results in the same channels being used
in different sectors.
And there we have co-channel interference, a major problem to be solved, especially in
dense GSM networks.
Besides the co-channel problem, we also have the multipath problem, compounded
by the fact that the GSM band is narrow. A signal can leave the transmitter and, due to
obstacles, be reflected in a way that it eventually interferes with the original signal that
arrives at the receiver, since the reflected signal is out of phase, because it had to 'travel' more.
And it is especially in these cases that FH helps us.
For clarity, consider a sector with channels A and B. It is unlikely that all slots of all channels are in
use all the time. Even if a particular slot of channel A is also in use in another sector - co-
channel interference - chances are that a slot of channel B is not! That's what FH
does: it changes the frequencies and slots of the call!


Thus, each user runs a much lower risk of suffering co-channel interference.
In other words, a channel can be suffering interference, but we have other channels in the
sequence of hops that may have no interference! When the network uses FH, and moves our
call from slot to slot, and from frequency to frequency, the interference turns into a random effect.
We still have, as mentioned, the multipath problems. And the idea is basically the same. By
jumping from one frequency to another, the user suffers the effects of the multipath problem only
for very small periods of time. (Remember we use narrow bands!)
In both cases, whether co-channel interference or multipath fading, there are also the error
correction algorithms, which can then clean up and recover the original signal much more
efficiently.

FH Basic Algorithm
Finally, let's see a simplified diagram showing the steps involved in establishing a
communication using FH.


First, the transmitter sends a request (1) to start the FH through the control channel. The
receiver, after receiving this request, sends a base number (2) back. The transmitter then
uses that number, calculates and sends the series of frequencies (3) to be used. With this
list of frequencies, the receiver returns a synchronization signal (4) on the first frequency of
the list. Thus, communication between the two is established (5).
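As a toy illustration of these steps (not the actual algorithm of GSM or any other system), here is a Python sketch where the base number exchanged in step (2) seeds a generator, so that both ends derive the same hop list:

# Toy sketch of the FH setup described above: the base number from step (2) seeds a
# pseudo-random generator, so transmitter and receiver derive the same frequency
# list (3) and then hop together (5). This only illustrates the idea; it is not the
# actual algorithm of GSM or any other system.
import random

available_channels = list(range(1, 11))   # for example, 10 channels in the band

def hop_sequence(base_number, n_hops):
    rng = random.Random(base_number)      # same seed -> same sequence at both ends
    return [rng.choice(available_channels) for _ in range(n_hops)]

base_number = 42                          # sent by the receiver in step (2)
tx_sequence = hop_sequence(base_number, 8)
rx_sequence = hop_sequence(base_number, 8)

print("TX hops:", tx_sequence)
print("RX hops:", rx_sequence)
print("Synchronized:", tx_sequence == rx_sequence)   # True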


Disadvantages
And what are the disadvantages of FH?
Like any spread spectrum communication, we need a wider band than would be necessary
if only a single frequency were used to carry the signal.
Furthermore, whenever a communication is established, it takes a significant time to
establish synchronization between the receiver and the transmitter.
Anyway, the advantages outweigh these points.

Conclusion
We now know the simple idea of frequency hopping, a spread-spectrum modulation scheme
where it is possible to establish a communication over a single logical channel, based upon
synchronized changes (hops) of frequency, following a pseudo-random
sequence known by both ends.
As a result, using FH we have a signal that is more robust - interference resistant - and more
secure, being very difficult to intercept.




















What is Modulation?
When we think about communication in a telecommunications system, the first thing that
comes to mind is someone talking to another person.



Although it may seem simple, the transmission and reception of information is quite
complex, considering the many possibilities and scenarios in which it may occur.
And one of the main schemes involved is modulation. So let's try to understand what it is
today.
Note: Our goal here is to be as simple and straightforward as possible. For example, we will not
reach the level of demonstrating theorems such as Nyquist and Shannon, which are involved in the
subject. Reading about them, however, is recommended later, or if you have more interest in the
subject. Anyway, we will try to pass on the ideas and concepts. Later, you can extend your studies,
if you wish, much more easily.

What is modulation?
Let's start with the basic function of any communication system: transmitting information
from one location to another.
Said like that, it seems a simple process... but it is not!
To try to identify the many concepts and processes involved, let us consider a
communication between two people.
If these people are close, one speaks and the other listens.


We can already observe some basic concepts.
The information power, here the sound of the voice, is given by the lung capacity of each person,
who can whisper, talk or scream.
The transmission medium of this information is the air, or free space.
Whoever speaks is the transmitter, and whoever listens is the receiver.

If these people are far away, then the communication needs other means, such as a
telephone line or a radio frequency channel.


Note that now we introduce new devices, besides other techniques, to allow the
original data - in this case the voice - to be processed so as to reach the other person.
The information coming out of the transmitter needs to be changed (modulated) and then be
transmitted. At the receiver, the reverse process must be done, ie the demodulation of the
information, converting it back to the original information.

More concepts...
Modulation: changing the characteristics of the signal being transmitted.
Demodulation: the reverse process of modulation.

So far so good? So let's continue...
Our voice, as well as most of the sounds found in nature, is analog. There are even purely
analog transmitters, such as AM and FM broadcasting. But let us not worry about that;
almost everything today is digital.
Before our voice can be transmitted, it must be converted. For this, there are digital devices
that convert the analog voice through a process of sampling and quantization.



The analog signal is first sampled, and then quantized into levels. Each of these levels is
then converted to a binary number.
Below, we see an analog signal (blue) with its equivalent digital signal (red). Using only two
levels, we have:


If we use 4 levels, we have the following:


A specific type of modulation, PCM - Pulse Code Modulation - is the method used to convert
the voice signal into a digital signal, and is generally used in telephony. Between the maximum
and minimum signal amplitudes, 16 levels are set (0 to 15), and these are encoded as binary
numbers (0000, 0001, ..., 1111).


For our voice, an effective bandwidth is considered - a lower limit of 300 Hz, and an upper limit
between 3500 and 4000 Hz. The sampling rate is 8000 samples/s.
We then have a stream of 64 kb/s: 8000 samples x 1 byte = 64,000 bits/second (64 kbps).
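As a minimal illustration of this sampling and quantization step, here is a Python sketch using the 8000 samples/s and the 16 levels mentioned above:

# Minimal sketch of sampling and quantization, using the figures from the text:
# 8000 samples per second and 16 quantization levels (4 bits per sample here;
# real telephony PCM uses 8 bits per sample, which gives the 64 kbps stream).
import math

SAMPLE_RATE = 8000      # samples per second
LEVELS = 16             # quantization levels (0..15)

def quantize(value, levels=LEVELS):
    """Map a value in [-1, 1] to one of the quantization levels."""
    level = int((value + 1) / 2 * (levels - 1) + 0.5)
    return min(max(level, 0), levels - 1)

# Sample one period of a 1 kHz tone and quantize it.
tone_hz = 1000
samples = [math.sin(2 * math.pi * tone_hz * n / SAMPLE_RATE) for n in range(8)]
codes = [format(quantize(s), "04b") for s in samples]
print(codes)   # ['1000', '1101', '1111', '1101', '1000', '0010', '0000', '0010']

print(SAMPLE_RATE * 8, "bits per second")   # 8000 samples x 8 bits = 64000 bps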

Okay, at this point we have the signal of our voice digitally represented by binary
numbers.
Now, what about the other digital modulation techniques?
Digital modulation has advantages over analog. For example, it is much easier to recover
the signal, because we avoid the accumulation of noise and distortion - compared to
analog modulation (in cases of multiple modulations/remodulations).
Furthermore, streams of digital bits are much better suited to the various multiplexing
schemes.


But while the benefits are large, digital modulation also has its disadvantages. The main one
is that it requires more bandwidth than analog methods.
And here come the techniques developed to minimize this problem.
Digital signal compression: reducing the number of bits needed to carry the same
information.
The use of advanced modulation techniques: increasing the number of bits carried per Hertz of
bandwidth - QPSK, OQPSK, GMSK, etc.

So let's talk a little about these modulation techniques.
First, let's get used to the characteristics of the RF signal that modulation can change. They are
basically of three types:
Frequency
Amplitude
Phase
The following figure helps to understand this, where we see a reference signal - the first -
and the corresponding modulations altering its frequency (1), amplitude (2) and phase
(3).


All of these techniques alter a parameter of the sinusoidal signal to somehow represent the
information we have.
Let us now make one more little analogy that will help us establish the concepts of
modulation.
Imagine a person, at night, in an apartment like the one shown below, with two windows.
Suppose further that this person wants to communicate with his girlfriend, far away.


This person has agreed with his girlfriend that if he turns on the light on the right, it means 1.
If he turns on the light on the left, it means 0.
We say that this signal has 1 dimension, because the person uses only one dimension
(going from one side to the other) to indicate a change of symbol.
Each time a light goes on, we have a symbol. (Since we have two windows, we have two symbols.
In this case each symbol represents one bit.)
Congratulations, you just learned your first phase modulation technique: BPSK!

BPSK
BPSK means Binary Phase Shift Keying modulation.
This modulation uses a sinusoidal signal and varies its phase to transmit the information. In
our example, turning on the light of one window or the other.
Each symbol is indicated by a change of position. In BPSK, this is signaled by changing the
phase of the sinusoidal signal: one symbol with a phase of 0 and the other with 180 degrees.
So, did you get it?
Bringing in the xy axes, the BPSK signal will have only the x-axis. Seeing our signal as a vector,
it is as if it switched back and forth along that axis.
Indeed, the axes here are no longer called xy, but IQ. The letter I means 'In Phase'
with the carrier signal, and the letter Q means 'Quadrature' (or perpendicular). Then the
figure below represents the BPSK modulation.


As an example, see how the bit sequence 0110 is transmitted using BPSK modulation.
Note: For demonstration purposes, we use a carrier frequency of 1 Hz, where it is easy to see the
variations. In practice, this frequency is much higher, but then the figure would not show what
we want.

QPSK
Now let's return to the example of the lovers.
Suppose now that the boyfriend has moved to a different apartment, as shown below.
Now see that they agreed on a new code for each lamp that is lit. In other words, each symbol
carries two bits. For example, if he lights up the right window on the top floor, it means 11.


You have probably already made the analogy with the xy axes, or rather IQ:
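Following the same analogy, here is a minimal Python sketch of a QPSK mapping, where each pair of bits selects one of four points on the IQ plane (the bit-to-corner assignment below is only an illustrative convention, like the lovers' code):

# Minimal sketch of QPSK: each symbol carries two bits, selecting one of the
# four 'windows' - ie one of four points on the IQ plane. The bit-to-point
# assignment below is just an illustrative convention, like the lovers' code.
QPSK_MAP = {
    (1, 1): ( 1,  1),   # top right
    (0, 1): (-1,  1),   # top left
    (0, 0): (-1, -1),   # bottom left
    (1, 0): ( 1, -1),   # bottom right
}

def qpsk_modulate(bits):
    """Group the bits in pairs and map each pair to an (I, Q) point."""
    pairs = zip(bits[0::2], bits[1::2])
    return [QPSK_MAP[pair] for pair in pairs]

print(qpsk_modulate([1, 1, 0, 0, 0, 1, 1, 0]))
# [(1, 1), (-1, -1), (-1, 1), (1, -1)]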


OQPSK
Offset QPSK is a variation of QPSK where only one channel, I or Q, can change at a time. The
goal is to offer better performance in some applications, with a lower bit error rate. The
signal is more 'friendly' to the transmitter.
In the case of the lovers, the boy who was at the right window of the upper floor could only go
down or sideways.


In the diagram below you can see the transition possibilities of OQPSK and QPSK.


FSK
In FSK - Frequency Shift Keying - the carrier signal always has the same amplitude, and never
suffers discontinuous phase changes. The signal is switched between two frequencies,
according to the value of the bits.

This type of signal is called constant-envelope, and it suffers less distortion in applications
with high-power amplifiers.

MSK
Minimum Shift Keying is a particular type of FSK in which the peak frequency deviation
is equal to half the bit rate.


This minimum frequency separation allows the detection of two orthogonal binary states.
This type of modulation has many advantages. It has improved spectral efficiency,
compared to other PSK modulation schemes. It is constant-envelope - as we said -
and suffers less distortion in applications with high-power amplifiers. For mobile phones,
this contributes to lower battery consumption - a good thing, isn't it?


GMSK
GMSK is basically the MSK signal applied to a Gaussian filter, which slows down the
rapid frequency transitions that would otherwise spread energy into adjacent
channels. With this, the spectrum of the modulation turns out to be even narrower.


Well, we have only shown some of the existing modulation schemes. Anyway, these are the key
ones, and our initial objective - the concept of modulation - has been covered.

Other Modulation Schemes
Although complex modulation schemes are able to pack large amounts of data into a
relatively small bandwidth, they are much more vulnerable to noise and distortion during
transmission.
Other important modulation schemes are:
π/4 DQPSK
8PSK (like 2 x QPSK - 000 ... 111)
16 QAM
64 QAM
A more detailed explanation of these types is left for another opportunity, because we
have already extended ourselves too much for today.
Finally, here is a table with some comparisons of modulation schemes.



Conclusion
This was a brief explanation of modulation, the scheme used to change the characteristics of the
signal being transmitted, allowing for greater efficiency in this process.





What is RF Drive Test (Testing)?
Every good RF design should be evaluated after its implementation. There are a few ways to do
this, for example through the analysis of KPIs (Key Performance Indicators) or through signal and
interference prediction tools. Another very common and efficient way to evaluate the network
is to conduct a Drive Test. But what is that?


The name is intuitive: a test performed while driving. The Drive Test is a test performed in cellular
networks regardless of the technology (GSM, CDMA, UMTS, LTE, etc.). It means collecting data
while moving in a vehicle. Its variation is also intuitive: the Walk Test, ie collecting data by walking
through the areas of interest.
The analysis of drive tests is fundamental to the work of any professional in the field of IT
and Telecom, and comprises two phases: data collection and data analysis.
Although through the analysis of KPIs we can identify problems such as dropped calls,
among others, drive tests allow a deeper analysis in the field: identifying the coverage area of each
sector, interference, the evaluation of network changes and various other
parameters.
So let's learn more about this technique, and what we can do with it.

What is a Drive Test?
Drive Test, as already mentioned, is the procedure of performing a test while driving. The
vehicle does not really matter; you can do a drive test using a motorcycle or a bicycle. What
matters is the hardware and software used in the test:
A notebook - or other similar device (1)
with the collection software installed (2),
a security key - dongle - common to these types of software (3),
at least one mobile phone (4),
one GPS (5),
and, optionally, a scanner (6).
It is also common to use adapters and / or hubs that allow the correct interconnection of
all the equipment.
The following is a schematic of the standard connections.


The main goal is to collect the test data, but the data can also be viewed / analyzed in real time (live)
during the test, allowing a view of the network performance in the field. Data from all units are
grouped by the collection software and stored in one or more output files (1).


GPS: collects the latitude and longitude of each point / measurement, plus time, speed,
etc. It is also useful as a guide for following the correct routes.
MS: collects mobile data, such as signal strength, best server, etc.
SCANNER: collects data from the whole network, since the mobile radio is limited and does not
handle all the data necessary for a more complete analysis.
The minimum required to conduct a drive test, simplifying, is a mobile device with
software to collect data, and a GPS. Currently, there are already cell phones that do
everything: they have a GPS, as well as specific collection software. They are very
practical, but still quite expensive.

Drive Test Routes
Drive Test routes are the first thing to be defined, and indicate where the testing will occur. This area
is defined based on several factors, mainly related to the purpose of the test.
The routes are predefined in the office.
A program that helps a lot in this area is Google Earth. A good practice is to trace the route
on it using paths or polygons. The final image can then be given to the
driver.


Some software allows the image to be loaded as the software background (geo-referenced).
This makes it much easier to follow the planned routes.
It is advisable to check traffic conditions when tracing out the exact paths through which
the driver must pass. It is clear that the movement of vehicles is always subject to
unforeseen events, such as congestion, closed roads, etc. Therefore, one should
always have at hand - or know - alternate routes to be taken on these occasions.
Avoid driving the same roads multiple times during a Drive Test (use Pause if needed).
A route with several passes over the same roads is more difficult to interpret.

Drive Test Schedule
Again depending on the purpose, the test can be performed at different times - day or
night.


A Drive Test during the day shows the actual condition of the network - especially in relation
to its load. On the other hand, a drive test conducted at night allows you to make, for
example, tests on transmitters without affecting most users.
Nightly drive tests typically take place in activities such as System Design, for example with
the integration of new sites. Daytime drive tests apply to Performance Analysis and also to
Maintenance.
Important: regardless of the time, always check with the responsible area which sites have
alarms or are even out of service. Otherwise, your work may be in vain.

Types of Calls
The Drive Test is performed according to the need, and the types of test calls are those
that the network supports - calls can be voice, data, video, etc. Everything depends on the
technology (GSM, CDMA, UMTS, etc.) and the purpose of the test, as always.
A typical Drive Test uses two phones. One mobile performs calls (CALL) to a specific number
from time to time, configured in the collection software. The other stays in free or IDLE
mode, ie connected, but not on a call. With this, we collect specific data in IDLE and CALL
modes for the network.
The test calls (CALL) can be of two types: long or short duration.
Short calls should last about the average of a user call - a good reference value is 180 seconds.
They serve to check whether calls are being established and successfully completed (and are also a
good way to check the network setup time).
Long calls serve to verify whether the handovers (continuity between the cells) of the network are
working, ie the calls must not drop.

Types of Drive Test
The main types of Drive Test are:
Performance Analysis
Integration of New Sites and parameter changes of Existing Sites
Marketing
Benchmarking


Tests for Performance Analysis are the most common, and are usually made in clusters
(groupings of cells), ie an area with some sites of interest. They can also be performed in
specific situations, such as answering a customer complaint.
In the integration testing of new sites, it is recommended to perform two tests: one with the
site without handover permission - not being able to hand over to another site - thus
obtaining a complete view of the coverage area. The other, later, with normal handover,
which is the final state of the site.
Depending on the type of alteration of a site (if there is any change in EIRP), both tests are also
recommended. Otherwise, just perform the normal test.
Marketing tests are usually requested by the marketing area of the company, for example
showing the coverage along a highway, or at a specific region/location.
Benchmarking tests aim to compare competing networks. If the result is better, it can be
used as an argument for new sales. If worse, it shows the points where the network should
be improved.

Data Collecting (almost) Flawless
Anyone who has done a Drive Test before already knows this: it looks as if Murphy sits in the back
seat. That's because a lot of problems - mostly preventable - always end up happening.


To avoid, or at least minimize, the occurrence of these problems, always go through a checklist
before starting the Drive Test.
It is very frustrating to drive a route, only to realize at the end that the data was not
collected properly.
So before you start, check all the connections, always! Mainly, make sure that all the equipment is
properly powered. You do not want to see a low battery warning on a busy road, do you?
When we say check, this includes making sure that the connections are tight and will not come loose
with the vehicle's movements.
Also make sure the equipment is tied down, or you will see a flying laptop in case you need to
brake suddenly.
When assembling the equipment, maintain a distance of at least a foot between each
antenna, thereby ensuring that there is no electromagnetic interference or distortion of the
radiation pattern of an antenna that could affect the measurements.
Having made sure that all the equipment involved is tied down and connected to the power source,
verify now that everything has been identified by the collection software. This must be done using
the program interface, which shows each element and the port to which it is connected.
Now, with the equipment identified, make sure the GPS has acquired the satellites it needs to
determine its position. You must be in an open area with a clear view of the satellites. It is advisable
to configure the software to do the collection in Degrees, Minutes and Seconds. Familiarize
yourself with the concept.
Another point that should be taken into account relates to the GPS antenna. It should
generally stay in one place on the vehicle roof, where it can see the sky. If it is not
waterproof, it is necessary to protect it with plastic in case it rains.
If everything is OK with the GPS, start a test collection to verify that all the data is being written.
In the program's main window, there is an indicator that tells whether the data is being recorded.
Also check that the parameters of the network are visible in the window of each device -
mobile, GPS, scanner, etc. Some software also offers the option to visualize this data
without saving it. It is very important to make sure everything is OK before you start.
And now, finally, but certainly most important, remember that: first, you're driving!


It is recommended, whenever possible, to have one person driving the vehicle and another operating the
equipment. If this is not possible, always start, stop or make changes to a collection at a
safe point of the road.
And of course: always check the condition of the vehicle, and always wear your seat belt!

Annotations
Most software offers the facility to add notes (Markers) during the Drive Test. Whether
through the software or on a piece of paper, always take notes.


Information related to the test should be recorded to aid the analysis later. For
example, the weather conditions (rain), any very large obstacle in the area, possible
sources of noise, etc.


And what is collected?
Okay, but what is actually collected?
Well, before that, we must ensure that the data can actually be recorded. Remember that we are
using a notebook, which is naturally subject to freezes and crashes.
And if that happens, what to do? Unfortunately, there is not much to do besides restarting the equipment.
But some practices can minimize these errors.
A typical Drive Test file covers from 30 minutes to an hour of collection. Of course, everything
depends on the size of the file, which in turn depends on the amount of information being recorded.
Very large files run a higher risk of being corrupted - especially in case of a notebook
malfunction - and are harder to move, load, and even analyze.
Always leave a few GB free on your HD (Hard Drive) before beginning any data collection.
And have at least the minimum amount of RAM specified - required - by the collection software.
Another important thing: do not open or use other programs while you are collecting data,
only when strictly necessary.
Drive Test files are always big, and you are always moving them around. So keep a routine -
weekly is appropriate - of defragmenting the hard drive and scanning it for errors.
Whenever you finish the collection, stop the ongoing calls first, and only then stop collecting.
Otherwise, these calls may be erroneously interpreted as drops.


Now, yes: with the data collected, we can talk about it. And, as usual, it depends on
the equipment used and the purpose of the Drive Test.
In the case of mobiles, all the messages exchanged between the sites and the mobile are collected,
with all layers of information - even if you don't use most of it. That's because, in the
most critical cases, such data can be sent to better equipped laboratories for deeper analysis.
If a scanner is used, we also get information from sites that were not "seen" by the mobiles.
Of course, everything is configurable, but it's always good to use the default settings and
record everything that is possible.
All information is stored with its respective Date and Time, as well as its geographical
position.
A typical example of data output is shown below.


Equipment and Collection Software
We have spoken enough of them. And what are the equipment and collection software
recommended?
Well, that question is not easy. Let's make an analogy: which car will you buy
next year?
Got it? You'll have to check what you need, what is available in the market, and the best
cost-benefit. You may even decide to keep walking.
It is the same with the equipment and software for Drive Test collection and post-processing.
You should verify whether it is compatible with your network, what the costs and differential
benefits are and, not least, the support!
Remember that new tools and features are constantly emerging. Keep up to date on this
subject.
Note: we could have listed here some equipment and software, for example, the one we
use. But we prefer not to quote any of them, to avoid the risk of eventually being somewhat
unfair.


But anyway, whatever the equipment, software and procedures used, the end result is
always the same: reports and output files.
The vast majority of collection software comes with companion software that also performs the
analysis. These are called post-processing software. Each post-processing software has its
specific analyses, and as the amount of data (measurements) collected is huge, they can be of
great help in solving very specific problems. These tools present the data in tables, maps and
comparison charts that help decision making.
Regardless of the post-processing software, all of them offer the functionality to export data in
tabular form, as plain text or CSV.
This may be an attractive option, especially if you have your own tools, developed specifically for
your needs.
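As a minimal sketch of what such in-house processing could look like, the Python snippet below reads a hypothetical CSV export and summarizes the received level per serving cell. The file name and the column names (CellId, RxLev) are assumptions for illustration only - adjust them to whatever your tool actually exports.

# Minimal sketch: summarize a drive test CSV export per serving cell.
# File name and column names are hypothetical; adapt to your own export.
import csv
from collections import defaultdict

def summarize(path):
    stats = defaultdict(list)              # CellId -> list of RxLev samples
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                stats[row["CellId"]].append(float(row["RxLev"]))
            except (KeyError, ValueError):
                continue                   # skip malformed lines
    for cell, samples in sorted(stats.items()):
        avg = sum(samples) / len(samples)
        print(f"{cell}: {len(samples)} samples, avg RxLev = {avg:.1f} dBm, "
              f"min = {min(samples):.1f}, max = {max(samples):.1f}")

# summarize("drivetest_export.csv")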
The following are examples of drive test data processed by Hunter GE Drive Test tool,
created entirely in VBA.


One advantage of working with the data this way is that what matters is not how it was collected,
but its content. So, for example, even if one team has run half the route with one type of
software, and another drive test team has run the rest with a different software, we can still plot
the data from our network on a single desktop. That is where generic geo-referenced
analysis software comes in, such as MapInfo and Google Earth.
Another advantage is that the analyses available in MapInfo and Google Earth often get
more use, since they are familiar to most professionals, not just to those who specifically
do or analyze Drive Tests. This can also mean not having to purchase multiple post-processing
software licenses: only one, for the cases of deeper analysis.

Conclusion
Today we had an overview of Drive Test, a common and efficient technique for evaluating
the network.
Analyses made with the data collected in the field represent a true picture
of network conditions, and can be used for decision making in several areas, from planning
and design through optimization and maintenance of the system, always with the goal of
maximizing Quality, Capacity and Coverage in the Network.

















OSI 7 Layers Model
When we started this tutorial, the intention was to talk about wireless physical and logical
channels. In the course of the explanations, we realized it was necessary to first make an
introduction for beginners, or even a brief review for those who already know: The Layers of
the OSI Reference Model.


The OSI Reference Model (from now on we'll just call it the OSI Model for short) is a very
common term in Computer Networks and Wireless Systems, and understanding it can be
considered a requirement for anyone wishing to understand the subject well. Or, to put it
another way, if you fully understand the concepts of the OSI Layers, the concepts of
interoperability and integration between systems will be easier to understand - you'll be
able to see how a network works through the layers.
Anyway, this is a key topic for all Telecom and IT professionals, and it is highly recommended to
understand the concepts involved with the greatest possible clarity.
So come on, let's talk about the OSI layers in a simple way.

OSI Layers Model
We begin by defining the model. The terms OSI / ISO Layers are often used together.
And no, ISO is not the name of a guy who set up the OSI model.
ISO stands for International Organization for Standardization. This also reinforces what we always
say here: the advantages that standardization brings us.
Note: ISO is an organization that also standardizes other procedures, such as ISO
9000, implemented in enterprise processes to obtain quality certification.

And ISO created the OSI model. OSI stands for Open Systems Interconnection.
The model was originally designed to interconnect computers, but is now applied in several
other areas, like wireless. It is organized into seven layers, which can be grouped into
upper and lower layers.


The goal is clear: to standardize. By standardizing, processes are well defined, and this
organization allows for greater productivity and agility - which is what we are always after!

PDU and Protocols
But before we talk specifically about the layers, we must understand why we need them.
And for that, let's talk a little about packet switched networks, i.e. networks where small
units of data (packets) are sent to a destination address.
In these networks, these units are called PDUs - Protocol Data Units - and each one carries
an address.
And what is the advantage? Well, there are several. One is that we no longer need
dedicated, per-circuit connections. A connection using packets is more flexible, and this
type of packet traffic is what allowed, for example, the advent of the Internet!
And how do networks communicate?

For communication between people to take place, both must speak the same language.
Each one must also know when to talk and when to listen. Of course, communication using
different languages is also possible (e.g. between an American speaking English and someone
speaking Portuguese), but in that case there must be a translation.
For computer networks the idea is the same. And the languages of networks are called
communication protocols - sets of rules to follow for correctly sending and
receiving information. An example of a protocol that you might already use is the IP protocol!
Communication in packet networks uses different protocols, in sequence and at
different stages of the communication. We will see this soon.
Note: On another opportunity we will talk more about Protocols, and also about Circuit and
Packet switched connections.

Layering
Okay, so here we go: layers. Why this division?
The stages of a communication occur in layers. The division - the standardization - was made
with this in mind, mainly to facilitate development. A person can develop technologies for any one
layer, without having to worry about the others. Interesting, isn't it?
To begin to familiarize ourselves with them, here is the list: Physical (1), Data Link (2),
Network (3), Transport (4), Session (5), Presentation (6) and Application (7).

But to explain further, it is easier to first make an analogy.
Imagine the following situation, where William, in New York - United States, sends a letter to Manuel,
in Lisbon, Portugal's capital.


Let's start from the transmitter side: William. The first thing William needs to do is
compose the letter, together with Manuel's address [7-Application].

William has an injured hand and cannot write. So he dictates the contents of the
letter to his wife Rose, who writes it down for Manuel [6-Presentation].

William's wife then puts the letter in an envelope, goes to the post office, and posts it
[5-Session].

The post office in the United States then decides to outsource the service. It asks a
third-party logistics company - FedEx - to carry the envelope, and FedEx puts everything in a
secure envelope of its own [4-Transport].

The dispatch of the envelope is now up to the logistics company, which decides that the
quickest route is via Lisbon Airport - by air. So it puts the envelope in another one with
its own addressing information, and hands it to the airline [3-Network].

The airline staff put the envelope in the company's box on the plane, adding a label
with the destination address [2-Data Link].

The box with the envelope makes the trip on the plane to Portugal [1-Physical].

Arriving in Portugal, the reverse process starts, i.e. the reception.

The box is unloaded from the plane, the envelope is removed from it
and delivered to an agent in charge of directing the envelope to its destination,
which is the FedEx office in Lisbon [2-Data Link].

Delivery of the envelope, as we know, is now with FedEx, which verifies that it
should be forwarded to the post office in Lisbon [3-Network].

An official of the post office in Portugal receives FedEx's envelope, removes the original
envelope from it, and delivers it to Manuel's address in Lisbon [4-Transport].

Mary, Manuel's wife, checks the mailbox and receives the original envelope with
the letter [5-Session].

She then reads its contents to him [6-Presentation].

Finally, Manuel learns the news from William [7-Application].


This was a very simplified example. We did not talk, for example, about the routing that may
occur, e.g. if the plane makes stopovers, requiring new mailing labels to be added and removed.
However, I believe it has served to demonstrate the idea.
Also, remember that no analogy is perfect, but it helps us understand the idea.
Now that we have made our analogy, let's get a bit more technical and talk a
little about each layer.
We will not follow the entire round trip - from the seventh layer down to the first and then back
up again. Let's go through it once, from the first layer to the last - the seventh.
Note: This could be a long subject, with the description of the devices, protocols and
applications used. Anyway, let's try to keep the description of the layers simple.

Layer 1 - Physical

The physical layer does not understand anything but bits: the signal reaches it in the form
of pulses and is transformed into 0's and 1's.
In the case of electrical signals, for example, if the signal has a negative voltage it is
identified as a 0, and if it has a positive voltage it is identified as a 1.

This layer defines the use of cables and connectors, as well as the signal type
(electrical pulses - coaxial; light pulses - optical fiber).
Function: receive the data and start the process (or the reverse, deliver the data and complete
the process).
Devices: Cables, Connectors, Hubs.
PDU: Bit.

Layer 2 - Link

Continuing the flow, the Data Link layer receives the data formatted by the physical layer - bits -
and treats them, converting them into its own data unit to be forwarded to the next layer.
An important concept, the physical address (MAC Address - Media Access Control), belongs to this
layer. The next layer (3-Network) handles the well-known IP address, but we'll talk
about that when discussing it.
Function: link data from one host to another, using the protocols defined for
each specific medium over which the data is sent.
Protocols: PPP, Ethernet, FDDI, ATM, Token Ring.
Devices: Switches, Network Cards, Interfaces.
PDU: Frame.

Layer 3 - Network

The frame then arrives at the Network Layer, responsible for routing the data. For this, it relies on
devices that identify the best possible path to follow, and which establish such routes.
This layer takes the physical MAC address (Layer 2-Data Link) and maps it to the logical
address (the IP address).
And what is the IP address? Well, the IP address is a logical address. When the Network layer
receives the data unit from the Data Link layer (the Frame, remember?), it turns it into its own PDU
carrying that logical address, which is used by routers, for example - in their routing tables and
algorithms - to find the best data paths. This data unit is now called a Packet.
Function: addressing, routing and defining the best possible routes.
Protocols: ICMP, IP, IPX, ARP, IPSEC.
Devices: Routers.
PDU: Packet.

Layer 4 - Transport

If all goes well, the packets arrive from Layer 3 (Network) with their logical addresses.
And like any good carrier, the Transport Layer must ensure quality in the delivery and receipt of
the data.
In turn, as with any transport, it must be managed. For this we have quality of service
(QoS - Quality of Service). This is a very important concept, and it is
used for example in Erlang B tables, remember? In simple terms, rules and actions
aim at ensuring the desired quality of service, based on error recovery and control of data
flows. But let's not lose focus here; just remember that QoS lives in the transport layer.
Function: to deal with all matters of transport, delivery and receipt of network data,
using QoS.
Protocols: TCP, UDP, SPX.
Devices: Routers.
PDU: now called a Segment.
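Before moving on, here is a minimal sketch (in Python, purely for illustration) of what the Transport layer looks like from a program's point of view: the code opens a TCP connection and exchanges application data, while segmentation, ordering and retransmission are handled by the TCP stack underneath. The host name and port are just example values.

# Sketch: using TCP (Layer 4) from an application (Layer 7) program.
# The application only sees a byte stream; segments, retransmissions and
# ordering are handled by the operating system's TCP implementation.
import socket

def tiny_http_get(host="example.com", port=80):
    with socket.create_connection((host, port), timeout=5) as s:   # Layer 4: TCP connection
        request = f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n"
        s.sendall(request.encode("ascii"))                         # Layer 7: application data
        reply = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            reply += chunk
        return reply.splitlines()[0]     # first line of the HTTP response

# print(tiny_http_get())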

NOTE - Lower Layers (Transport) and Upper Layers (Application)
Before talking about the last three layers, it is important to make an observation: it is
common for the layers of the OSI model to be grouped into upper and lower layers, as
said before.
The first four layers discussed above can be referred to as the Transport layers.
And the three layers we'll describe now are known as the Application layers (not to be
confused with the last layer, whose name is also Application).
A few general remarks here, just to avoid repeating the same descriptions for
each of them.
This is particularly the case for the PDU of these layers, which is now simply called Data
(unlike the Transport layers, where each one has its own PDU type: bit - frame -
packet - segment).
Furthermore, several protocols are used across all three layers, such as Telnet, DNS, HTTP, FTP,
SMTP. Only the layers that use specific protocols will have them indicated, but all the others can
be used as well.
Regarding the devices involved, it is more consistent to talk about the applications involved,
which are generally client programs such as Email, MSN, FTP, etc. Only in a few exceptions do we
have devices.
Okay, let's go back to the layers.

Layer 5 - Session

Following the layers, we have the Session layer. As the name suggests, this layer (5-
Session) is responsible for starting and ending the sessions used to communicate and exchange
data, for example by setting up the beginning and end of a connection between hosts, and also
managing that connection.
An important point here is the need for synchronization between the hosts; otherwise the
communication will be compromised, or even stop working.
This layer adds marks to the transmitted data. Thus, if the communication fails, it can be
restarted from the last valid mark received.
Function: start, manage and terminate sessions for the presentation layer, e.g. TCP sessions.

Layer 6 - Presentation

The presentation layer has the function of formatting the data, taking care of its
representation. This formatting includes compression and data encryption.
It is easier to understand this layer as the one that translates the data into a format that
the protocol in use can understand. We see this, for example, when the transmitter uses a
standard other than ASCII, and the characters have to be converted.
When two different networks need to communicate, it is the 6-Presentation layer that does the work.
For example, when a TCP/IP network needs to communicate with an IPX/SPX network, the
Presentation layer translates the data of each one, making the process possible.
Regarding compression, we can think of it like a file archiver - ZIP, RAR - where the
transmitter compresses the data in this layer, and the receiver decompresses it. This makes
the communication faster, because there is less data to be transmitted
(compressed).
And when there is a need for increased security, this layer applies some encryption scheme.
Remember that everything done on the transmission side (e.g. encryption) has its
corresponding opposite on the reception side (in this case, decryption).
Function: encryption, compression, formatting and presentation of data formats (e.g. JPEG,
GIF and MPEG) for the applications.
Protocols: SSL, TLS.
Devices: Gateways (translating protocols between different networks), Transceiver
(translating between optical and electrical signals - traveling in different cables).

Layer 7 - Application

In this layer we have the user interfaces to the data itself (email, file
transfer, etc). This is where data is sent and received by the users. The requests are made
by applications according to the protocols used.
Just like the physical layer, it stands at the edge of the model, so it also starts and ends the
whole process.
This is probably the layer you are most used to. You interact directly with it, for example,
when using a program to read or send email, or when communicating through instant messaging.
Function: provide the interface between end users and the communication programs.

All Layers Together
Okay so far?
We must wrap up, as we have already gone on long enough with this - short - description of
the layers of the OSI model.
As a final remark, realize that each layer always prepares (or formats) the data so that the next
layer can understand it. This occurs at every stage of a communication, in both directions.
To conclude, we have a figure that gives us a good idea of this process.

Another figure helps us see, in a little more detail, examples of the PDUs and
protocols at each step.
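Just to make this encapsulation idea more concrete, here is a toy sketch (not a real protocol stack) in which each layer adds its own header on the way down and removes it, in reverse order, on the way up. The header contents are invented for illustration only.

# Toy model of OSI-style encapsulation: each layer adds a header on the way
# down (data -> segment -> packet -> frame) and strips it on the way up.
LAYERS = [
    ("TRANSPORT", "src_port=5000;dst_port=80"),
    ("NETWORK",   "src_ip=10.0.0.1;dst_ip=10.0.0.2"),
    ("LINK",      "src_mac=AA:AA;dst_mac=BB:BB"),
]

def encapsulate(data: str) -> str:
    pdu = data                              # Layers 7-5: Data
    for name, header in LAYERS:
        pdu = f"[{name} {header}]{pdu}"     # add one header per layer
    return pdu                              # what the Physical layer sends as bits

def decapsulate(pdu: str) -> str:
    for name, _ in reversed(LAYERS):
        assert pdu.startswith(f"[{name} ")  # each layer checks its own header...
        pdu = pdu.split("]", 1)[1]          # ...and strips it before passing up
    return pdu

frame = encapsulate("GET /index.html")
print(frame)                 # the fully wrapped frame
print(decapsulate(frame))    # original application data recovered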

That's it, the OSI Model! We hope you liked the explanation!


Conclusion
Today we had an overview of the OSI Reference Model layers. It is a very broad subject,
because this model, as the name suggests, serves as a reference for various applications.
Our approach was only introductory, explaining the model in simplified form. However, this
base is very important, and it works both for those who just wanted to get an overview of how
the model is applied to communications, and for those who want to delve deeper into the
subject.







What is Antenna?
If we simply ask about the device, you'll surely know how to define what an antenna is, or at
least will have seen one. We also know that by changing its conditions or characteristics, for
example by pointing it, we can improve the communication link.


But if someone asked you to describe, technically speaking, what an antenna is, how would you
describe how it works?
That's what we'll talk about today.


Basics
Before we begin to define how an antenna works, we need to learn (or remember) some basic
concepts.
By understanding these concepts, it will be much easier to understand how antennas work.

Wavelength

Radio (electromagnetic) waves are physical quantities, of which we highlight the frequency. We
know they are not easy to visualize.
So let's make our first analogy: imagine a drop of water falling on the flat surface of a
bucket of water.

After the droplet hits the water at rest, we can see the waves that form.
In telecom we specifically describe sine wave patterns, where the wavelength is the distance
between two peaks.


Mathematically, the wavelength (λ) is defined as the speed at which the wave propagates (c)
divided by its frequency (f):
λ = c / f
wavelength (λ): represented by the Greek letter lambda;
speed (c): considering that our waves propagate in air, we can use the speed of light in
vacuum - c = 300,000,000 m/s (which may be written as 300M m/s);
frequency (f): the frequency of the signal we will be using.
For example, in a 900 MHz system, we have: λ = (300M m/s) / (900 MHz) = 0.3333... m,
or 33.33 cm.
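Just as a quick check of this arithmetic, here is a small Python helper; the 900, 1800 and 2100 MHz values are only examples, and the quarter-wave figure anticipates the dipole discussion further below.

# Wavelength = c / f, plus the classic quarter-wave dimension used in dipoles.
C = 3.0e8  # propagation speed in air, approximated by the speed of light (m/s)

def wavelength_m(freq_hz: float) -> float:
    return C / freq_hz

for f in (900e6, 1800e6, 2100e6):
    wl = wavelength_m(f)
    print(f"{f/1e6:.0f} MHz: wavelength = {wl*100:.2f} cm, "
          f"quarter wave = {wl/4*100:.2f} cm")
# 900 MHz -> 33.33 cm (quarter wave ~8.33 cm), as in the example above.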

Polarization

When we talk about electromagnetic waves, another important concept is polarization,
i.e. the plane of the electric component in which the wave propagates.
Ok, starting to get complicated? So let's try to explain it better.
Electromagnetic waves are composed of two planes, one for each of the electric and magnetic
fields. These components are always orthogonal, vectors offset by 90 degrees, and they vary
in phase - with zero degrees of electrical phase shift between them. The propagation direction
(also a vector) is in turn at 90 degrees to both fields.
The following figure helps us visualize these vectors.


So, depending on how the signal coupling is done - how the antenna is oriented - we have the
definition of polarization.
If the transmission is such that the wave is entirely in the vertical plane (electric plane
E), then we have Vertical polarization. If the wave is in the horizontal plane (magnetic plane
B), we have Horizontal polarization. There are other types of polarization, such as Cross
polarization and Circular polarization (right- and left-hand), which are actually combinations of
vertical and horizontal polarizations, together with phase differences.
The concept of polarization is very important for antennas, mainly because a signal
transmitted in one polarization must be received in the same polarization; otherwise we will
have an attenuation (loss), known as cross-polarization loss.
To better understand the polarization of waves, let's see some examples, in which we
highlight only the E component - the electric field. (Remember, though, that there is always a
magnetic field at 90 degrees to the electric field.)


And see how the wave (the electric component E) looks for Cross polarization - a
combination of vertical and horizontal polarizations, electrically in phase.

Let's stop here, our artistic ability (?!?) limits us! But a wave with Circular polarization
(electric component E) - a combination of two polarized waves, one vertical and one
horizontal, electrically out of phase by 90 degrees but with the same magnitude - would look
"more or less" as we have drawn below. Surely the real wave is at least less "shaky".

As examples of antennas with Circular polarization we have Helical antennas and Crossed Yagis
with Circular polarization (left- or right-hand), better known as RHCP (Right Hand Circular
Polarization) and LHCP (Left Hand Circular Polarization). We'll see more of their applications
in due course.

Antennas
Okay, after briefly introducing some basic concepts, let's talk about antennas.
By definition, an antenna is a device designed to transmit or receive electromagnetic
energy, matching these sources of energy to the surrounding space. Antennas are also often
called radiating systems. Note that the same device can be used to transmit or to receive.
Let's start by looking at a simplified representation of a system for transmission and
reception.

The original information is transformed, for example through some kind of modulation and
processing, and is then conveyed or guided by a cable to the antenna. The antenna radiates this
information through the medium (air) until it reaches the other antenna, which in
this case receives the signal and passes it, again via cable, to the device that
will do the demodulation (and other processing), recovering the original information.
Note: just as an example, we are not considering the existing losses.
Sure, but how does the antenna work? How does it radiate the information?
To understand this, we need a little atomic review!
Calm down, we'll only talk briefly about atoms: atoms are the smallest possible portion of any
chemical element. Everything that exists is made up of elements.
Put simply, atoms are formed by protons, electrons and neutrons. At the
core of the atom we have the neutrons and protons. The electrons keep moving around this
nucleus, like cars on a track in a crazy race.

An attraction (positive-negative) is what makes it possible for all the elements to exist.


But what does this have to do with the antenna?
Antennas are usually made of metallic materials (aluminum / brass). These metals
are formed by atoms. When all the atoms are brought together to form the metal, we
have a set of free electrons.
And when this set of free electrons is subjected to an electric voltage (electric field), they
begin to move and vibrate.
When the electrons vibrate from one side of the antenna to the other, they create electromagnetic
radiation in the form of radio waves.


Pause: are you getting how energy is radiated by the antenna?
Well, then you've got it all, because at the other end just the opposite happens.
The electromagnetic radio waves that leave the transmitting antenna travel through the
medium, e.g. air, and reach the other antenna - the receiving one. The effect of the
electromagnetic field reaching the other antenna is to make the free electrons in it vibrate -
which in turn generates an electric current corresponding to what was sent from the transmitting
antenna.

So now we can conclude: transmitting antennas convert electrical current
(electrons) into electromagnetic waves (photons), and receiving antennas do the reverse -
they convert electromagnetic waves (photons) into electrical current (electrons).
The information is preserved because the antenna acts as a transducer, matching the conductors
that generate these fields. For example, in the transmission, the electromagnetic field
corresponds to a specific voltage and alternating current. In the reception, the corresponding
voltage and alternating current are induced.


A Simple Antenna
Now consider the representation of the simplest type of antenna: a dipole antenna. As
the name suggests, it is an antenna with two poles.
It is an antenna that is easy to build: it consists of two pieces of wire of equal
length, separated from each other by a center insulator, and it may have an insulator on each
end to attach it to a support.
The figure below shows an example of a dipole antenna (insulators shown in red in the figure).

Let's use this example to talk about antennas, starting with a simple question
that many people can NOT explain:
"How can there be a current flowing in the antenna, if both parts are open? This goes totally
against what we learned: to have current, we need a closed circuit, don't we?"
To answer this, we again return to the familiar concepts of electrical circuits.
You must remember the concept of capacitance (C), defined through the use of capacitors.
And there is a kind of unavoidable capacitance that arises between components that are close
to each other in a circuit - and that is often unwanted: parasitic capacitance.
Only in our case, this capacitance is exactly what allows the antenna to work!


At high frequency, the parasitic capacitance between the two arms of the antenna has a low
impedance, and represents the current return path.
In short: a tuned antenna can be considered as an RLC circuit - with resistance (R),
inductance (L) and capacitance (C)!

Is it beginning to become clear?
Note: you may wonder: "And in the case of antennas with only one arm?" Don't worry, the
antenna will always find a reference plane to act as 'ground', such as a metal rod nearby.


From what was shown, we can say that every antenna requires two parts to radiate energy,
and that this energy is proportional to the dipole current.
Okay so far? After many pauses for further explanations, let's continue with a few more
concepts.

Resonance
Recalling what we have seen so far, the electric waves in antennas usually have a fixed
wavelength.
We also saw that an antenna can be considered as an RLC circuit, whose characteristics are
defined by the environment where the antenna is and by its physical
properties - especially its size.
Ready for another term? So here we go: Resonance!
In general, resonance is the phenomenon that occurs at a particular frequency where we
have the maximum possible transfer of energy.
In the case of antennas, for resonance to occur, the antenna's size (physical length) must match
a specific fraction or multiple of the wavelength - typically a quarter or half wavelength. In this
case, we will have a main frequency where the antenna delivers the maximum amount of energy
possible - the resonant frequency. And the larger the size (length) of the antenna elements, the
lower the resonant frequency.
In more technical terms, resonance occurs at the frequency where the inductive and
capacitive reactances cancel each other out - we have a purely resistive impedance.


Most antennas are used at their resonance frequency. That's because when we move away from
this resonance frequency, the reactance levels give rise to parameters that may jeopardize the
operation, for example the SWR, as explained in another tutorial. The impedance of the
antenna ceases to be purely resistive and becomes a complex impedance - in both senses of the
word - which gives it an unwanted behavior.
Of course, a non-resonant antenna also works - it transmits and receives. But it needs a
more powerful transmitter (because a smaller part of the input energy will be present at the
output). And, for the same reason, it needs a receiver with much higher sensitivity. In short:
the system efficiency will be much lower!

Wavelength X Length of Antenna
Just to finish for today, you should remember that we said that, for the antenna to resonate, its
physical size must match a fraction (or multiple) of the wavelength.
Let's try to understand why exactly this value. As always, let's recall a few more concepts...
Remember that in an electrical circuit - and we have mentioned that a tuned antenna
acts as an RLC circuit - the Voltage (Potential Difference):
in a Short Circuit is equal to Zero;
in an Open Circuit is Maximum.
Well, at the antenna's end we have an Open Circuit - so that is the point with the Highest Voltage.
And considering the two ends - one with the maximum positive voltage and the other with the
maximum negative voltage - we have the center point with Zero voltage.

This distance between the end and the central point - the distance between the point of
maximum voltage (yellow circle in the figure) and the point of zero voltage (green circle in the
figure) - is a quarter of a wavelength!


Properties and Types of Antennas
After our brief summary, focused mainly on how antennas work, we could proceed
with several other concepts, types of antennas, etc.
Some concepts - for example Impedance - were also mentioned, but were not described in detail.
But for today our tutorial has already gone on too long, and it is also very difficult to
absorb more knowledge than what was exposed here at once. So let's leave this supplement,
as well as the continuation of the antenna subject, for the next tutorials. Much remains to be
said, and many questions remain to be answered.
Hopefully you have managed to understand at least some of the basics of antennas.

Conclusion
Today we had a first approach to antennas, an undeniably important subject, and an essential
element for the good performance of any network.
As always, in a more informal way, we tried to present the explanations in a simplified manner,
since this subject is a foundation for other studies and further refinements as necessary.
New tutorials on the subject will be published in due course, always going a little
deeper.


What is Cellular Field Test Mode?
The evaluation of the network through the phone's screen is possible using the feature
known as a Field Test.


However, there is not much documentation available about this feature, so let's talk a bit
about it today.

Motivation
Allow IT and Telecom professionals to access network engineering information.


What is Field Test?
Field Test is also known as Test Mode, Engineering Mode, Net Monitor or some other similar
variation.
The mobile device is a receiver / transmitter that 'talks' with the Base Stations (BTSs)
through messaging. It receives and decodes information such as received signal level, control
channels, neighboring cells, etc.
All phones must necessarily 'know' that information in order to access the system, for example
to make a call or perform a handover.
The Field Test is the feature that displays these data on the mobile screen.


And is this feature present in all brands and models of phones?
All this information is 'useless' to the vast majority of users. Therefore, this functionality -
generally - is not 'open'.
However, the information is very important for professionals who know what it means.
For example, you see in the Field Test: RxPwr -90 dBm. What does this mean?

A professional will know that it means the received signal level is -90 dBm; still, other
parameters should be examined: neighbor conditions, the Link Budget (technology in use -
GSM, CDMA, UMTS, LTE...), outdoor or indoor losses, among others.
Knowing the serving cell information (control channels, voice, etc.), along with the information
from the neighbors, the professional can decide, for example, whether it is necessary to make
some kind of adjustment in the system.
The possibilities are endless. Again, however, they are useful only to professionals.

And how do I access the Field Test on my phone?
That depends. Think of the Field Test as a basic software application that shows data
on the mobile screen.
Almost all CDMA phones come with this Field Test 'software' installed. You just need to know the
sequence of keys to type, and you're ready. (On some models you need to access the
programming mode and enable this option.)
GSM phones, in general, do not come with the Field Test 'installed'. To access the
Field Test, you must first install software on the device.


Is that even legal?
This is where the questions begin about whether it is allowed or not. In general, if your mobile is
under warranty, the answer is no - you don't want to lose your warranty, do you?
Besides, there is the question of installing vendors' proprietary software - some
companies develop software for their own specific use - which is not available to the general
public. What happens is that some of these software tools leak.

Since the goal here is simply to inform, we'll give an explanation of everything that is
involved in these processes, so you can draw your own conclusions about what can be
done.

Change the Cell Phone Software
Changing the installed software - flashing - is the user's own choice, at his or her own risk. This
operation, as well as unlocking, is done by connecting the phone, via cable, to a
computer with specific software installed.
Usually these changes are made by technical assistance services, but they can also be made by
mobile hackers, especially in cases of unlocking devices, for example to allow another carrier's
SIM to be used, to open up Bluetooth, to put in wallpapers, etc. Some carriers provide their
phones fully unlocked, others prefer to lock them up. Therefore, each case must be examined
depending on the rules of the country, the carrier's rules for unlocking the equipment, etc.
Speaking more technically, the cell phone has a special memory called EEPROM. This memory is
where the phone's information resides, and it is read by the phone whenever necessary. That is,
just as the motherboard of a computer has a memory where it stores its operating
system, this memory stores the operating system that manages the entire cell phone. As
most of them have read / write access, it is possible to erase the operating system and switch to
another / better one - an update. Or simply add what is needed.
In summary, the handset comes with some functions / features 'locked', but if it is 'flashed',
these functions can be unlocked.
However, as mentioned, some of these operations are not legal. For example, sounds,
storage space (KB) and games can be unlocked. But if your carrier does not permit this, we have
an illegal procedure.
Remember that there are other dangers: a badly flashed phone may stop working, or at
least have to be taken to Technical Assistance for repair - which will generate
costs!
There are numerous websites that assist in this task, all warning about the possible risks of
damage that may occur. This is not the focus of our site, so it is up to you to look for more on
this subject, for example by searching on Google.
Again, we repeat: these changes are not so simple, and should be done at your own risk -
including legal risk.
Of course, there is also the installation of legitimate software supplied by vendors to operators,
especially in cases of testing for the approval of new devices.


And there are also the software packages provided by the vendors themselves, the so-called
Software Updates, e.g. for the user to solve some problem with the device (bugs). In such cases,
it is always worth following the steps below.
Make a complete backup of the device;
Have the device's battery fully charged (essential);
Choose the appropriate update for your device;
And be patient: it can take a long time for the update to be applied.

Android
But now a new scenario is emerging, creating a new way of seeing cell phone
applications - including the Field Test.
Much of this revolution is due to the new operating system for mobile phones from Google:
Android.
That's because Android allows anyone to develop applications that use the information available
on the mobile. The platform includes an SDK - Software Development Kit,
external libraries, applications, hosting services and APIs.
Therefore, to have a Field Test application on a device running Android, we just need to
develop an application that manipulates the information the phone already has - as if it were
variables in the program code - and presents it in an interface on the device screen.
When someone develops a new application, it can be submitted to the Android Market, becoming
available for everyone who uses Android to install. There are free and paid applications:
https://market.android.com/apps/TOOLS

Example of Field Test Like Application with Android

There are many applications already built around the concept of the old Field Test. Certainly
many others will also be created, and increasingly we will have real 'Drive Test Software'.
Just to illustrate, let's look at one of these programs, the RF Signal Tracker.
And the best way to do so is by looking at some screenshots of the program running.
Below, we see a standard screen, much like the Field Test we're used to on old handsets.

But now it gets better. As almost every such device has a GPS inside, you can do a real online
drive test, including showing the best server! See another RF Signal Tracker screen.

And the options do not stop there. You have options to save your data and then export it for a
more complete analysis on your computer.

And then: are you still satisfied with the old Field Test screen? Well, it's not hard to see
that its days are numbered, isn't it?
Conclusion
This was a brief tutorial with information about the Field Test of cell phones, a program
that displays the network's main information on the phone screen, allowing the responsible
professional to take the necessary measures.
As always, we hope you enjoyed it.


What is Ec/Io (and Eb/No)?
If someone asks you "Which signal level gives good call quality: -80 dBm or -90 dBm?",
beware: if you answer too quickly, you might get it wrong. That's because the correct
answer is... it depends! Signal Strength is a very important and essential measure for
any technology (GSM, CDMA, UMTS, LTE, etc.). However, it is not the only one: today let's talk a
little about another quantity, equally important: the Signal to Noise Ratio.


Although this ratio is of fundamental importance to any cellular system, it is not well
understood by many professionals. On the other hand, professionals with a good
understanding of this ratio are able, for example, to correctly assess RF links, and also to
perform more extensive optimizations, obtaining the best possible performance from the
system.
So, let's see a little about it?

Eb and No
To begin, let's define the basic concepts of Eb and No. They are fundamental for any digital
communication system, and we generally talk about them when dealing with Bit Error Rate and
also with Modulation techniques.
Simply put:
Eb: Bit Energy.
It represents the amount of energy per bit.
No: Noise Spectral Density.
Unit: Watts/Hz (or mWatts/Hz)
Which brings us to the classic definition of Eb/No:
Eb/No: Bit Energy over the Noise Spectral Density.
Unit: dB
That did not help much, did it?
Do not worry. Indeed, with only the theoretical definition it is still very difficult to see how this
ratio is used, or how it can be measured.
But okay, let's walk a little further.

Okay, so how is Eb/No measured?
To understand how this ratio can be measured, let's imagine a simple digital communication
system.


The Eb/No ratio is measured at the receiver, and indicates how strong the signal is relative to
the noise.
Depending on the modulation technique used (BPSK, QPSK, etc.) we have different curves
of Bit Error Rate versus Eb/No.
These curves are used as follows: for a certain received signal, what bit error rate do I
get? Is this bit error rate acceptable for my system?
Considering the gain that the digital system has, we can then set a minimum signal to noise
ratio criterion, so that each service (Voice/Data) operates acceptably.


In other words, we can theoretically determine what the performance of the digital
link would be.
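As an illustration of how such curves are generated, the sketch below computes the well-known theoretical bit error rate of BPSK (and QPSK, per bit) over an AWGN channel, BER = 0.5 * erfc(sqrt(Eb/No)). Real systems with coding, fading and interference behave differently, so take it only as a reference shape.

# Theoretical BER of BPSK in AWGN: BER = 0.5 * erfc(sqrt(Eb/No)).
import math

def ber_bpsk(ebno_db: float) -> float:
    ebno = 10 ** (ebno_db / 10.0)          # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebno))

for db in range(0, 11, 2):
    print(f"Eb/No = {db:2d} dB -> BER = {ber_bpsk(db):.2e}")
# Around 9.6 dB the theoretical BER is already close to 1e-5.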
Note: it is worth remembering that this is a very complex subject. As always, we try to
introduce it to you in the simplest possible way, through examples and simple
concepts. Okay?
For example, a concept that could be explored here - since we are talking about a digital
communication system - is the Noise Figure. But we do not want to repeat here all the
theory explained at University. We would not even need to mention the noise figure here, but
since we did, just understand it as a noise level that every receiver adds, due to its
amplification and signal processing stages.
Concepts like this, and others even more complex, can be studied further if you wish. But for
now, let's continue with our signal to noise ratio.

Eb/No -> Ec/Io
The concept of Eb/No applies to any digital communication system. But today we are talking
specifically about Ec/Io, which is a measure used for evaluation and decisions in CDMA and UMTS.
Note: every technology uses some signal-to-interference ratio. For example, in GSM we use C/I.
As we are speaking of codes, it becomes easier to understand the concepts by looking at a
simplified diagram of Spread Spectrum modulation.
In red, at the transmitter, we have a narrowband signal with data or voice modulated onto it.
This signal is spread and transmitted, and propagates through the medium (air). At the receiver,
the signal is despread - using the same sequence with which it was spread - thus recovering the
original narrowband signal.

To proceed, we must know some more definitions. However, this point is quite delicate, as
we enter a conceptual area where there are differences between authors, differences in
translations/countries, differences in how technologies apply them, etc.
Let's try to define them in a generic way, and only the main ones.
No: Noise Spectral Density;
Noise generated by the RF components of the system, by the air, among others.
Io: Broadband Interference; co-channel interference, including from your own sector.
E: the (average) signal energy - not to be confused with the (average) signal power.
b, c, s...: energy is power accumulated over time, and is therefore related to the 'length' of the
time interval considered (while the average power is independent of time).
Hence we get Eb, Ec and Es, relating respectively to the Bit, Chip and Symbol durations.
Note: With these concepts, several formulas can be derived with different numerators and
denominators. For example, Es = Eb * k, where k = number of bits per symbol. In QPSK
modulation, where k = 2, Es = 2 * Eb. And the derivations can reach far more
complex equations, such as the capacity of an AWGN channel, and further deductions for
equivalences (Ec/No, Eb/Nt, etc.). Again, that is not our purpose here today;
we only mention a few related concepts.
So let's come back to the practical level - noting that the theoretical approaches can be tackled
more easily later, after the basics are understood.
So today let's stick to the most common ratios: Eb/No and Ec/Io.
As we defined, Eb/No is the average energy of a bit of the signal, over the Noise Spectral
Density. It is primarily a parameter specified by the manufacturer for the different bearers (based
on the channel model). But it can also vary with the environment (urban, rural, suburban),
speed, diversity, use of power control, application type, etc.
And now we can begin to define Ec/Io, one of the most important measurements in CDMA and
UMTS.
Note: an important observation is that often, when we refer to Ec/Io, we are actually
referring to Ec/(Io + No). What happens is that, for practical purposes, we only use Ec/Io,
because the interference is much stronger and the noise can be neglected. Put another way: for
CDMA, interference behaves like noise, so both can be considered the same thing.
Okay, let's stop with the issues and concepts, and talk a little about the values of these
indicators and their use in practice.

Eb/No Positive and Ec/Io Negative?
In terms of values, and speaking logarithmically, if a ratio is less than 1, its dB value is
negative. If greater than 1, positive.
Ec/Io is measured in the air, where the signal is spread across the spectrum: so we have a
negative value for the ratio of energy over the total interference (the energy is lower than the
total interference). It is measured at the input of the receiver (NodeB, UE, etc.).
Eb/No, on the other hand, refers to the baseband signal, after despreading and decoding for a
single user - so we have a positive ratio of energy over the total noise. It is measured at the
output of the receiver (NodeB, UE, etc.).

Why should we use Ec/Io?
A more natural question would be: why can't we simply use the Signal Strength
measured by the mobile as a guide for operations such as handover?
The answer is simple: the measured signal level corresponds to the total RF power - from all
cells that the mobile sees.
So we need another quick and simple measure that allows us to evaluate the contribution of
each sector individually.
We measure the pilot channel signal of each sector to assess its quality: if the level of the pilot
is good, then the levels of the traffic channels for our call in this sector will also be good.
Likewise, if the pilot channel is degraded, so will the other channels (including traffic) be, and it
is best to avoid using the traffic channels of this sector.
In UMTS and CDMA systems, we have a pilot channel, some other control channels such as
paging, and the traffic channels.
The Ec/Io varies with several factors, such as the Traffic Load and the RF Scenario.
Of course, the Ec/Io is the final composition of all these factors acting simultaneously (Composite
Ec/Io), but it's easier to understand them by talking about each one separately.

Change in Ec/Io according to the Sector Traffic Load
Each sector transmits a certain power. Suppose in our example we have a pilot channel
power set to 2 W, and the power of the other control channels also fixed at 2 W.
To make it easier to understand, let's calculate the Ec/Io (pilot channel power over total power)
of this sector in a situation where no traffic channel is busy (0 W).


Thus we have:
Ec = 2 W
Io = 0 + 2 + 2 = 4 W
Ec/Io = (2/4) = 0.5 = -3 dB
Now assume that several traffic channels are busy (e.g. 6 W used for traffic channels). This is
a traffic load situation; let's see what the Ec/Io becomes.


Ec = 2 W
Io = 2 + 2 + 6 = 10 W
Ec/Io = (2/10) = 0.2 = -7 dB
Conclusion: As the traffic load in the sector increases, the Ec/Io worsens.
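The same arithmetic in a short Python sketch, using the illustrative power values above (pilot = 2 W, other control channels = 2 W):

# Ec/Io (in dB) as a function of the sector's traffic load.
import math

def ecio_db(pilot_w, control_w, traffic_w):
    total = pilot_w + control_w + traffic_w      # Io: total transmitted power
    return 10 * math.log10(pilot_w / total)      # Ec/Io in dB

print(ecio_db(2, 2, 0))   # no traffic  -> about -3 dB
print(ecio_db(2, 2, 6))   # loaded cell -> about -7 dB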

Change in Ec/Io according to the RF Scenario
Depending on the RF scenario - a single serving sector, a few, or many serving sectors - we
can also measure different values of Ec/Io.
Considering first a situation without external interference, with only one (dominant) serving
sector, the Ec/Io ratio is about the same as initially transmitted.


Ec/Io = (2/8) = 0.25 = -6 dB
Assuming the signal from this sector arrives at the mobile at a level of -90 dBm (Io = -90 dBm),
we have Ec = -90 dBm + (-6 dB) = -96 dBm.

Let us now consider another situation. Instead of one, we have signals from five sectors arriving
at the mobile (for simplicity, all at the same level of -90 dBm).


Now we have Io = -83 dBm (which is the sum of five signals of -90 dBm). And the power of our
pilot channel remains the same (Ec = -96 dBm).
Thus: Ec/Io = -96 - (-83) = -13 dB
Conclusion: the more sectors serving the mobile, the worse the Ec/Io.
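Since adding signals expressed in dBm is a frequent source of confusion, here is the same pilot pollution example redone in Python (levels as in the text: convert to mW, add, convert back):

# Summing powers given in dBm: convert to mW, add, convert back to dBm.
import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10.0)

def mw_to_dbm(mw):
    return 10 * math.log10(mw)

servers = [-90] * 5                              # five sectors at -90 dBm each
io = mw_to_dbm(sum(dbm_to_mw(s) for s in servers))
ec = -90 + (-6)                                  # pilot is 6 dB below the sector total
print(f"Io = {io:.1f} dBm")                      # about -83 dBm
print(f"Ec/Io = {ec - io:.1f} dB")               # about -13 dB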

This situation, where we have many overlapping sectors with the same signal level, is
known as Pilot Pollution - the mobile sees them all at once, each one acting as an interferer to
the others.
The solution in such cases is to eliminate the unwanted signals, by adjusting power parameters
or through physical adjustments (tilt, azimuth), leaving only the dominant signals that should
exist at the problematic place.

Okay, and what are typical values?
We have seen that for CDMA and UMTS systems the measurement of Ec/Io is very
important in the analysis, especially in handover decisions.
And we now also understand the Ec/Io measure as the ratio of 'good' energy over 'bad' energy,
or the 'cleanness' of the signal.
But what are the practical values?
The value of Ec/Io fluctuates (varies), as does any wireless signal. If the value starts to
get too low, you start to have dropped calls, or cannot connect at all. But what, then, is a good
range of Ec/Io for a signal?
In practical terms, values of Ec/Io for a good evaluation of the network (in terms of this
indicator) are shown in the diagram below.

A composite Ec/Io of about -10 dB is a reasonable value to consider as good.
Note: see that we are talking about negative values, and considering them 'good'. In other
words, we are saying that the energy is below the noise (and we still have a good situation).
This is a characteristic of the system itself, and a 'more negative' or 'less negative' Ec/Io is
what allows the assessment of the communication.
In situations where the Ec/Io is very low (a large negative number), and the signal level too (also
a large negative number), first we need to worry about enhancing the weak signal.
Another typical situation: if the measured Ec/Io is very low, even if you have a good signal
level, you cannot connect, or the call will drop constantly.
I hope you've managed to understand how important the Ec/Io is for CDMA and UMTS.
Note, however, that this matter is very complex, and supplementary reading - books and the
internet - can further help you become an expert on the subject.
Anyway, the content presented serves as an excellent reference, especially if you're not
familiar with the concept of signal over noise for CDMA and UMTS.

And the Signal to Noise Ratio for other technologies?
The Ec/Io ratio is the one most commonly used to assess the condition of energy over
interference, but it applies only to technologies that use codes (Ec).
But the concepts understood here for CDMA and UMTS are very similar - and apply - to any
technology, e.g. GSM, where we use C/I.
Anyway, that is a topic for another tutorial; today we have seen Ec/Io.

Conclusion
Today we had a brief introduction to the Ec/Io ratio, a measurement used for decisions in CDMA
and UMTS, together with the measured Signal Strength.
We have seen that it represents the ratio of the signal energy within the duration of one chip of
the pilot channel, over the Spectral Density of Noise + Interference.
This is a very important measure, which to some extent ignores the overall strength of the
signal, and focuses on evaluating how good the desired pilot channel signal is in relation to the
noise that interferes with it.
Returning to our original question: a strong signal level does not necessarily indicate a
strong Ec/Io: it depends on the level of interference.





Goodbye IPv4... Hello IPv6!
You've probably heard of IPv6. These few letters, increasingly well known, bring a
number of innovations and changes that will occur gradually across the World.


Changes that will surely affect you, directly or indirectly - mainly due to the benefits that
IPv6 provides.
In the Telecommunications area, the universe related to IPv6 is also increasingly in focus, and
this subject will surely be present for all of us in the near future.
So let's talk a little about it?

What is IP?
To begin, let's first remember a little about IP. Simply put, IP (Internet Protocol) is the
standard that controls the routing and structure of data transmitted over the Internet. It is
the protocol that controls the way devices such as computers communicate over a
network.
For two devices to communicate, each one must have an identification. In a cellular
network, each phone has a unique number (e.g. 8 digits).

With computers we have the same situation, only the identification 'number' is a little different.
Each 'number' has 4 parts, and each part can have up to three digits (and each part
can vary from 0 to 255).


As in cellular networks, where we can't have two devices with the same number, on a computer
network each device must have a unique IP that identifies it.
It turns out that today it is not only computers that use IP. And the finite number of
possible combinations is no longer sufficient to meet the great demand from all these new
devices.
And that's where the problems start: IPv4...

IPv4
The 'current' version of IP is version 4, hence IPv4. It has the format shown above, and
was standardized at a time when it was more than enough to connect Research Centers and
Universities - the initial goal of the Internet.
In more technical terms, an IPv4 address is a sequence of 32 bits (or four sets of 8 bits). Each
set of 8 bits can range from 0 to 255 (from 00000000 to 11111111), which gives us a total of up
to 4 billion different IPs (more precisely, 4,294,967,296 IPs).
Although it is a very large number, we know it is running out.
In the early 90's, for example, most users' connections to the Internet were through dial-up
modems. Currently, with the popularization of the Internet, the picture is quite different.
Virtually everybody uses 'Always-On' broadband connections: the consumption of addresses
is growing exponentially.
So, what to do?

Extending the life of IPv4
An alternative, which is not really a solution, is to create ways to avoid address conflicts.
In this case, it is common to use techniques or tricks to increase the number of usable addresses
and keep the traditional client-server setup working, such as:
NAT (Network Address Translation)
CIDR (Classless Interdomain Routing)
Temporary Addresses Assignment (such as DHCP - Dynamic Host Configuration Protocol)
However, these techniques do not solve the problem; they only help to temporarily minimize the
IPv4 limitation. That's because they do not meet the main requirements of true
Network and User Mobility.
Existing applications require an increasing amount of bandwidth, while NAT has a
considerable impact on the performance of network equipment.
And, as mentioned earlier, we now have equipment that needs to be 'Always On', that is,
reachable by anyone at any time. This requirement is hard to meet with address
translation in the way.
We also have the problem of plug-and-play devices, more numerous every day, and
with ever more complicated protocol requirements.
In short, we end up having a problem: we must choose between 'allowing new services'
and 'increasing network size'.
But we need both - so what to do?

IPv6
The solution is quite natural: create a new format, larger than the current one, to meet
future demand. This new format, or new version, is version 6 - hence IPv6, the Next
Generation Internet Protocol. That is also why IPv6 is known as IPng (next generation).
Although the 'solution' is apparently simple, its implementation isn't. Unfortunately, things
are not nearly so easy to change. Certainly, much work remains to be done, and a big part of
the effort falls on those responsible for configuration, the Network Administrators.
There is much controversy about when the world will be ready for IPv6, but it is certainly the
path that must be followed. We may even see an episode like the 'Millennium Bug' of 2000,
when some people predicted chaos in computer networks.
But back to the new format: it is now a sequence of 128 bits. Using the same
calculation as above, we arrive at a total of
340,282,366,920,938,463,463,374,607,431,768,211,456 different IP combinations.
Now yes, a 'very' large number: to have an idea, it is 2^96 times (roughly 7.9 x 10^28 times)
larger than the current IPv4 address space!
To shorten the format a little, hexadecimal notation is used instead of the decimal notation used
in IPv4. The new format looks like this:
FDEC:239A:BB26:7311:3A12:FFA7:4D88:1AFF
Note that an address is still quite long, but there are standard ways of shortening it (omitting
leading zeros in each group and compressing a run of zero groups with '::').
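For instance, Python's standard ipaddress module applies exactly these shortening rules (a minimal sketch using the sample address above, plus a second, zero-heavy address to show the '::' compression):

    import ipaddress

    addr = ipaddress.IPv6Address("FDEC:239A:BB26:7311:3A12:FFA7:4D88:1AFF")
    print(addr.compressed)    # fdec:239a:bb26:7311:3a12:ffa7:4d88:1aff (no zero groups to compress)

    addr2 = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
    print(addr2.compressed)   # 2001:db8::1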

Advantages of IPv6
To clarify the advantages of IPv6, let's enumerate some of them.
* Many more addresses

The main advantage of IPv6 is also the simplest one to understand: many more addresses available!


* Mobility!

In Mobile IPv4, the transmission of data packets is usually based on triangular routing,
where packets are sent through an agent (proxy) in the home network before reaching their
final destination.
In IPv6, the degree of connectivity is improved (since each device has its own unique IP), and
each device can communicate directly with the others, making this type of communication
much more efficient.


* Auto Configuration

A new feature of the IPv6 standard (non-existent in IPv4) allows IPv6 hosts to configure
themselves automatically: SLAAC.
SLAAC (Stateless Address Auto Configuration) helps in the design of networks, making
remote configuration far simpler.


* Simpler Packet Format

Although IPv6 is more complex than IPv4 in many other aspects, the packet format is simpler
in IPv6 - the header has a fixed size and fewer fields.
Thus, the process of forwarding packets, for example, turns out to be simpler, which increases
the efficiency of routers.


* Jumbograms

The data flow in a network is not continuous: it is done through the transmission of discrete
packets. Depending on the information being transmitted, several packets are needed.
Because each packet must carry information other than the data itself, we end up with some
'overhead' from this control information, such as the fields used for routing and error
checking.
In IPv4, there is a limit of 65,535 octets per packet (recalling, an octet is a group of 8 bits,
e.g. 11111111).
Today, this 64 kB limit is very low compared to the volume of data transmitted. For example,
in a simple video transmission, thousands of packets need to be transmitted - each one
with its own 'extra' overhead.
In IPv6, with the Jumbogram option, this limit is much higher: 4,294,967,295 octets.
That is, we can send up to 4 GB in a single packet - Jumbograms!


* Native Multicasting, Anycast

The transmission of packets to multiple destinations in a single send operation (multicast) is
part of the base IPv6 specification.
In IPv4, this implementation is optional.


In addition, IPv6 defines a new type of addressing, Anycast. As with multicast, we have
groups that receive packets. The difference is that when a packet is sent to an
anycast group, it is delivered to only one member of the group.

* More Security - Network Layer

In IPv4, IPsec, a Network Layer authentication and encryption protocol, is not required,
and is not always implemented.
In IPv6, we have native support for IPsec, and its implementation is mandatory in the specification.


That is, VPNs and secure networks will be much easier to build and manage in the future.
IPv6 also does not rely on, and has no need for, a 'checksum' field in its header to ensure that the
information was transmitted correctly. Error checking is now the responsibility of the transport
layer (protocols such as UDP and TCP), and one reason for this is that the current infrastructure is
far more robust and reliable than several years ago - that is, we have fewer errors during
transmission.
The result: easier implementation, greatly facilitating the development of systems such as
home network-enabled devices.

IPv4 to IPv6 Transition
The transition from IPv4 to IPv6 must happen slowly and gradually, and it will only end when
there are no more IPv4-only devices. In other words, this transition will take years.
IPv6 was not designed as a drop-in replacement for IPv4, but to solve its problems. It is not
directly interoperable with IPv4 - the two don't 'match' - so both will exist in parallel for a long
time.


So one of the main challenges will be the communication between these networks,
which should take advantage of the existing IPv4 infrastructure.
Although there is no direct 'interoperability' between IPv4 and IPv6, they need some way to
communicate; that is, IPv6 needs a certain 'compatibility' with the previous version.
Suppose two IPv6 hosts wish to communicate with each other, but between them there are
only IPv4 hosts. What to do then?
One technique that can be used is 'tunneling', as shown in the figure below.


In this case, the IPv6 packets are encapsulated in IPv4 packets, sent across the IPv4 hosts,
and unpacked when they reach their IPv6 destination.
Of course, in this example, we lose some features such as priority and flow control along the
IPv4 segment.
Anyway, this is only one possible technique, and a lot has changed since IPv6 was designed.
As more people come to deal with IPv6, it is possible that better solutions will emerge.
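Just to make the encapsulation idea concrete, here is a minimal sketch using the Scapy library (an assumption-level illustration, not from the original article; the addresses are documentation examples and the packet is only built and inspected, not sent). IP protocol number 41 identifies an IPv6 packet carried inside IPv4, the classic '6in4' tunnel:

    from scapy.all import IP, IPv6, ICMPv6EchoRequest

    # Inner IPv6 packet: what the two IPv6 hosts actually exchange
    inner = IPv6(src="2001:db8::1", dst="2001:db8::2") / ICMPv6EchoRequest()

    # Outer IPv4 header added at the tunnel entry (proto=41 means 'IPv6 payload')
    tunneled = IP(src="192.0.2.1", dst="198.51.100.1", proto=41) / inner

    tunneled.show()   # at the tunnel exit, the IPv4 header is stripped and the IPv6 packet continues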

Test your IPv6
If you want to know whether you're ready for IPv6, a good site to check is the following:
http://test-IPv6.com/
There you can get an idea of your IPv6 connectivity through a series of automatic tests, as
shown below.



Conclusion
Today we saw a brief overview of IPv6, the Next Generation Internet Protocol.
But now, a very important observation: IPv6 is not totally different from IPv4; in other
words, everything you have learned about IPv4 will still be very useful when dealing with IPv6.
IPv6 brings many new features compared to the current IPv4 protocol.
In summary, IPv6 improves on IPv4 in addressing, routing, security, network
address translation, administrative tasks and design, and support for mobile devices.
Of course, at first glance IPv6 seems to be the solution to all problems. But remember that
its implementation will require a lot of work. Anticipating this scenario, IPv6 has one last
feature: the definition of a set of possible plans for migration and transition - one of the
biggest challenges to cope with in the near future.
The explanations above are only a simplified summary of this protocol, so you can get
an idea of what lies ahead.


What is MIMO?
New technologies are increasingly present in our lives, evolving towards modern - and
complicated - networks!
To enable this 'Revolution', new techniques must be developed, and existing ones need to
be improved.
Here in the telecomHall 'Course' we'll talk about these techniques, as always trying to explain
each subject in the simplest possible way, so you can understand how these innovations
have become reality.
We begin today with MIMO. Have you heard of it?


Even if you already know it, we invite you to read this brief summary we prepared for you.

SISO, MISO, SIMO
Before we talk specifically about MIMO, let's learn, or remember, what SISO, SIMO and MISO
also mean.
Although it may sound like some sort of tongue twister, these letters actually correspond to
different ways of using a radio channel. That is, they refer to the access modes of the radio
channel in any transmitting and receiving system.
Let's start with SISO - 'Single Input, Single Output' - as this is the most intuitive model. As the
name implies, we have only one input into the radio channel, and only one output from it.
The figure below makes it easier to understand: we use a Transmitter (TX) to transmit data
through a single antenna, and receive it at the Receiver (RX), also through a single antenna.


When the system has multiple inputs and only one output, we have MISO - 'Multiple Input,
Single Output'.


In this case we have multiple inputs into the channel, and only one output.
Note: in practice, we can have more antennas. Just to simplify the illustrations, we will
limit ourselves to a maximum of two antennas.

Remember that we are talking about the radio channel; the figure below helps to better
understand this nomenclature.


And, pretty much the opposite of MISO, we also have SIMO - 'Single Input, Multiple Output'.


MIMO
Once this nomenclature is understood, we can talk about MIMO.
As mentioned, although in practice we may have multiple antennas at the transmitter and
multiple antennas at the receiver, we are representing our system with only two antennas on
each side.


At first glance, comparing with the previous access modes, MIMO seems simple, but
unfortunately it's not.
Its operation is much more complex than the others: we now have multiple inputs and
multiple outputs. The biggest challenge is: how do we recover the original information correctly?
See the illustration of a more realistic scenario, showing what happens in practice.


Although more complex, MIMO brings a huge performance gain, or spectrum efficiency gain, as
discussed below.
And again, the way MIMO works, with all its variations, is very complex. We will try to show
here, as simply as possible, how it works.
A good analogy to learn the concept of MIMO is to imagine that we have two 'mouths' and
use both of them to ask someone:
'How old are you?'
Note that we use 'four' words. As we have two mouths, we can use one to say 'How old'
and the other - at the same time - to say 'are you'.
With two mouths talking at the same time, if the other person's ears are in good shape, and
it's a smart person, he or she will be able to understand.
That is, we speak 'four words' in the 'same time' it would take to speak 'two words'.
What does this mean? In terms of data, assume that each word has 100 KB, so we are
sending 400 KB. Since we are transmitting two streams in parallel, each carrying a piece of
the data, we deliver the 400 KB in half the time it would normally take with a single stream.


Simply put, this is what MIMO makes possible, enabling high rates of 300 to 600 Mbps!
Thus, MIMO is used to improve wireless access in a large number of applications. Several
access standards, such as LTE, WiMAX, HSPA and WiFi, use this gain to achieve the significant
improvements that each one offers.
And now we have a concept that seems to go against everything we learned: MIMO relies on
what used to be seen as interference in the signal path between the base station and the mobile,
that is, on the absence of a clean line of sight (LOS).
For MIMO to present some advantage, we need good diversity in the signal.


In other words, anything that gets in the signal path - such as buildings, cars, people,
etc. - actually contributes to the overall system efficiency, and to the effectiveness of MIMO
applications.
The diversity of the signal - the part that doesn't take a direct path between transmitter and
receiver - once viewed as a problem, is now what makes it possible for the data streams to be
combined and recovered!
As seen in the analogy above, MIMO allows the sending of more than one stream of data on
a single channel. It effectively doubles the speed that we have on that channel - considering
the use of two antennas.
But okay, how does it work?
In the past, DSPs, or Digital Signal Processors, were very hard to develop, due to many
limitations. Currently, however, DSP development has evolved a lot - and is still
evolving. These processors today are very powerful, able to recover our transmitted
signal even when it arrives at the receiver at different time intervals.
The DSPs then have the responsibility of taking the data, 'separating' it into different parts,
sending each part via a different antenna, at the same time, on the same channel - and doing
the reverse process at the receiver.
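As a toy illustration of this split-and-recombine idea (a minimal sketch only, not an actual DSP algorithm): a data block is divided between two 'antenna' streams and reassembled at the other end.

    data = b"HOW OLD ARE YOU?"           # block to transmit

    # Transmitter side: split into two parallel streams (even/odd bytes)
    stream_1 = data[0::2]                # sent by antenna 1
    stream_2 = data[1::2]                # sent by antenna 2

    # Receiver side: interleave the two streams back into the original block
    received = bytearray()
    for b1, b2 in zip(stream_1, stream_2):
        received += bytes([b1, b2])
    received += stream_1[len(stream_2):]  # handles odd-length data

    assert bytes(received) == data
    print(bytes(received))               # b'HOW OLD ARE YOU?'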


The result is obvious: we are able to send a certain amount of data in half the time it would
normally take.
Each antenna has its own stream of data, both in transmission and in reception. In the end,
we then have the complete received data.
Remember, multipath varies according to location, and this variation is very dynamic and
difficult to predict. Still, it is multipath that makes it possible for the receiving antennas to
differentiate between data streams that were transmitted on the same channel at the same time.

OFDM
Here enters access via OFDM - 'Orthogonal Frequency Division Multiplexing'. We'll talk
more about this type of multiplexing/access in another tutorial, but OFDM is very
important for MIMO in the new generations of cellular technologies.
It is easier if we make a comparison.
In a single-carrier system, symbols (or 'pieces of information') are transmitted over a wide
band, each one sequentially, and for a relatively short period of time.

(Figure: single carrier - symbols transmitted in series, over a broad band, with a short symbol period.)
In OFDM, symbols are transmitted in parallel, each one using a relatively narrow portion of the
spectrum. In return, each symbol is transmitted for a much longer period of time!

(Figure: OFDM - symbols transmitted in parallel, each over a narrow band, with a long symbol period.)
This scenario represents an advantage at signal reception, since it is much easier for the
receiver to detect each of the symbols - even if they suffer some degradation - because they
are transmitted over a much longer period.
In wideband transmission, during the short time interval in which each symbol is
transmitted, we may have problems with data loss, making it difficult to recover the
information. If there is interference in the signal, a significant part of it can be degraded, and
it may become impossible to receive certain symbols (pieces of information)
correctly.


In OFDM, while the bandwidth of each sub-carrier is narrower, each transmitted symbol lasts for
a much longer time, and the chances of successfully recovering it are higher.


The following sequence helps us understand this concept.


Comparing OFDM with a single carrier: in OFDM, multiple frequencies are transmitted
in parallel - the symbols are transmitted in parallel!
And each symbol is transmitted over a much longer time period. Even when we
have a fading problem at some point, we are probably still able to retrieve the information.
Thus, with symbols transmitted in parallel and for a longer period of time, the
greater the chances of success at the reception!
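A minimal numerical sketch of this idea (an illustration only, not a full modem; the number of sub-carriers and the cyclic prefix length are assumed): a block of QPSK symbols is placed on parallel sub-carriers with an inverse FFT, and the receiver reverses the process with an FFT.

    import numpy as np

    n_subcarriers = 64
    cp_len = 16                                           # cyclic prefix length (assumed)

    # One random QPSK symbol per sub-carrier - these are the symbols sent 'in parallel'
    bits = np.random.randint(0, 2, (n_subcarriers, 2))
    symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

    # OFDM modulation: the IFFT turns the parallel symbols into one time-domain block
    time_block = np.fft.ifft(symbols)
    tx = np.concatenate([time_block[-cp_len:], time_block])   # prepend the cyclic prefix

    # Receiver: drop the cyclic prefix and run the FFT to recover the symbols
    rx_symbols = np.fft.fft(tx[cp_len:])
    print(np.allclose(rx_symbols, symbols))               # True (ideal, noiseless channel)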
Another new fact concerning what we understand about transmission and reception of data:
the known and common scenario for us is to have one antenna at the transmitter,
transmitting at a certain frequency, and another antenna at the receiver, receiving at this
same frequency.
MIMO introduces a new concept in terms of this known operation and, as we have seen, in
terms of spectral efficiency, through the use of two or more antennas to transmit and two or
more antennas to receive.
And perhaps the most innovative concept: all the antennas transmit at the same frequency,
with different data transmitted by each one!
Surely this is different from everything we learned in school, because we learned that
transmissions on the same frequency will certainly interfere with each other, and we would end
up losing all our data.
Antennas operating on the same frequency and transmitting different data
generate interference, and interference generates losses?
Not any more. Fortunately, using new advanced DSP technologies we can, on the same
frequency, transmit different data on different antennas - and simultaneously. And at the
receiving antennas, we can differentiate between these streams of data.
It is not difficult to understand that this represents a huge advantage in terms of spectrum
efficiency.
If, for example, we have two antennas, we double the efficiency. If we use more antennas, we
triple or quadruple this efficiency. But it is obvious that the greater the number of antennas,
the greater the complexity of the system.

MIMO Example
To conclude, we show an example of packet decoding by a MIMO receiver.
Returning to our initial example, suppose a transmitter with two antennas. We use the
nomenclature 'hij' for the channel 'h' from transmitter antenna 'i' to receiver antenna 'j'.


That is, when a packet 'p1' is transmitted from the first antenna of the transmitter, the receiver
receives 'h11*p1' at its first antenna, and 'h12*p1' at its second antenna. In
other words, the receiver receives a vector whose direction is determined by the channel.


But remember that our example has two transmitter antennas; that is, at the same time we can
send another packet 'p2' through the other antenna.
The receiver then receives 'h21*p2' at its first antenna, and 'h22*p2' at its second antenna.


With this, we have a vector at the receiver end, defined by the sum of all vectors.


Sure, but how can the receiver decode these two packets? Since the two packets are
sent concurrently, they represent interference to each other. To decode one packet, the
receiver projects the received vector onto a direction orthogonal to the interference from the
other packet.
To eliminate the interference from packet 'p2', and thus be able to decode packet 'p1',
the receiver projects onto a direction orthogonal to it (packet 'p2').


Similarly, to decode packet 'p2', the receiver eliminates the interference from the other
packet, 'p1', by projecting onto a direction orthogonal to it.


So, with two antennas we can decode two concurrent packets! Following the same
reasoning, we can understand that MIMO decoding allows as many concurrent
packets as the number of antennas.
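The projection described above can be sketched numerically (a minimal, idealized zero-forcing example with an assumed 2x2 channel matrix and no noise; it is not any specific vendor implementation):

    import numpy as np

    # Assumed flat 2x2 channel: H[j, i] is the gain from TX antenna i to RX antenna j
    H = np.array([[0.9, 0.3],
                  [0.2, 0.8]])

    p = np.array([1.0, -1.0])      # the two packets (symbols) sent at the same time
    y = H @ p                      # what the two receive antennas observe (no noise)

    # Zero-forcing: invert the channel, which projects away each stream's interference
    p_hat = np.linalg.inv(H) @ y
    print(np.round(p_hat, 6))      # [ 1. -1.] - both packets recovered

In practice the channel matrix has to be estimated from known pilot/reference signals, and with noise present more robust detectors (MMSE, for example) are usually preferred.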


Conclusion
Today we had a brief introduction to MIMO, which, as mentioned, is much more complex
than shown here - this was just an introduction so you can understand its basic operation.
However, the benefits pay back the effort of dealing with its complexity.
Hope you enjoyed it; if you liked it, please share telecomHall with your friends. Below
you have a few quick ways to do this.










How to Run a RF Site Survey (Tips and Best Practices)
Of all the tasks a telecommunications professional has, one of the most important is
RF Design.
Mainly because this task results in physical changes in the network, by modifying or adding
new sites and/or equipment.
Based on the settings (and needs) of the current network, several areas - from Planning to
Marketing and Optimization - may require changes to these settings, changes that will define
the future of the network.


Once the area of the new site is defined, another extremely important task is the collection of
candidate points, i.e. points close to the location defined as ideal, where a New Site can
be deployed.
And that part of the project is what we call the 'Site Survey' - also known by other common
variations such as 'RF Survey', 'RF Site Survey' or 'Wireless Survey'.


Note: For simplicity, from now on in this tutorial we'll refer to it as just 'Site Survey'.
If we don't run it properly - for example by choosing 'bad' points - the consequences range from
an overall bad system performance (compared to what it could be), to cases where we need
more sites/equipment to meet the requirements of the same region. In other words, it implies
loss of CAPEX, OPEX and poor Network Quality!
That's more than enough reason to try to run it in the best possible way, no?
Unfortunately, this is the kind of task that can't be learned from theory alone, and its success
depends heavily on the experience of those who execute it. In addition, there is too little specific
reference material available on this subject.
Therefore, we'll now try to share some of its best practices, as a step-by-step guide. As
always, we'll follow the Hunter Methodology to organize all of our work procedures.
So let's go?

Before the Survey
As with any task to be performed, the 'Site Survey' should first of all be well planned, so that
its execution is as good as possible.
Therefore, it is advisable to follow some basic procedures, or some tasks that are common
and necessary: a pre-analysis before any 'Site Survey'.
Before heading to the 'Site Survey' region, it is extremely important to make a complete
analysis of that region. For this, all available resources should be used: aerial photos,
Google Earth, maps, etc.
Important: always take the printed data with you: the areas of interest highlighted, with a
wider zoom and a closer one, especially in the focus area.


Equipments
The first important input comes from Planning, where the equipment to be
installed is defined. It's important to emphasize that it is necessary to know the characteristics
of the equipment, and how it can be installed.
For example: its dimensions, whether it can be installed at the top of towers, how
many antennas are needed, whether it will be a BTS or a Booster/Repeater, etc.

Predictions
From the definition of the equipment, we then spend some time on theoretical calculations,
where we'll set the location for our site.
For this, we use Signal Propagation Prediction tools. These tools, when properly
adjusted, give us a notion very close to what we can achieve in practice. Of course the
predictions do not reflect exactly what will be achieved in the field, but they serve as an
excellent 'reference'.
A well-adjusted prediction tool is one that brings results close to what we find when we collect
data, as in 'Drive Tests'. This adjustment can be done through the use of different
'Propagation Models' for different areas (urban, suburban, etc.).
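As an illustration of what such a propagation model looks like, here is a sketch of the classic Okumura-Hata urban model (valid roughly for 150-1500 MHz; the parameter values in the example are assumed, not taken from the text):

    import math

    def hata_urban_path_loss(freq_mhz, h_base_m, h_mobile_m, dist_km):
        """Okumura-Hata median path loss (dB), urban area, small/medium city."""
        a_hm = (1.1 * math.log10(freq_mhz) - 0.7) * h_mobile_m - (1.56 * math.log10(freq_mhz) - 0.8)
        return (69.55 + 26.16 * math.log10(freq_mhz)
                - 13.82 * math.log10(h_base_m) - a_hm
                + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(dist_km))

    # Example: 900 MHz, 30 m antenna, 1.5 m mobile, 2 km away -> about 137 dB
    print(round(hata_urban_path_loss(900, 30, 1.5, 2.0), 1), "dB")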


The better the resolution of your terrain databases, the greater the accuracy. However, the tool
uses more processing resources and takes longer to run. For a fairly good approximation, we
recommend using at least a resolution of 30 meters, available for free download on the
NASA website.
Other data such as 'Building Heights' - i.e., the heights of existing buildings - also greatly
improve the accuracy of the result, but are more difficult - and expensive - to obtain.
But remember that, regardless of the 'reliability' of the predictions, the most important thing is
the 'comparison' between the candidate points. That is, even if the prediction tool does not
provide results exactly as in the real world, it is always valid to use it at least for 'comparisons'
among the predictions of the candidate points.

Drive Tests
Another excellent way to check new points is the analysis of 'Drive Tests'.
In an ideal world, if it were possible to have a detailed 'Drive Test' of the entire area of
interest, we'd have no need for prediction tools, as we would have true and complete
coverage knowledge, thus knowing exactly where it needs to be improved.
Unfortunately, that is not possible for many reasons, but we can use the already available
'Drive Tests' as a supplement to the analysis - even to validate the results obtained with the
prediction.
Therefore it is important to have quick and easy access to processed Drive Test data, for
example using files already processed in MapInfo and/or Google Earth.


Open the Drive Tests available in the region of interest, and take some time to save a few more
images, which can be very useful in the field - so print them as well.
Once the analysis is done (Prediction, 'Drive Test', and any other available), we can
start the 'Site Survey' process.

What is the Purpose?
When someone asks you to run a 'Site Survey', its purpose is already known - in
other words, the need for improvement that triggered the deployment process of this new
site:
Quality
Coverage
Capacity
While the 'Site Survey' should always try to meet all purposes, one always stands out, or
has higher priority, and this should be taken into account when running it.
In other words: if, for example, the goal is to increase coverage, you should look for a place
with the best view in all directions of interest. But if the goal is capacity, focus on that, and
look for points that will solve that problem.


Concepts
The basic concepts of the 'Site Survey' are very simple: the goal is for you to indicate one or
more points as possible candidates.
These candidate points must be within a region known as the 'Search Ring'. Despite what the
name suggests, this polygon can have any shape, even a square.


Such points are recorded in a proper report, following the processes and documents of each
company, and you should also rank a priority for each point (from the best to the worst
indicated).
This is because the first indicated point may have a problem, such as an owner that doesn't want
to rent, transmission problems, unavailability of infrastructure, etc.
Moreover, more points allow a better margin for negotiation by the area responsible for this
engagement.
To avoid these problems, it is desirable that the 'Site Survey' be conducted together with
the areas of RF, Transmission, Infrastructure, Contracts and others that apply. We know
however that this is almost always impossible, so it is up to the professional who carries it out
to stay alert to all these aspects.
For example, if you work in the RF area and are running it alone, why not note the name
and telephone number of the owners of each point? Your colleagues from the Contracts area
will say Thank You, not to mention that the process will be streamlined.

What equipment to bring with?
Generally we notice the importance of something only when we need it - but don't have it
available!
This also applies to the 'Site Survey'. Imagine arriving at a remote location over 100 km
from any urban center, and realizing you forgot to buy new batteries for the camera! This can
be very frustrating - not to mention painful and unnecessary work!
So we should at least make sure to bring as much of the equipment that applies to the kind of
'Site Survey' to be done!
There is no mandatory rule about what equipment to take, but here's a little 'Check List'
with the main equipment desired and/or necessary. As always, everything depends on your
needs - such as the 'Survey' type, the region, etc.
GPS: for location coordinates. In GPS you can also enter the points of your network sites
and use them as a reference, especially in rural locations.
Camera: for photos.
Batteries: for the Camera.
Keys and lock codes: both for your company's sites and for competitors' sites, when they can be shared
(we'll talk about this later).
Binoculars: to view other distant points, such as possible transmission sites.
Compass: orientation of azimuths.
Phones with Test Mode enabled: to check the signal.
Proper Climbing Equipment: if you need to climb a tower.
Small Notepad: for quick notes, that fits in your pocket.
Template printed with key data to be collected: Use one sheet for each candidate, to record all
relevant and necessary information.
As mentioned, this list is not complete; you may have other, more specific equipment, and you
can extend the list according to your needs.

Photos
Now, when running the 'Site Survey', a very important part is taking photos.
Remember that when you are in the field, you have a clear vision and complete understanding
of the region. However, when you come back to the office, the situation changes
dramatically.
It gets worse if you are gathering photos from several 'Site Surveys'. You run a serious risk
of forgetting the reference for some photos, wasting your work and, worse, degrading the
quality of the final analysis and reports.
When you take panoramic shots, it is important to know the orientation of each photo.
To achieve this in the field, first identify where North is (0 degrees) with the compass, and
make markings on the ground where possible - in the dust, with a stone, etc.


So when you take the photos, just follow these markings. Take photos at positions from 0
degrees to 360 degrees, in steps of 45 degrees.


Another good tip is to always take reference pictures when you begin and end a
sequence - for example as shown above.


When shooting, also remember to leave only a 'small' part of the sky appearing. Remember
that what matters is the area of interest - you don't want to get back to the office and realize
that more than half of the useful area of the photos is sky!
See for example the two photos below. They were taken in the same direction; only in the second
one no care was taken to keep the sky out of the frame.


It is easy to see which one gives us more information, don't you agree? Although it seems
obvious, it's a mistake many people make when taking their first 'Site Survey' photos.

Overview
A problem also common to some beginner designers is 'Limited Vision'.
In the region of interest, they go to a point where the project's goal is 'reached'.
And they stop there!
No matter how good the analysis in the office, nothing replaces field verification. However,
this check should be done as thoroughly as possible.
Suppose for example you are looking for points on the tops of buildings for a given project.
From below (street level), you find some possible candidates, and climb one of them.
From the top of this building, you see a good view of the region to cover, and decide that
this is the indicated point - without visiting the other buildings!
Don't do it - don't be 'lazy': go to 'all' the buildings! Often, points that seem to provide the
'same' coverage turn out to be better than others when you have a broader view of what
each of them can provide.
By avoiding 'Limited Vision' you gain another way of viewing the site: the 'Big Picture'.
In the 'Search Ring' shown below, with only two buildings as candidates, which one would
you select as the most suitable?


Simply by looking at the picture, you would choose the point closest to the center - and not the
one far away from the area of interest.
Of course, the figure is only illustrative, and various other factors must be taken into account in
this decision; but in general, avoiding a limited vision and getting a macro view always helps
to reach the best result.

Site Sharing
An increasingly common issue today is the sharing of infrastructure between operators. This
sharing can even include antennas.
There are companies that specialize in 'Site Sharing', i.e. companies that have their own
infrastructure (such as towers) and provide it to whoever has interest, via rental payment for
example.
It is useful to know beforehand all the possible sharing points, e.g. by plotting these
points in Google Earth, getting a clearer picture of which site can be useful for a project -
you zoom in on the new site region and see the available options.


Moreover, it is necessary to know the sharing guidelines your company follows. That is,
you need to know the priority:
Choose sharing first, whenever possible, in order to speed up the process;
Or aim for exclusive points, indicating sharing only as a last resort. This represents more
spending, but may be the company's strategy and therefore must be followed.

'Roof Tops'
If the 'Site Survey' is conducted in an urban area with buildings as possible candidates, it is
essential that you go up as many of them as possible.
In this case, the criteria of Limited Vision and the Macro Scenario, as seen above, apply
above all.


If Repeaters...
In the case of a 'Site Survey' for the installation of Repeaters, remember to bring extra
equipment to measure the directed (donor) signal: a 'Yagi' antenna (with known model and gain),
a cable to connect it to the phone, and of course, a phone that matches that cable.
Take a printed table (like the one below) to record the relevant data for each scenario.


What data to Collect?
Keep a notebook and a pen on hand. Remember that information is always important, even
when at first glance it doesn't appear to be.
Always conduct the 'Site Survey' as if you were not the person who will do the final
documentation; that is, collect as much data as possible. This way the reports will be made with
the greatest amount of detail, which, as we saw, can make the difference between a good
and a bad final deployed project.

Arriving back in the office
Finally, when arriving at the office, file all data (Photos, 'Drive Tests'...) in the proper place,
especially as indicated in the tutorial on Folder Structure for Telecom.
Remember also to write down your observations and, especially, to rename the photos with the
most relevant names. Never procrastinate: you'll end up forgetting some details, you can be sure.

Conclusion
So, that's it. Hopefully we have clarified your questions and doubts about conducting a 'Site
Survey', and you have learned some of the best practices adopted by professionals.
As we have seen, this is a very important task, affecting directly and indirectly various
aspects of the network, including financial ones.
As in all other activities, in both the Telecom and IT areas, the challenge is to get the best
results, achieving the goals and objectives. For that, it is very important to have organization and
planning before any task, knowing clearly where to obtain or extract the information necessary
for the feasibility analysis. Note that this is just what we always talk about in the Hunter
Methodology.
So, keep trying to follow this methodology in all other activities of your daily work,
taking advantage of the tips presented here and in other sections and tutorials. In no
time you will have the knowledge of the best professionals!












IP Packet switching in Telecom - Part 1
Let me start by saying thanks to Leonardo Pedrini for the privilege of writing this series of
articles for telecomHall. He doesn't do this frequently, as far as I know. If you like the articles
and want to read more, then visit my blog: Smolka et Catervarii (Portuguese-only
content for the moment).


I'd better warn you right now that you'll find my writing style quite different from
Leonardo's. While he emphasizes simplicity, I'm a bit more fond of rigorousness. So I'll make
a sincere effort to keep closer to his style than to mine. But there will be some rough spots
along the way, and I expect this won't discourage you.
Very well... You have probably always heard that telecom networks are based on the circuit
switching paradigm. And that was correct up to about 15 years ago. Then a movement started
to change networks to the packet switching paradigm. This has been a long, long
way, which will be practically complete with the deployment of 4G mobile networks. Our first step
is to understand why this paradigm change was deemed necessary.
Circuit switching means that the communication channels between user pairs are rigidly
allocated for the whole duration of the communication session. Although there are statistical
formulae for circuit switching network capacity planning (see the Wikipedia article about the
Erlang traffic unit), there is capacity waste every time one of the parties isn't using its
communication channel (which is usually full-duplex).



On the other hand, packet switching doesn't allocate full-session circuits. Transmission
capacity in either direction is granted to users just for the time needed to forward a single
data packet. This packet interleaving minimizes the waste of transmission media
capacity.


Unfortunately, there's no such thing as a free lunch. Packet switching adoption has its trade-
offs. The major one is accepting the possibility of congestion, because any network node
can suddenly have more packets to send through an interface than its transmission
capacity allows. Usually that is dealt with using transmission buffers, so we're in the realm of
queuing systems statistics (Erlang C) instead of the more familiar blocking systems
statistics (Erlang B). This and a few other details were the basis for wrong notions about the
unfeasibility of carrier-class telecom services - particularly telephony - over packet
switching networks. And with these articles I expect to bury them at last.
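For readers who want to play with the dimensioning formulae mentioned above, here is a minimal sketch of the Erlang B blocking probability, using its well-known recursive form (the traffic and channel values in the example are arbitrary):

    def erlang_b(traffic_erl, channels):
        """Blocking probability for 'traffic_erl' Erlangs offered to 'channels' circuits."""
        b = 1.0
        for n in range(1, channels + 1):
            b = (traffic_erl * b) / (n + traffic_erl * b)
        return b

    # Example: 10 Erlangs offered to 15 circuits -> a few percent of call attempts blocked
    print(f"Blocking probability: {erlang_b(10.0, 15):.2%}")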
The next basic question to answer is: why IP and not any other packet switching network
architecture? Why not full-fledged OSI, for instance? The answer is quite simple: other
network architectures were considered and discarded because their adoption would be too
difficult, or too expensive. The Internet Protocol suite, on the other hand, was immediately
available and was reliable, cheap and simple. With the Internet boom of the 1990s the
option for IP became unquestionable and quite irreversible.
Here in telecomHall there's a brief explanation of the 7-layer OSI Reference Model.
Likewise, the IP network architecture is structured in 4 layers, which match all the
functionalities of the OSI-RM layers. Look at the diagram below.



The first thing you'll probably say is: wait a minute! You've said four layers, and this diagram
shows five. Why so? The answer is quite simple: the sockets API isn't a real layer - that's
why it's shown in a dotted box. When the TCP/IP architecture was first deployed there was a
need for something to keep different user sessions properly separated. The sockets API was
devised to that effect, became a de facto standard, and was ported to all kinds of
operating systems.
Talking of operating systems, one of the great advantages of the TCP/IP network
architecture is the simple scheme of work division among hardware (network interface card)
and software (operating system and user application). It's easy, it's simple and, above all, it
works.
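To give an idea of how thin the sockets API really is, here is a minimal sketch of a client session using Python's standard socket module (the host name is just a public example server):

    import socket

    # One socket = one user session, kept apart from all the others by the operating system
    with socket.create_connection(("example.com", 80), timeout=5) as s:
        s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(s.recv(200).decode(errors="replace"))   # first bytes of the HTTP response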
In the next articles of this series I will talk with you about the working principles and the
main protocols used, with a focus on the use of all this to build the so-called Next
Generation Networks (NGNs). Unlike the usual explanations that you can find about this, I
won't take a bottom-up approach, but will make a top-down description of this environment.





IP Packet switching in Telecom - Part 2
So far, so good. As promised, let's start our journey about IP networking in the telecom
context from the ceiling down. So let's understand what the heck an
application is.

Note: My blog Smolka et Catervarii (portuguese-only content for the moment)

Technically we call an application any program that runs under the control of, and taking
advantage of the services of, the operating system. That's a fairly reasonable definition for
our purposes, since all networking architectures are devised to allow communication
between applications, not people. Each application has its own way of handling human-machine
interaction (if it exists at all). We're not concerned with this here. All we want to
explain is how applications can reliably exchange data among themselves.


And here we arrive at the first paradigm-breaking aspect of the change from the circuit-
switching-based plain old telephony service (POTS) to the IP-packet-switching-based next-
generation network (NGN).
POTS networks are organized in such a way that you've got dumb (and reasonably cheap)
user terminals connected through a smart (and very, very expensive) network. Every time
the user wants to use a network service - and for a very long time there was only one:
telephony - he/she has to ask the network for it. By means of sound-based network-to-user
signaling and key-pressing user-to-network signaling (see DTMF and ITU-T recommendations
Q.23 and Q.24) the user says 'I want to talk with this user', and the network makes the
arrangements to provide the end-to-end circuit which the communicating parties will use.
IP-based networks, of which the Internet is the major example, were built assuming the
user terminals are smart (and not overwhelmingly expensive) and the network doesn't have
to have more smartness than necessary to perform a single function: take the data packets
from one side to the other with reasonable reliability. All the aspects of communication that
telephony engineers are used to calling 'call control' are negotiated directly between user
applications. This is the function of the so-called application-layer protocols.
So we have, so to speak, two different philosophies to handle call control (which is
another way to say session control): the network-in-the-middle approach and the end-to-
end principle. The schematic call-flow diagrams below give an example of the differences
between them.



Generally speaking, there are two models of application interaction, both widely used: peer-
to-peer and client-server. In peer-to-peer sessions the communicating parties have the
same status, and any of them can request or offer services to the other. Client-server
sessions, on the other hand, have a clear role distinction between the parties: one requests
services (the client) and the other fulfills the service requests (the server).
Most Internet applications use the client-server model, and that goes quite well with the
end-to-end principle. NGN telecommunication services, however, go both ways. There are
services that are a clear fit for the client-server model, like video or audio streaming, and
there are services that use peer-to-peer, like voice and video telephony (by the way,
videoconferencing can go both ways).
This and a few other issues (security, mostly) forced the NGN call-control architecture to use
client-server interactions for signaling, and peer-to-peer or client-server for data exchange,
according to service characteristics. The diagram below is an example of this.

The packet routers between the elements are not shown. And this picture is a gross
oversimplification of the NGN architecture. I will not go into details about this, but if you want
a more rigorous approach to this subject I recommend you start reading ITU-T
recommendations Y.2001 'General overview of NGN' and Y.2011 'General principles and
general reference model for Next Generation Networks'.
Roughly speaking, the AAA (authentication, authorization and accounting) server role goes
to the IP Multimedia Subsystem (IMS), which was initially standardized by 3GPP/ETSI (see
ETSI TS 123 228 V9.4.0 'IP Multimedia Subsystem'), and later adopted by the ITU
(recommendation Y.2021 'IMS for Next Generation Networks'). Actually it does much more
than simply AAA functions. It's the entry door to all NGN signaling, which is based on the
Session Initiation Protocol (SIP) and the Session Description Protocol (SDP) (see ETSI TS 124
229 V9.10.2 'IP multimedia call control protocol based on SIP and SDP'; IETF RFC 3261
'SIP: Session Initiation Protocol'; and IETF RFC 4566 'SDP: Session Description Protocol').
In the next part of this article series we'll take a closer and more formal look at IMS, SIP
and SDP.
IP Packet switching in Telecom - Part 3
At the end of the preceding article I told you that we were going to dig a bit deeper into IMS
and the NGN signaling protocols (all this happens at the application layer of the TCP/IP network
architecture - see the first article of this series).

Note: My blog Smolka et Catervarii (portuguese-only content for the moment)

And so we shall do. I must warn you, though: you'd better fasten your seat belts, because
there's turbulence ahead. Few things can be more intellectually intimidating than the writing
style of telecom standards. Truth be told, they're getting better, but it's still a hard
proposition to read them. Even the pictures can be daunting. So I urge you: don't let this
picture scare you out of reading the rest of this article.



This picture comes from ITU-T Recommendation Y.2021. Look at the shaded round-cornered
rectangle. There is 'core IMS' written on it, and it really is that. But we're interested in a
single entity in there: the Call Session Control Function (CSCF), and its relationship with the
user equipment (desktop, laptop or handheld computers, smartphones, tablets, whatever),
identified as UE in the picture.
Each line connecting entities is called an interface (the formal terminology is 'reference point',
but that doesn't matter). They are the depiction of logical relationships between the entities, and
each interface uses an application-layer protocol (more than one, sometimes). The signaling
interface between CSCF and UE is identified as Gm in the picture. And the application-layer
protocols used on the Gm interface are SIP and SDP (I'm not explaining some acronyms
because they're already explained elsewhere - I really believe that you're following these
articles from the beginning).
And what does the CSCF do? It's the AAA server (and more) that we've talked about in the last
article. Since it looks like most telecomHall readers have a mobile background, we
can explain the CSCF functionality this way: it's a kind of fusion of HLR (Home Location
Register) and AuC (Authentication Center).
But there are actually three entities called CSCF, differing by a prefix letter: P (proxy), I
(interrogating) and S (serving). These three flavors of CSCF exist because we're talking
about telecom services here. So there are the operator's own subscribers, and there can be
roaming users.
Whether the user is local or a roamer, one of the first things he/she has to do when
connecting to the network is making contact with the P-CSCF. Item 5.1.1 of ETSI TS 123
228 offers two alternative methods for P-CSCF discovery. I think the practical way is
combining both, as the name-resolution sketch after the list illustrates:
The Dynamic Host Configuration Protocol (DHCP, for IPv4 or IPv6 networks) gives the UE the IP
addresses (v4 or v6) of the primary and secondary Domain Name System (DNS) servers, which are
capable of resolving the P-CSCF fully-qualified domain name (FQDN) to its IPv4 and/or IPv6 primary
and secondary addresses;
During initial configuration, or in the ISIM (IMS Subscriber Identification Module), or even via
over-the-air (OTA) procedures, the UE receives the FQDN of the P-CSCF.
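A minimal sketch of that final name-resolution step, using Python's standard library (the FQDN below is purely hypothetical and will not resolve on the public Internet; a real UE would use the name received via DHCP/ISIM and the operator's own DNS servers):

    import socket

    pcscf_fqdn = "pcscf.ims.example.net"   # hypothetical P-CSCF name, for illustration only

    try:
        # Ask the DNS for whatever IPv4/IPv6 addresses are registered for that name
        for family, _, _, _, sockaddr in socket.getaddrinfo(pcscf_fqdn, 5060, proto=socket.IPPROTO_TCP):
            print("IPv6" if family == socket.AF_INET6 else "IPv4", sockaddr[0])
    except socket.gaierror:
        print("Name not found - expected here, since the FQDN is fictitious")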
The I-CSCF forwards all user requests to the S-CSCF that is assigned to serve the user. If the user
is local, then that's all. If the user is a roamer, then the S-CSCF of the visited network acts
as an I-CSCF and forwards all user requests to the S-CSCF of the user's home
network.
To understand the remaining entities in the core IMS, we first have to understand that NGN-
based services won't simply kick the present telecom services out of the market. They'll
have to live together, side by side, for a long time yet. So there's a definite need for NGN
and traditional telecom services to interwork. That is: it should be possible for calls
originated at NGN-connected UEs to terminate on common telephony devices, and vice-
versa.
Starting about ten years ago, operators began to substitute traditional telephony switches with
softswitches.
A softswitch is a distributed system (logically, and possibly also geographically), and can be
built (more or less) with an open architecture. Its main building blocks are:
One Media Gateway Controller (MGC), which handles signaling between the softswitch and the rest
of the network elements;
One or more Media Gateways (MGs), which translate media streams between different physical
interconnections.
The MGC controls the MGs assigned to it through an IP-carried signaling protocol whose
specification is found in ITU-T Recommendation H.248.1 'Gateway Control Protocol:
version 3'. The picture below shows how the softswitch elements interconnect with the IP
network and the Public Switched Telephone Network (PSTN), and the signaling protocols used.



So the Media Gateway Control Function (MGCF) is the IMS element responsible for setting
up the Media Gateway which will bridge the IP data stream to a conventional telephony
circuit. Every IMS-enabled MGC has an instance of the MGCF within it.
And that brings another question: since there can be many instances of MGCF available, in
the operator's own network and in other interconnected operators' networks, which one
is the best option to bridge between the NGN and the PSTN for each call? This is the
attribution of the Breakout Gateway Control Function (BGCF).
Last, but not least, there's the Multimedia Resource Function Controller (MRFC). Certain
application servers (see AS-FE in the picture) need help to deliver services to the UEs. Such
help can be:
According to ITU-T Recommendation Y.2021: multi-way conference bridges, announcement
playback and media transcoding;
According to ETSI TS 123 228: mixing of incoming media streams (e.g. for multiple parties), media
stream sourcing (for multimedia announcements), media stream processing (e.g. audio transcoding,
media analysis), and floor control (i.e. managing access rights to shared resources in a conferencing
environment).
Note that the MRFC only controls these activities. The actual execution is handled by
Multimedia Resource Function Processors (MRFPs) in ETSI parlance, or Multimedia
Resource Processor Functional Entities (MRP-FEs) in ITU-T jargon - both names refer to
the same software object.
And something very important to keep in mind: P-CSCF, S/I-CSCF, BGCF, MRFC and MGCF
are logical functions which are implemented in software, so they can exist in one single
host machine, or can be distributed among many host machines. Logically it doesn't matter,
but the physical implementation of each vendor can vary, and that can cast doubts if you're not
aware of it.















What is Antenna Electrical and Mechanical Tilt (and How to use it)?
The efficiency of a cellular network depends on the correct configuration and adjustment of
its radiating systems: the transmit and receive antennas.
And one of the most important system optimization tasks is based on correctly adjusting tilts,
that is, the inclination of the antenna in relation to an axis. With the tilt, we direct the radiation
further down (or up), concentrating the energy in the new desired direction.
When the antenna is tilted down, we call it 'downtilt', which is the most common use. If the
inclination is up (very rare and extreme cases), we call it 'uptilt'.
Note: for this reason, when we refer to tilt in this tutorial we are talking about
'downtilt'. When we need to talk about 'uptilt' we'll use that nomenclature explicitly.


Tilt is used when we want to reduce interference and/or coverage in some specific
areas, so that each cell serves only its designed area.


Although this is a complex issue, let's try to understand in a simple way how all of this
works.

But Before: Antenna Radiation Diagram
Before we talk about tilt, it is necessary to talk about another very important concept: the
antenna radiation diagram.
The antenna radiation diagram is a graphical representation of how the signal spreads out
from that antenna, in all directions.
It is easier to understand by looking at an example of the 3D diagram of an antenna (in this case,
a directional antenna with a horizontal beamwidth of 65 degrees).


The representation shows, in a simplified form, the gain of the signal in each of these
directions. From the center point of the X, Y and Z axes, we have the gain in all directions.
If we look at the antenna diagram 'from above', and also 'from the side', we see
something like what is shown below.


These are the Horizontal (viewed from above) and Vertical (viewed from the side) diagrams
of the antenna.
While this visualization is good for understanding the subject, in practice we do not work with
the 3D diagrams, but with the 2D representation.
So, the same antenna shown above may be represented as follows.


Usually the diagrams have lines and numbers to help us check the exact 'behavior' in each
direction.
The 'straight lines' tell us the direction (azimuth), like the numbers 0, 90, 180 and 270 in the
figures above.
And the 'curves' or 'circles' tell us the gain in each direction (for example, the larger circle tells you
where the antenna achieves a gain of 15 dB).
According to the applied tilt, we'll have a different, modified diagram, i.e. we affect the
coverage area. For example, if we apply an electrical tilt of 10 degrees to the antenna shown
above, its diagrams become as shown below.


The most important thing here is to understand this 'concept', and to be able to imagine what
the 3D pattern would look like, as a combination of its Horizontal and Vertical diagrams.
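If you want to play with this concept, here is a minimal sketch of a simplified horizontal pattern, using the 3GPP-style parabolic approximation A(theta) = -min[12*(theta/theta_3dB)^2, A_max]; the 65-degree beamwidth and the 20 dB back-lobe limit are assumed values:

    theta_3db = 65.0    # horizontal beamwidth in degrees (assumed)
    a_max = 20.0        # maximum attenuation in dB (assumed back-lobe limit)

    def relative_gain_db(theta_deg):
        """Parabolic pattern: 0 dB at boresight, capped at -a_max off the sides/back."""
        return -min(12.0 * (theta_deg / theta_3db) ** 2, a_max)

    for azimuth in (0, 32.5, 65, 90, 180):
        print(f"{azimuth:6.1f} deg -> {relative_gain_db(azimuth):6.1f} dB")

Note that at half the beamwidth (32.5 degrees) this model gives exactly -3 dB, which is the usual definition of beamwidth.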


Now yes, what is Tilt?
Right, now we can talk specifically about tilt. Let's start by remembering what the tilt of an
antenna is, and what its purpose is.
The tilt represents the inclination, or angle, of the antenna in relation to its axis.

As we have seen, when we apply a tilt, we change the antenna radiation diagram.
For a standard antenna, without tilt, the diagram is formed as we see in the following
figure.


There are two possible types of tilt (which can be applied together): Electrical Tilt and
Mechanical Tilt.
The mechanical tilt is very easy to understand: by physically tilting the antenna, through specific
accessories on its bracket, without changing the phase of the input signal, the diagram (and
consequently the directions of signal propagation) is modified.


With the electrical tilt, the modification of the diagram is obtained by changing the
signal phase of each element of the antenna, as seen below.


Note: the electrical tilt can have a fixed value, or it can be variable, usually adjusted through
an accessory such as a rod or bolt with markings. This adjustment can be either manual or
remote; in the latter case it is known as 'RET' (Remote Electrical Tilt), usually a small
motor connected to the adjustment screw/rod that does the job of setting the tilt.
Without a doubt the best option is to use antennas with variable electrical tilt AND remote
adjustment, because this gives much more flexibility and ease to the optimizer.
However these solutions are usually more expensive, and therefore antennas with the
manual variable electrical tilt option are more common.
So, if you don't have the budget for antennas with RET, choose at least antennas with
manual but 'variable' electrical tilt; only when you have no other option, choose
antennas with fixed electrical tilt.

Changes in Radiation diagrams: depends on the Tilt Type
We have already seen that when we apply a tilt (electrical or mechanical) to an antenna, the
signal propagation changes, because we change the 3D diagram as discussed earlier.
But this variation is also different depending on whether the tilt is electrical or mechanical.
Therefore, it is very important to understand how the radiated signal is affected in each
case.
It is possible to explain these effects through calculations and definitions of dB, nulls and gains
on the diagram. But the following figures show, in a much simpler way, how the
horizontal beamwidth behaves when we apply electrical and mechanical tilt to an antenna.
See how the Horizontal Radiation Diagram looks for an antenna with a horizontal beamwidth of
90 degrees.


Of course, depending on the horizontal beamwidth, we'll have other figures. But the idea, or
the 'behavior' is the same. Below, we have the same result for an antenna with horizontal
beamwidth of 65 degrees.


Our goal is that, with the pictures above, you can understand how each type of tilt affects the
end result in coverage, which is one of the most important goals of this tutorial.
But the best way to verify this concept in practice is by checking the final coverage that
each one produces.
To do this, let's take as a reference a simple 'coverage prediction' of a sample cell.
(These results could also be obtained from detailed Drive Test measurements in the cell
region).


Then we will generate 2 more predictions: the first with an electrical tilt of 8 degrees (and no
mechanical tilt), and the second with only a mechanical tilt of 8 degrees.


Analyzing the diagrams for both types of tilt, as well as the results of the predictions (these
results can also be confirmed by drive test measurements), we find that:
With the mechanical tilt, the coverage area is reduced in the central direction, but the
coverage area in the side directions is increased.
With the electrical tilt, the coverage area suffers a uniform reduction in the
direction of the antenna azimuth, that is, the gain is reduced uniformly.
Conclusion: the advantages of one tilt type over the other are basically related to its
application, when one of the two results above is desired/required.
But in general, the basic concept of tilt is that when we apply tilt to an antenna, we
improve the signal in areas close to the site, and reduce the coverage in more remote
locations. In other words, when we're adjusting the tilt we seek a signal as strong as
possible in the areas of interest (where the traffic should be), and similarly, a signal as weak
as possible beyond the borders of the cell.
Of course everything depends on the 'variables' involved, such as tilt angle, height and type of
antenna, and also the topography and existing obstacles.
As a rough approach, but one that can be used in practice, the tilt angle can be estimated
through a simple calculation of the vertical angle between the antenna and the area of interest.
In other words, we choose a tilt angle in such a way that the desired coverage areas are in
the direction of the vertical diagram.
It is important to compare:
the antenna angle toward the area of interest;
the antenna vertical diagram.
We must also take into account the antenna nulls. These null points in the antenna diagrams
should not be pointed toward important areas.
As a basic formula, we have:


Angle = ArcTAN (Height / Distance)
Note: the height and distance must be in the same measurement units.
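As a minimal sketch of this calculation in Python (the 30 m antenna height and 500 m distance below are just hypothetical values chosen for illustration), converting the result to degrees:

import math

def estimate_tilt(antenna_height_m, distance_m):
    """Estimate the downtilt angle (in degrees) toward a target point,
    using Angle = ArcTAN(Height / Distance).
    Height and distance must be in the same unit (meters here)."""
    return math.degrees(math.atan(antenna_height_m / distance_m))

# Hypothetical example: 30 m antenna, area of interest 500 m away
print(round(estimate_tilt(30, 500), 1))  # ~3.4 degrees

In practice this result would still be compared against the antenna's vertical diagram and nulls, as discussed above.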

Recommendations
The main recommendation to be followed when applying tilts is to use them with caution.
Although the tilt can reduce interference, it can also reduce coverage, especially in indoor
locations.
So, calculations (and measurements) must be made to predict (and check) the results, and
if they indicate coverage loss, we should re-evaluate the tilt.
It is good practice to define typical (default) tilt values to be applied to the
network cells, varying only based on region, cell size, and antenna heights and types.
It is recommended not to use too aggressive values: it is better to start with a small tilt in
all cells, and then make adjustments as needed to improve coverage/interference.
When using mechanical tilt, remember that the horizontal beamwidth becomes wider toward the
antenna sides, which can represent a C/I problem in the coverage of neighboring
cells.
Always make a local verification after changing any tilt, however small the change may have
been. This means assessing the coverage and quality in the area of the changed cell, and also
in the affected region. Always remember that a problem may have been solved ... but another
may have arisen!

Documentation
The documentation is a very important task in all activities of the telecommunications area.
But this importance is even greater when we talk about Radiating System documentation
(including tilts).
It is very important to know exactly 'what' we currently have configured at each network
cell. And it is equally important to know 'why' a given value was changed, or optimized.
Professionals who do not follow this rule often have to perform rework, for several reasons,
simply because the changes were not properly documented.
For example, if a particular tilt was applied to remove an interfering signal at a VIP
customer, it should go back to the original value when the frequency plan is fixed.
Another case, for example, is when the tilt was applied due to congestion problems. After the
sector expansion (TRX, Carriers, etc.), the tilt should return to the previous value, reaching
a greater coverage area and, consequently, generating greater overall revenue.
Yet another case is when a new site is activated: all neighboring sites should
be reevaluated, both in tilts and azimuths.
Of course, each case should be evaluated according to its characteristics, and only
then should the final tilt values be decided. For example, if there is a large building in front of an
antenna, increasing the tilt could end up completely eliminating the signal.
In all cases, common sense should prevail, evaluating the result through all the possible
tools: calculations (such as Predictions), data collection (such as Drive Test) and KPIs.

Practical Values
As we can see, there is no 'rule', or default value, for all the tilts of a network.
But considering the values most commonly found in the field, reasonable values are:
15 dBi gain: default tilt between 7 and 8 degrees (8 degrees for smaller cells).
18 dBi gain: default tilt between 3.5 and 4 degrees (again, 4 degrees for smaller cells).
These values typically result in 3 to 5 dB of loss on the horizon.
Note: the default tilt is slightly larger in smaller cells because these cells are in dense
areas, and a slight coverage loss won't have as much effect as in larger cells. And
in the case of very small cells, the tilt is practically mandatory; otherwise we run the risk of
creating very poor coverage areas at their edges due to antenna nulls.
It is easier to control a network when all cells have approximately the same value on almost
all antennas: with a small value or even without tilt applied to all cells, we have an almost
negligible coverage loss, and a good C/I level.
Thus, we can worry about - and focus - only on the more problematic cells.
When you apply tilts to antennas, do it in a structured manner, for example with steps of 2
or 3 degrees; document it and also let your team know these steps.
As already mentioned, the mechanical tilt is often changed through the adjustment of mechanical
devices (1) and (2) that fix the antennas to the brackets.


And the electrical tilt can be modified, for example, through rods or screws, usually located at
the bottom of the antenna, which, when moved, apply a corresponding tilt to the
antenna.


For example, in the figure above we have a dual antenna (two frequency bands) and, of
course, 2 rods (1) and (2) that are moved, each with a small display (3) indicating
the corresponding electrical tilt, one for each band.

And what are the applications?
From the definitions so far, we've already seen that tilt has several applications, such as
minimizing unwanted overlap between neighboring cells, e.g. improving the conditions for the
handover. We can also apply tilt to remove local interference and increase the traffic
capacity, and there are also cases where we simply want to change the size of certain cells, for
example when we insert a new cell.


In A Nutshell: the most important thing is to understand the concept, or effect of each
type of tilt, so that you can apply it as best as possible in each situation.

Final Tips
The tilt subject is far more comprehensive than what we (tried to) demonstrate here today, but
we believe it is enough for you to understand the basic concepts.
A final tip concerns applying tilts to antennas with more than one band.
This is because in different frequency bands we have different propagation losses. For this
reason, antennas that support more than one band have different radiation diagrams and,
above all, different gains and electrical tilt ranges.
And what's the problem?
Well, suppose as an example an antenna that has band X, the lower one, and band Y, the
higher one.
Analyzing the characteristics of this specific antenna, you'll see that the electrical tilt
ranges are different for each band.
For example, for this same dual antenna we can have:
X band: electrical tilt range from 0 to 10 degrees.
Y band: electrical tilt range from 0 to 6 degrees.
The gain of the lower band is always smaller, as a way to 'compensate' for the smaller
propagation loss that this band has relative to the other. In this way we can achieve a roughly
equal coverage area on both bands, of course, as long as we use 'equivalent' tilts.
Okay, but in the example above, the maximums are 10 and 6. What would be an equivalent tilt?
So the tip is this: always pay attention to the correlation of tilts between antennas with
more than one band being transmitted!
The suggestion is to maintain an auxiliary table, with the correlation of these pre-defined
values.
Thus, for the electrical tilt of a given cell:
X Band ET = 0 (no tilt), then Y Band ET = 0 (no tilt). Ok.
X Band ET = 10 (maximum possible tilt), then Y Band ET = 6 (maximum possible tilt). Ok.
X Band ET = 5. And there? By correlation, Y Band ET = 3!
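A minimal sketch of such a correlation in Python, assuming a simple linear proportion between the two ranges (the 0-10 and 0-6 limits are the hypothetical X and Y ranges of this example):

def equivalent_tilt(x_tilt, x_max=10.0, y_max=6.0):
    """Map an electrical tilt set on band X (range 0..x_max) to the
    proportionally 'equivalent' tilt on band Y (range 0..y_max).
    This is only a first approximation; real equivalence depends on
    each band's vertical diagram."""
    return round(x_tilt * y_max / x_max, 1)

print(equivalent_tilt(0))    # 0.0
print(equivalent_tilt(10))   # 6.0
print(equivalent_tilt(5))    # 3.0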
Obviously, this relationship is not always a 'rule', because it depends on each band's specific
diagrams and how each one reaches the areas of interest.
But it is worth paying attention not to end up applying the maximum tilt in one band (Y ET = 6)
and the 'same' value (X ET = 6) in the other band, because even though they have the same
'value', they are actually not 'equivalent'.
After you set this correlation table for your antennas, distribute it to your team so that, when
in the field, whenever they have to change the tilt of one band, they will automatically know the
approximate tilt that should be adjusted in the other(s).

And how to verify changes?
We have also said previously that the verification of the effects of tilt adjustments can be
done in various ways, such as through drive tests, coverage predictions, measurements on site
or in the areas of interest, or also through counters and Key Performance Indicators (KPI).
Specifically regarding verification through performance counters, in addition to the directly
affected KPIs, an interesting and efficient form of verification is through distance counters.
On GSM, for example, we have TA counters (number of Measurement Reports per TA, number of
Radio Link Failures per TA).
Note: we talked about TA here at telecomHall, and if you have more interest in the subject,
click here to read the tutorial.
This type of check is very simple to do, and the results can be clearly evaluated.
For example, we can check the effect of a tilt applied to a particular cell through counters in
a simple Excel worksheet.

Through the TA information for each cell, we know how far the coverage of each one reaches.
So, after we change a particular tilt, we simply export the new KPI data (TA) and
compare the new coverage area (and also the new traffic distributions/concentrations).
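As an illustration of this kind of check, here is a minimal Python sketch. The TA histograms below are invented sample values (real data would come from your OSS export), and the ~550 m per TA step is the usual GSM approximation:

# Hypothetical TA histograms: number of Measurement Reports per TA value
ta_before = {0: 400, 1: 350, 2: 300, 3: 250, 4: 150, 5: 80}
ta_after  = {0: 480, 1: 400, 2: 300, 3: 150, 4: 40, 5: 10}

def coverage_stats(ta_hist, ta_step_m=550):
    """Return total samples and the distance below which 90% of the
    samples are concentrated, assuming ~550 m per GSM TA step."""
    total = sum(ta_hist.values())
    acc = 0
    for ta in sorted(ta_hist):
        acc += ta_hist[ta]
        if acc / total >= 0.9:
            return total, (ta + 1) * ta_step_m
    return total, (max(ta_hist) + 1) * ta_step_m

print(coverage_stats(ta_before))  # (1530, 2750) -> 90% of traffic within ~2.75 km
print(coverage_stats(ta_after))   # (1380, 2200) -> after the tilt, within ~2.2 km

In this invented example, the tilt pulled 90% of the traffic roughly 550 m closer to the site, which is the kind of effect we want to confirm.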
Another way, perhaps even more interesting, is to plot this data in a GIS program, for
example Google Earth. From the counter data table, and an auxiliary table with the
physical information of the cells (cell name, coordinates, azimuth), we can have a far more
detailed result, allowing precise checking as well.


Several other interesting pieces of information can be obtained from the report (map) above.
When we click on a point, we get its traffic information. The color legend also assists in
this task. For example, in regions around the red dots we have traffic between 40 and 45
Erlangs. By the same logic, light yellow points have between 10 and 15 Erlangs
according to the legend; see what happens when we click at that particular location: we have
12.5 Erlangs.


Another piece of information that adds value to the analysis, also obtained by clicking any
point, is the percentage of traffic at that specific location. For example, the yellow dot we
clicked, with 12.5 Erlangs, corresponds to 14% of the total of 88.99 Erlangs that the cell carries
(the sum of all points).


Another interesting piece of information is the check of coverage far from the site,
where we still have some traffic. In the analysis, the designer must take into account whether the
coverage is rural or not. If it is rural coverage, it may be maintained (depending on company
strategy). In sites located in cities, such cases are most likely 'spurious' signal, and should
probably be removed, for example with the use of tilt!


The creation and manipulation of the tables and maps processed above are the subject of our next
tutorial, 'Hunter GE TA', but they are not complicated to obtain manually, mainly the data
in Excel, which already allows you to extract enough information and help.

Conclusion
Today we've seen the main characteristics of tilts applied to antennas.
A good choice of tilts keeps network interference levels under control, and consequently
provides the best overall results.
The application of tilt always results in a loss of coverage, but what one should always bear
in mind is whether the reduced coverage should be there or not!
Knowing well the concept of tilt, and especially understanding the different effects of
mechanical and electrical tilt, you will be able to achieve the best results in your network.


IP Packet switching in Telecom - Part 4
And then we finally get to NGN signaling protocols: SIP and SDP. The picture below was
extracted from RFC 3261 and gives a fairly good example of a SIP dialog between two
users, Alice and Bob.


The entities involved in the call setup are called User Agents (UA). UAs which request
services are called User Agent Clients (UAC), and those which fulfill requests are called User
Agent Servers (UAS). Although the basic operating mode is end-to-end, the model supports
the use of intermediate servers (proxies or back-to-back User Agents, B2BUA) that relay
requests from one user to the other. In the picture, Alice's softphone and Bob's SIP
phone are the end-to-end user agents, while there are two proxy servers: atlanta.com and
biloxi.com. Linking this with what we already know about IMS, we can identify the P-CSCF
as a SIP proxy server, while the communicating UEs are the end-to-end UAs.


Note: also visit my blog Smolka et Catervarii (Portuguese-only content for the moment).
Quoting RFC 3261:
SIP does not provide services. Rather, SIP provides primitives that can be used to
implement different services. For example, SIP can locate a user and deliver an opaque
object to his current location. If this primitive is used to deliver a session description written
in SDP, for instance, the endpoints can agree on the parameters of a session. If the same
primitive is used to deliver a photo of the caller as well as the session description, a caller
ID service can be easily implemented. As this example shows, a single primitive is typically
used to provide several different services.
SIP primitives are:
REGISTER: indicates a UA's current IP address and the Uniform Resource Identifiers (URI) for which
it would like to receive calls;
INVITE: used to establish a media session between UAs;
ACK: confirms message exchanges with reliable responses (see below);
PRACK (Provisional ACK): confirms message exchanges with provisional responses (see below). This
was added by RFC 3262;
OPTIONS: requests information about the capabilities of a SIP proxy server or UA, without setting
up a call;
CANCEL: terminates a pending request;
BYE: terminates a session between two UAs.
Typically a SIP request has to have a response. Like HTTP, SIP responses are identified
by three-digit numbers. The leftmost digit says to which category the response belongs:
Provisional (1xx): request received and being processed;
Success (2xx): request was successfully received, understood, and accepted;
Redirection (3xx): further action needs to be taken by the sender to complete the request;
Client Error (4xx): request contains bad syntax or cannot be fulfilled at the destination server/UA;
Server Error (5xx): the destination server/UA failed to fulfill an apparently valid request;
Global Failure (6xx): the request cannot be fulfilled at any server/UA.
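Just to fix the idea of these response classes, here is a minimal Python sketch (illustrative only, not part of any real SIP stack):

def sip_response_class(code):
    """Classify a SIP response code by its leftmost digit, per RFC 3261."""
    classes = {
        1: "Provisional",
        2: "Success",
        3: "Redirection",
        4: "Client Error",
        5: "Server Error",
        6: "Global Failure",
    }
    return classes.get(code // 100, "Unknown")

print(sip_response_class(180))  # Provisional (e.g. 180 Ringing)
print(sip_response_class(200))  # Success (e.g. 200 OK)
print(sip_response_class(486))  # Client Error (e.g. 486 Busy Here)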
Session Description Protocol (SDP) is described in RFC 4566 (warning: the IETF mmusic
working group is preparing an Internet Draft which will eventually supersede RFC 4566).
As a matter of fact, it should be called Session Description Format, since it's not a protocol as we
usually know it. SDP data can be carried over a number of protocols, and SIP is one of them
(although RFC 3261 says that all SIP UAs and proxy servers must support SDP for session
parameter characterization).
Quoting RFC 4566:
An SDP session description consists of a number of lines of text of the form:
<type>=<value>
where <type> MUST be exactly one case-significant character and <value> is structured
text whose format depends on <type>. In general, <value> is either a number of fields
delimited by a single space character or a free format string, and is case-significant unless a
specific field defines otherwise. Whitespace MUST NOT be used on either side of the = sign.
An SDP session description consists of a session-level section followed by zero or more
media-level sections. The session-level part starts with a "v=" line and continues to the first
media-level section. Each media-level section starts with an "m=" line and continues to the
next media-level section or end of the whole session description. In general, session-level
values are the default for all media unless overridden by an equivalent media-level value.
Some lines in each description are REQUIRED and some are OPTIONAL, but all MUST appear
in exactly the order given here (the fixed order greatly enhances error detection and allows
for a simple parser). OPTIONAL items are marked with a "*".
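A minimal Python sketch of a parser for this <type>=<value> format (illustrative only: it does not validate the required line order or the session/media split, and the session description below uses made-up values such as the user 'alice' and documentation IP addresses):

def parse_sdp(text):
    """Parse SDP lines of the form <type>=<value> into (type, value) pairs.
    <type> is a single case-significant character; no whitespace is
    allowed around the '=' sign."""
    pairs = []
    for line in text.splitlines():
        if not line:
            continue
        t, _, v = line.partition("=")
        pairs.append((t, v))
    return pairs

# Hypothetical session description offering one audio stream (PCMU)
sdp = ("v=0\r\n"
       "o=alice 2890844526 2890844526 IN IP4 192.0.2.1\r\n"
       "s=Example\r\n"
       "c=IN IP4 192.0.2.1\r\n"
       "t=0 0\r\n"
       "m=audio 49170 RTP/AVP 0\r\n")
for t, v in parse_sdp(sdp):
    print(t, "=", v)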


Here's an example of an actual SDP session description:


Very well. I think that's enough for you to understand how NGN signaling works. Now it's time
to go one step down the TCP/IP protocol stack, so in our next article we'll start
talking about transport protocols, and will understand how the socket API is used to create
separate sessions over the transport protocols' service.

What is Retransmission, ARQ and HARQ?
It's very important to use solutions that improve the efficiency of the adopted model in any
data communication system. If the transmission is 'Wireless', this need is even greater.
In this scenario we have techniques that basically check, or verify, whether the information sent
by the transmitter arrived correctly at the receiver. In the following example, we have a
packet being sent from the transmitter to the receiver.


If the information arrived properly (complete), the receiver is ready to receive (and process)
new data. If the information arrived with some problem, corrupted, the receiver must
request that the transmitter send the packet again (retransmission).


Let's understand a little more about these concepts, which are increasingly used (and required) in
current systems.


Note: All telecomHall articles are originally written in Portuguese. Following that, we translate them
to English and Spanish. As our time is short, you may find some typos (sometimes we just
use the automatic translator, with only a final and 'quick' review). We apologize and count on
your understanding of our effort. If you want to contribute by translating / correcting
these languages, or even by creating and publishing your own tutorials, please contact us: contact.

Error Checking and Correction
We start by talking about errors. Errors are possible, mainly due to the transmission link.
In fact, we can even 'expect' errors when it comes to Wireless Data Transmission.
If we have errors, we need to take some action. In our case, we can divide it into two steps:
error checking and error correction.
Error checking is required to allow the receiver to verify whether the information that arrived is
correct or not.
One of the most common methods of error checking is the CRC, or 'Cyclic Redundancy
Check', where bits (CRC) are added to a group of information bits. The CRC bits are
generated based on the contents of the information bits. If an error happens to the
information bits, the CRC bits are used to detect it and help recover the degraded information.
The level of protection provided is determined by the ratio of the number of CRC bits to the
number of information bits. Above a certain error level, the information can no longer be
recovered. CRC protection is used in practically all existing Voice and Data applications.
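A minimal sketch of this idea in Python, using the standard CRC-32 available in the zlib module (the payload below is just an example; real radio systems use CRC polynomials defined by each standard):

import zlib

def add_crc(info: bytes) -> bytes:
    """Append a 4-byte CRC-32 'signature' to the information bits."""
    return info + zlib.crc32(info).to_bytes(4, "big")

def check_crc(packet: bytes) -> bool:
    """Recompute the CRC over the information and compare with the
    received CRC: True means 'probably OK', False means 'NOT OK'."""
    info, rx_crc = packet[:-4], packet[-4:]
    return zlib.crc32(info).to_bytes(4, "big") == rx_crc

packet = add_crc(b"example information bits")
print(check_crc(packet))                        # True: arrived OK
corrupted = bytes([packet[0] ^ 0x01]) + packet[1:]
print(check_crc(corrupted))                     # False: ask for retransmission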
The following diagram shows a simplified demonstration of how the CRC is used.


And the CRC is directly connected to Error Correction methods. There are various forms
of Forward Error Correction (FEC), but the main idea is, given a certain link quality,
to achieve the lowest possible number of required retransmissions.
By minimizing the number of retransmissions we end up with a more efficient data flow,
including - mainly - better 'Throughput'.
In a simplified way: the CRC lets you know if a packet arrived 'OK' or 'NOT OK'. Every
packet that is sent has a CRC, or a 'signature'. As an analogy, it's like when we send a
letter to someone, and at the end we sign it: 'My Full Name'. When the other person receives
this letter (information), he checks the signature and reads: 'My Wrong Name'. In this case, he
tells the messenger: 'I don't know any 'My Wrong Name', this information has some problem.
Please ask the sender to send it again!'.
That is, we do CRC checks. If the CRC is 'wrong', the information is 'wrong'. If the CRC is
'correct', the information is probably 'correct'.

Retransmissions
Retransmission is then: sending the information again (repeating it) to the receiver, after it
makes such a request. The receiver requests that the information be retransmitted whenever it
cannot decode the packet, or the result of the decoding is an error. That is, after
checking that the information that reached the receiver is not 'OK', we request it to be
retransmitted.


Of course, when we have a good link (SNR), without interference or problems that may
affect data integrity, we have virtually no need for retransmissions.
In practice, in the real world, this rarely happens, because the links can face the
most varied adversities. Thus, an efficient mechanism to enable and manage
retransmissions is essential.
We consider such a mechanism efficient when it allows data communication over a link to meet
the quality requirements that the service demands (QoS).
Voice, for example, is a service where retransmission does not apply. If a piece of
information is lost and then retransmitted, the conversation becomes unintelligible.
On the other hand, data services practically rely on retransmission, since most have - or
allow - a certain tolerance to delays, some more, some less. The only exception is
'Real Time' services.
But it is also important to take into account that the greater the number of needed
retransmissions, the lower the data transmission rate that is effectively achieved: if the
information has to be retransmitted several times, it will take longer for the receiver to
obtain the complete - final - information.

ARQ
Until now we have talked in a generic way about data retransmissions, error checking and
correction. Let's now see some real and practical schemes.
The simplest (and most common) control scheme using what we described above is known as
ARQ, or 'Automatic Repeat Request'.
In ARQ, when we have a 'bad' packet, the system simply discards it and asks for a
retransmission (of the same packet). For this, it sends a feedback message to the
transmitter.


These feedback messages are messages that the receiver uses to inform whether the
transmission was successful or not: 'ACKnowledgement' (ACK) and 'Non-ACKnowledgement'
(NACK). These messages are transmitted from the receiver to the transmitter, and
respectively inform a good (ACK) or bad (NACK) reception of the previous packets.
If in the new retransmission the packet keeps arriving with errors, the system requests yet
another retransmission (still of this same packet). That is, it sends another 'NACK' message.


Data packets that are not properly decoded are discarded. Each data packet or
retransmission is decoded separately. That is, every time a packet arrives bad, it
is discarded, and the same packet is requested to be retransmitted.
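A minimal Python sketch of this conventional ARQ behavior (the 'channel' with a fixed error probability is an assumption just to make the example run; a real receiver would use a CRC check instead):

import random

def send_with_arq(packet_id, error_prob=0.3, rng=random.Random(1)):
    """Conventional ARQ: while the packet arrives with errors (NACK),
    discard it and retransmit the same packet until an ACK is received.
    Returns the number of transmissions used."""
    attempts = 0
    while True:
        attempts += 1
        arrived_ok = rng.random() > error_prob   # stands in for the CRC check
        if arrived_ok:
            print(f"packet {packet_id}: ACK after {attempts} transmission(s)")
            return attempts
        print(f"packet {packet_id}: NACK, discarding and retransmitting")

for pid in range(1, 4):
    send_with_arq(pid)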
But notice that if there were no retransmissions, the performance of the data flow would be
much better. In the example below, compared with the previous one, we transmit 3 times more
information in the same time interval.


Unfortunately we don't have much control over the link conditions. Or rather, we are able to
improve the link performance, for example with configuration parameter optimization, but
we will always be subject to adverse conditions. In this case, our only way out is to try to
minimize retransmissions.
And that's where other, more 'enhanced' retransmission techniques and schemes come in.
The main one is HARQ.

Hybrid ARQ (HARQ)
HARQ is the use of conventional ARQ along with an Error Correction technique called
'Soft Combining', which no longer discards the received bad data (with errors).
With 'Soft Combining', data packets that are not properly decoded are no longer discarded.
The received signal is stored in a 'buffer', and will be combined with the next
retransmission.
That is, two or more received packets, each one with insufficient SNR to allow individual
decoding, can be combined in such a way that the total signal can be decoded!
The following image explains this procedure. The transmitter sends a packet [1]. The
packet [1] arrives and is 'OK'. If the packet [1] is 'OK', the receiver sends an 'ACK'.


The transmission continues, and a packet [2] is sent. The packet [2] arrives, but let's
consider now that it arrives with errors. If the packet [2] arrives with errors, the receiver
sends a 'NACK'.


Only now this packet [2] (bad) is not thrown away, as is done in conventional ARQ. Now
it is stored in a 'buffer'.


Continuing, the transmitter sends another packet [2.1] that (let's consider) also arrives
with errors.


We then have in the buffer: the bad packet [2], and another packet [2.1] which is also bad.
By adding (combining) these two packets ([2] + [2.1]), do we have the complete
information?
Yes. So we send an 'ACK'.


But if the combination of these two packets still does not give us the complete
information, the process must continue - and another 'NACK' is sent.


And there we have another retransmission. Now the transmitter sends a third packet
[2.2].
Let's consider that now it is 'OK', and the receiver sends an 'ACK'.


Here we can see the following: along with the received packet [2.2], the receiver also has
packets [2] and [2.1], which have not been dropped and are stored in the buffer.
In our example, we see that the packet arrived 'wrong' 2 times. And what is the limit for
these retransmissions? Up to 4. That is, we can have up to 4 retransmissions in each process.
This is the maximum number supported by the 'buffer'.
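A minimal Python sketch of this buffering idea (purely illustrative: the 'combining' step here just accumulates a quality metric until it crosses a decoding threshold, standing in for real soft-bit combination; the SNR values and threshold are invented):

def harq_receive(received_snrs, decode_threshold=3.0, max_retx=4):
    """Hybrid ARQ with soft combining: bad receptions are not discarded
    but kept in a buffer; their 'energies' are combined until the total
    is enough to decode, or the retransmission limit is reached."""
    buffer_energy = 0.0
    for attempt, snr in enumerate(received_snrs, start=1):
        buffer_energy += snr                 # soft combining in the buffer
        if buffer_energy >= decode_threshold:
            print(f"decoded after {attempt} reception(s) -> ACK")
            return True
        if attempt > max_retx:
            print("retransmission limit reached -> failure")
            return False
        print(f"attempt {attempt}: not decodable yet -> NACK, keep in buffer")
    return False

# Individually none of these receptions is decodable (all below 3.0),
# but combined in the buffer they are:
harq_receive([1.2, 1.1, 1.0])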

Different HARQ Schemes
Going back a little to the case of conventional ARQ: whenever we send a packet and it
arrives with problems, it is discarded.
Taking the example above, when we send the packet [2], and it arrives with errors, it is
discarded. And this same packet [2] is sent again.
What happens is that we no longer have the concept of a 'packet version' - [2.1], [2.2], etc.
We do not have the 'redundancy version', or the gain we get in HARQ processing.
To understand this, we need to know that the information is divided as follows:
[Information + Redundancy + Redundancy]
When we transmit the packet [2] we are transmitting this:
[Information + Redundancy + Redundancy]
When we retransmit the same packet [2] we are retransmitting it all again:
[Information + Redundancy + Redundancy]

But when we use HARQ, and retransmit packet [2.1] or [2.2], we have the possibility of:
Either retransmitting the same information again;
Or retransmitting only the redundancy.
And then, if we retransmit less information (only redundancy), we spend less energy, and the
process runs much faster. With this we have a gain!
That is, we work with different 'redundancy versions', which allow us to have a gain in the
retransmission. This is called the 'Redundancy Version'.
The redundancy version, or HARQ scheme with 'Soft Combining', can be 'Chase Combining'
or 'Incremental Redundancy'.

HARQ Chase Combining

Chase Combining: when we combine the same information (the retransmission is an
identical copy of the original packet).
We transmit information, which arrives wrong, and we need to do a retransmission. We
retransmit the same information - and there we don't have much gain.

HARQ Incremental Redundancy

Incremental Redundancy: where we retransmit only the portion that we didn't
transmit before. Thus we retransmit less information. Less information means fewer bits,
less energy. And this gives a gain!
Redundancy bits are retransmitted gradually to the receiver, until an ACK is received.
With this, we adapt to changes in the link condition. The first retransmission can, for
example, contain redundancy bits or not. If necessary, a small number of these bits is
retransmitted. And so on.
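A minimal Python sketch contrasting the two schemes, only to show how much data is sent on each retransmission (the split of the packet into systematic bits plus two redundancy blocks, and their sizes, are assumptions of the example):

# Hypothetical packet: systematic (information) bits plus two redundancy blocks
packet = {"info": 100, "red1": 30, "red2": 30}   # sizes in bits

def chase_combining_retx():
    """Chase Combining: every retransmission is an identical copy
    of the original packet (information + all redundancy)."""
    return packet["info"] + packet["red1"] + packet["red2"]

def incremental_redundancy_retx(version):
    """Incremental Redundancy: each retransmission carries a different
    redundancy version, usually fewer bits than the original packet."""
    return packet["red1"] if version == 1 else packet["red2"]

print(chase_combining_retx())          # 160 bits sent again
print(incremental_redundancy_retx(1))  # only 30 bits sent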

Finishing for today: what are the 2 steps of HARQ? Why does it give me a
gain?
First, because from wrong packets 1 and 2 we can get a correct one, since we no longer discard
erroneous packets.
Second, because we can - also in the retransmission - send less information, and streamline the process.
The use of HARQ with 'Soft Combining' increases the effective received Eb/No value with each
retransmission, and therefore also increases the likelihood of correctly decoding the
retransmissions, in comparison to conventional ARQ.
We send a packet, and it arrives with errors: we keep this packet. We receive the
retransmission and then we add, or combine, both.

HARQ Processes (Case Study)
What we have seen so far clarifies the concepts involved. In practice, in retransmission, this
type of protocol is called 'Stop and Wait' (there are other kinds of similar protocols).
Which means: send the information and stop. Wait for the response before sending more
information. Send, wait for the response. Send, wait for the response ...


No! Not like that in practice. In practice, we work with a number of 'processes', which may vary,
for example, between 4, 6 or 8. The following image illustrates this more clearly.


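A minimal Python sketch of this idea (the number of processes, the slot count and the simple round-robin scheduling are assumptions of the example): while one process is stopped waiting for its ACK/NACK, the other processes keep using the link.

def schedule_harq(num_processes=4, num_slots=12):
    """Round-robin over N parallel stop-and-wait HARQ processes:
    in each transmission slot a different process sends (or resends)
    its packet, so the link never stays idle waiting for feedback."""
    for slot in range(num_slots):
        process = slot % num_processes
        print(f"slot {slot}: HARQ process {process} transmits, "
              f"then waits for ACK/NACK while the others use the link")

schedule_harq()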
Other types of HARQ
New schemes are constantly being developed and used, such as type III HARQ, which uses
self-decodable packets.
But going into these variations, terminologies and considerations is not the scope of our tutorial,
which was simply to introduce the concepts of Retransmission, ARQ and HARQ.
Based on the key concepts illustrated here today, you can extend your studies the way you
want; however, we believe that the most important thing has been achieved: understanding how it
works and what all the cited concepts are.

JAVA Applet
Below, you can see how some retransmission schemes work. There are several Applets
available for the many possibilities (ARQ, HARQ, with Sliding Windows, Selective, etc.).
The following is a link to a JAVA Applet that simulates a 'Selective Repeat Protocol'
transmission.
http://media.pearsoncmg.com/aw/aw_kurose_network_4/applets/SR/index.html



Conclusion
This was another tutorial on important topics for those who work with IT and Telecom: data
Transmission and Retransmission techniques, ARQ and HARQ.
ARQ is used for applications that tolerate a certain delay, such as Web Browsing and Streaming
Audio/Video. It is widely used in WiMAX and WiFi communication systems. However, it
cannot be used in Voice transmission, as for example in GSM.
HARQ, for example, is used in HSPA and LTE, and therefore must be a well-understood
concept for those who work or want to work with these technologies.
