
SDH Logical Question:

1. Why is a multiframe used in SDH?

Ans: A multiframe is a combination of 4 frames used to provide a meaningful POH.


Each frame carries 1 byte of POH, and the combination of these 4 frames is a multiframe. The
POH bytes are V5, J2, N2 and K4 respectively. You can find the use of each byte in any SDH
document.

The multiframe is used to reduce the overhead ratio for lower-order signals.

In addition, the multiframe is used for the convenience of rate adaptation. If an E1 (2 Mb/s)
signal has the standard rate of 2.048 Mb/s, each C-12 container accommodates a 256-bit (32-byte)
payload (2.048 Mb/s divided by 8000 frames/s). However, when the rate of the E1 signal is not
standard, the average number of bits accommodated in each C-12 is not an integer. In this case,
a multiframe of four C-12 frames is used to accommodate the signal.
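The arithmetic above can be sketched in a few lines (the off-nominal rate used below is just an illustrative value, not a standard one):

```python
# Bits of a 2.048 Mbit/s E1 carried per 125 us SDH frame (8000 frames/s).
E1_RATE_BPS = 2_048_000
FRAMES_PER_SEC = 8000

bits_per_frame = E1_RATE_BPS / FRAMES_PER_SEC   # 256.0 bits
bytes_per_frame = bits_per_frame / 8            # 32.0 bytes
print(bits_per_frame, bytes_per_frame)          # 256.0 32.0

# A non-standard E1 rate gives a non-integer byte count per frame,
# which is why four C-12 frames are grouped into a multiframe:
off_rate = 2_048_100                            # hypothetical off-nominal rate
print(off_rate / FRAMES_PER_SEC / 8)            # 32.0015625 -> not an integer
```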

2. ALS (automatic laser shutdown).


Ans: Automatic Laser Shutdown is a mechanism in which the TX laser is asked to shut down
when no power is being received at the optical sink.

Essentially this is the ideal behaviour when you are running redundant networks and would
like to stop the TX direction as well when there is no power at the RX.

Imagine it in the following way.

1. You have a pair of fibers between two nodes.


2. From Node A to Node B there is a fiber cut, but from Node B to Node A the fiber is OK.
3. In such a condition you will receive a LOS at NODE B.

If ALS is not enabled, then

1. You have LOS at NODE B.


2. This generates MS-AIS in the MS section.
3. The MS-AIS leads to an MS-RDI at NODE A.
4. The path-level AIS and corresponding RDIs also follow.

Now if ALS is enabled at NODE B, then

1. The moment NODE B receives a LOS it turns off the laser of NODE B.


2. This leads to a total link shutdown between NODE B and NODE A.

Why is ALS needed?

1. Understand that if you are transferring data for which an acknowledgement is also
required, then shutting off one path should shut the other path too.
2. That is, when the acknowledgement path is cut, the data-transfer path should also stop.
3. ALS is also used for a total link shutdown on signalling links.
4. It is also necessary for switching in both directions when you have SNCP (this is a very
good example of needing ALS).
Suppose you have SNCP-type protection. If one fiber of a pair is down, then the RX would
switch on one side only and not on the other. This is because an SNCP switch occurs only
at the sink. For the path to switch completely onto a different medium, the SNCP switch
should occur at both endpoints, and this is why you have ALS enabled: when one fiber of
the pair breaks, the other TX also shuts the link, and you get a switch at both ends.
The major question is how the mux recovers from an ALS scenario.

1. The TX sends a pulse of signal for 3 sec every 90 sec.


2. If the LOS is cleared on one side then the TX of that side is also started.
3. However, if the LOS is not cleared, the TX of that side does not fire either.

This situation persists until the LOS is cleared on both sides and the link is completely up.

Disadvantage of ALS:
Yes, there is one, but we usually ignore it. Look at this scenario: there is multicast traffic
going only from one source to many sinks through many legs.

If you enable ALS and there is a fiber loss in the reverse direction, then your forward
direction is also affected. You don't want this to happen in such cases.

Suppose you send multicast (unidirectional traffic) from A to B. The following happens:
when traffic is going from A to B, ALS is enabled, and the loss is only from B to A, then
ALS shuts the laser from A to B as well. This kills your unidirectional multicast, which
could easily have survived had ALS been disabled. So this is one disadvantage of
enabling ALS.

Also, in the case of unidirectional MSP 1+1 you always keep ALS disabled, as you want the
switching to take place in one direction only.

3. What is the difference between 4 X VC4 and VC4-4C?


Ans: The basic difference between the two is that of virtual concatenation versus contiguous
concatenation.

VC-4-4C is an example of contiguous concatenation. In this scheme the following happens.

4 VC-4s are combined into one signal, but these VC-4s need to be in sequence, either
1,2,3,4 or 5,6,7,8, and also need to share the same MS, i.e. the same physical port.
Remember this is contiguous concatenation, so the physical-port resources cannot differ,
and the alarm overheads are carried by the first VC-4.

4XVC-4 is an example of Virtual concatenation.

In this case 4 VC-4s are grouped, but they can be individually cross-connected to
different VC-4 pipes belonging to different physical paths. So you can take a
~600 Mb/s signal and bifurcate it into 4 different VC-4s on 4 diverse routes. This
technology is used most in Ethernet over SDH.

To enhance this you have LCAS, which is described very well in one of my earlier posts.

Ethernet rides on these concatenation schemes.

Ethernet (which is asynchronous in nature) needs to be encapsulated in GFP first in order
to run on this.

However, let us understand that this VCG can carry traffic of any type:

it may be Ethernet, FC, FICON or ATM, whatever you want.

You can say there is a kind of BW advantage in VC-4-4C; however, this is negligible
considering the following facts.

1. You always have to take VC members in sequence.


2. You cannot have diverse routing options.
3. There is less flexibility if you don't have members in sequence all along the path.

This is the reason contiguous concatenation is giving way to virtual concatenation.

4. MSP/MSPRING/SNCP
Ans:
1. MSP 1+1

In this there is a linear topology that is protected by another topology. The granularities
are STM-1/4/16/64. There is a link that is dedicatedly protected by another link; that is,
there is a dedicated section/link for every link. This protection scheme is triggered by
the K1 and K2 bytes.

2. MS-SPRING:

This is a protection scheme in which the traffic is more optimised for high-density cores.
Say there is an STM-16 ring. To have dedicated protection on this ring using MSP 1+1 over
all the spans you may need more hardware. In MS-SPRING this STM-16 ring is divided into an
8+8 combination: VC-4s 1-8 in each span work as main and VC-4s 9-16 in each span serve as
shared protection. Remember, in MS-SPRING the protection is shared. This allows you to use
16 x (N/2) protected VC-4s, where N is the number of nodes.
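The 16 x (N/2) rule quoted above can be written as a tiny helper (this simply restates the formula from the answer; it is not a general MS-SPRing capacity planner):

```python
# Protected VC-4 capacity of a two-fibre STM-16 MS-SPRing, per the
# 16 x (N/2) rule above (N = number of nodes, assumed even here).
def msspring_protected_vc4(n_nodes: int) -> int:
    return 16 * (n_nodes // 2)

for n in (2, 4, 6):
    print(n, "nodes ->", msspring_protected_vc4(n), "protected VC-4s")
```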

3. SNCP:

This is dedicated protection for each path. Just as MSP works at the MS level, SNCP works
at the path level. Each circuit, end to end, has its own dedicated protection path. The
bytes involved are K3 at higher order and K4 at lower order.
SNCP is a path-level protection that was designed to provide a protection scheme at
collector level for path-level eventualities (alarms). For a long time it was a great
consolation to users that there was now a protection scheme that would also respond to
path-level alarms/path-level AIS, i.e. TU-AIS and AU-AIS.

This traditional design of SNCP is termed SNCP-I (inherently monitored SNCP). In this case
SNCP triggers only on a path failure.

As time progressed we realised that at path level we can very well have problems other
than AIS. Among these are quality-level problems like DEG and EXC, to which SNCP-I does
not respond. This led to a change in the design of SNCP and the introduction of SNCP-N,
which stands for NON-INTRUSIVELY monitored SNCP. This scheme also responds to qualitative
alarms like EXC and DEG.

However, the question now is: what if one path has EXC and the other has DEG?

In such a case one should remember the priorities of the SNCP-N triggers.

1. AIS
2. EXC
3. DEG

So AIS gets top priority. If both paths have a problem, then the path with the least
serious problem carries the traffic.
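The selection rule above can be sketched as a small decision function (a hypothetical model of the behaviour described, not any vendor's actual implementation):

```python
# Sketch of the SNCP-N decision: each path reports its worst condition,
# and traffic stays on the path with the LEAST serious problem.
SEVERITY = {"OK": 0, "DEG": 1, "EXC": 2, "AIS": 3}  # AIS is most serious

def select_path(main_condition: str, protection_condition: str) -> str:
    # Pick whichever path has the lower-severity defect; ties favour main.
    if SEVERITY[main_condition] <= SEVERITY[protection_condition]:
        return "main"
    return "protection"

print(select_path("EXC", "DEG"))  # protection (DEG is less serious than EXC)
print(select_path("AIS", "EXC"))  # protection
print(select_path("OK", "OK"))    # main
```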

Shortcomings of SNCP-N and solution:

Most of the time SNCP-N is not preferred on access links. This is because there is always
some degree of qualitative degradation on both such links, which causes the links to
toggle between main and protection. To overcome this you should use a hold-off timer.

Just remember one thumb-rule. SNCP is always triggered at the drop point.

So say if I have two paths.

A-B-C-D-E-F-G (working)

A-J-K-I-L-G (Protection)

If there is a failure on C-D, then D has a LOS. This generates a TU-level AIS which is sent
along the path to G (the drop point), where the decision point switches the traffic.
Just remember one thing: a path-level AIS generated at any point along the path will
propagate to the end drop point, which will switch the traffic.
5. Difference between Holdover and free-running mode.
Ans: Hold-over: the equipment clock stores a sample of the in-use timing in its memory, and
when the primary source of synchronisation is lost, it uses this stored clock for
synchronisation.

Free-running: the equipment runs on its own clock. Generally, the NE runs in free-running
mode when it has just been commissioned and is in use for the first time.
As you probably know, every NE has a priority table for line-clock latching.

The NE actually latches to the clock of the highest quality and if there is a tie it looks for the
line with the highest priority.

As lines fail, the next line is selected on the basis of quality and priority.

In the event that all the lines towards that NE have failed and there is no way the NE can
latch to a line clock, the NE goes into holdover mode.

In holdover mode the NE maintains the quality of the last latched clock for the next 24
hrs. Holdover is essentially holding the quality level for a finite amount of time when
the reference is not coming in.

After the 24 hrs expire, if there is still no line clock in the picture, the NE is free to
synchronise itself with its internal oscillator. This is called free-running mode. In this
case the clock oscillator of the NE has no reference, so there is no feedback.

Free-running mode is an undesirable mode of operation for synchronisation and is always avoided.

7. What is a point-to-multipoint (P2MP) configuration in SDH, and how do you test it?


Ans: The first thing you should not do is apply hard loops or soft loops. Ethernet
circuits are prone to MAC moves (MAC duplication) when a hard/soft loop is present in the
circuit. The working of a P2MP service is totally based on MAC learning, even when the
VLAN is the same.

Remember, a P2MP service is created under the following conditions.

1. There is a hub.
2. There are many spokes.
3. You want unicast (interactive) communication between the hub and the spokes.
4. You want to treat each and every spoke as a separate broadcast domain.
5. However, you have the same VLAN for all the spokes and the hub.

In such a case VLAN-based separation is also ruled out, because you may have the same VLAN
in different spokes.
So what should you do?

The best way is to test via multiple streams keyed by MAC address.

Remember, the circuit may be P2MP, but once the MAC table is populated the communication
is always P2P from hub to spoke.

So let us consider a hub-and-spoke setup with one hub and 2 spokes.

1. MAC address of Hub = 00 00 00 00 00 0A


2. MAC address of Spoke -1 = 00 00 00 00 00 0B
3. MAC address of Spoke -2 = 00 00 00 00 00 0C

You connect one analyser to hub and another to spoke -1.

In the hub analyser you have to put the following things.

Source address = 00 00 00 00 00 0A
Dest address = 00 00 00 00 00 0B

In the spoke -1 analyser you have to put

Source address = 00 00 00 00 00 0B
Dest Address = 00 00 00 00 00 0A

Now first start the stream of the hub.

1. You will see that the stream actually reaches both spokes. (This is because the
destination address 00 00 00 00 00 0B is still not known or learnt by any switch.)

Now start the stream from Spoke -1.

At this instant you will see that any packet from spoke-1 reaches only the hub, and the
stream that was meant for spoke-1 from the hub goes only to spoke-1.

This is because, due to the reverse stream, the MAC address has been learnt and the traffic
is now unicast.
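The flood-then-learn behaviour in the analyser test above can be modelled with a toy switch (a minimal sketch of MAC learning, not a real bridge implementation):

```python
# Toy model of MAC learning in the hub-and-spoke test: frames with an
# unknown destination are flooded to every other port; once the reverse
# stream is seen, the destination is learnt and delivery becomes unicast.
class Switch:
    def __init__(self, ports):
        self.ports = ports          # e.g. ["hub", "spoke1", "spoke2"]
        self.mac_table = {}         # MAC -> port

    def forward(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port               # learn the source
        if dst_mac in self.mac_table:                   # known: unicast
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]  # unknown: flood

sw = Switch(["hub", "spoke1", "spoke2"])
HUB, S1 = "00:00:00:00:00:0A", "00:00:00:00:00:0B"

# Hub streams first: 0B is unknown, so the frame floods to both spokes.
print(sw.forward("hub", HUB, S1))      # ['spoke1', 'spoke2']
# Spoke-1 answers: 0A is already learnt, so it goes only to the hub.
print(sw.forward("spoke1", S1, HUB))   # ['hub']
# Hub streams again: 0B is now learnt -> unicast to spoke-1 only.
print(sw.forward("hub", HUB, S1))      # ['spoke1']
```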

What happens in the real scenario?


If you look at real scenarios, you have an L3 network of routers that are connected by
means of a metro Ethernet network.
The metro Ethernet network consists of such P2MP services.

Just as we did MAC learning from different streams of the analyser, the same thing happens
in the real scene too.

Keep in mind that for any L3 device to start sending traffic, it has to do an ARP (Address
Resolution Protocol) lookup. In the analyser you put the destination address manually; in a
real router, however, the destination MAC address placed in the MAC header of the frame
that leaves the router is decided by ARP.

Now, as ARP happens in the L3 network overlay, the underlying metro Ethernet network has
the MAC addresses resolved as a side effect.

This is the reason why, in spite of having the same VLANs, you will not have any cross-talk,
and the collision domains are broken.

2. What is NUT Configuration?

Ans: The concept of NUT (Non-preemptible Unprotected Traffic) comes up in MS-SPRING.

As I told you, in MS-SPRING you have AU-4s 1-8 as main, with shared protection carried by
AU-4s 9-16.

However, if you want some of the VC-4s in this ring to be free of the MS-SPRING
configuration, you configure them as NUT.

So if VC-4 number 5 is configured as NUT, then VC-4 no. 5 and VC-4 no. 13 do not
participate in MS-SPRING.

This is something like RAC on the Railways. When you board the train with RAC, two people
share one seat on a side-lower berth. This is what happens when a pair of VC-4s is
configured in MS-SPRING.

The moment there is a cancellation (in our case, moving an MS-SPRING-protected member to
NUT), two seats are confirmed. So in our case two VC-4s are freed for either unprotected
configuration or HO SNCP configuration.
3. Reason for LOF:
The term LOF means Loss Of Frame.

This happens when the A1 and A2 framing is not as expected.

For example, an STM-4 port expects a combination of 12 A1 (F6 hex) and 12 A2 (28 hex)
framing bytes, but an STM-1 can send only 3 A1 and 3 A2 bytes.
This is the reason you get an RS-LOF on the STM-4.

The consequent alarms on the mux at the STM-4 end are as follows.

1. MS-AIS.
2. HP-AIS (when a higher-order XC is present).
3. LP-AIS (when a higher-order termination and a lower-order XC are present).

On the STM-1 end.

1. MS-RDI ( For the MS-AIS).


2. HP-RDI (For the HP-AIS).
3. LP-RDI (For the LP - AIS).

4. UNEQ and Pointer Alarms
UNEQ is signalled by the payload (signal label) bits of the HO path overhead.

It can come in two flavours:

HP-UNEQ
LP-UNEQ

LOP stands for loss of pointer.

This happens when a valid pointer cannot be identified in the pointer bytes of the overhead.

This can be in two flavours

AU-LOP
and TU-LOP

Make a note of the following:

Any pointer-related alarm carries an AU or TU prefix.

But alarms like UNEQ, PLM and SLM, which are path related, carry a prefix of HO/HP or
LO/LP.

**Can somebody explain the function of the N1 and N2 bytes?


The N1/N2 bytes are used for Tandem Connection Monitoring (TCM). Their purpose is to show
where in the complete network/ring an error event actually occurred, not merely where it
was detected.

The N1 byte is used for tandem connection monitoring in big networks built from different
vendors. It is used for checking errors within a particular vendor's network section.
Follow the diagram.
1-----2------3-----4-----5-----6. This is one linear network.
Suppose nodes 1, 2, 5 and 6 belong to vendor A and nodes 3 and 4 belong to another vendor
B. Now errors occur in this network, and it cannot be determined where the errors were
generated. Vendor A needs to prove that its network does not have any problem. So what
needs to be checked? At node 1 (the source) the B3 value can be copied into the N1 byte and
both are transmitted; at node 2 (the sink) the B3 and N1 bytes are compared. If B3 has the
same value as N1, then A can say the 1--2 section has no problem, since the N1 byte value
does not change while the B3 byte value changes according to path errors. If there is some
difference, then obviously the 1--2 section generated errors. In the same way, A can also
check by making 5 a source and 6 a sink.
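The copy-and-compare idea above can be illustrated with BIP-8 parity (the real TCM protocol in N1 is richer than this; the function names and the simplified check here are illustrative only):

```python
# Illustrative sketch of the tandem-connection check: at the TC source the
# B3 (BIP-8) value is copied into N1; at the TC sink B3 is recomputed over
# the received payload and compared with the carried N1 value.
from functools import reduce

def bip8(payload: bytes) -> int:
    # BIP-8: XOR of all bytes, i.e. even parity over each bit position.
    return reduce(lambda a, b: a ^ b, payload, 0)

payload = bytes(range(100))
n1_at_source = bip8(payload)            # copied into N1 at node 1

# Undisturbed through the section 1->2: recomputed B3 matches N1.
assert bip8(payload) == n1_at_source    # section is clean

# A bit error introduced inside the tandem connection:
corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
print(bip8(corrupted) != n1_at_source)  # True -> errors arose within 1..2
```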

**What is Difference between Jitter and Wander?

Jitter or wander is a short-term variation of the significant instants of a digital signal
from their ideal positions in time. A significant instant is any convenient, easily
identifiable point on the signal, such as a rising or falling edge. If the frequency of
this phase modulation is 10 Hz or more, it is known as jitter; if it is less than 10 Hz,
it is known as wander.

Wander generally arises from low-frequency phase variations caused by sporadic pointer
movements; due to the low-pass characteristics of network elements it gets superimposed at
each level. This effect partly cancels out due to the synchronous nature of the network but
is amplified by superimposition. Wander of more than 18 us can cause slips.

Jitter is a high-frequency variation which happens when PDH signals are multiplexed into or
demultiplexed from SDH signals. To equalise these variations, buffers are used at the
receiver and transmitter ends. If jitter is not within acceptable limits, the sampling
circuitry can go awry and synchronisation is also affected. There are mapping jitter,
intrinsic jitter, pointer jitter and many more.

Both tests are carried out together because the signal is passed through a filter whose
lower frequency limit corresponds to wander and whose upper limit corresponds to jitter.

In short: phase variation above 10 Hz is called jitter;


below 10 Hz it is called wander.
**Can anyone help me understand the difference between blocking and non-blocking
cross-connects?

BLOCKING:
Suppose you have equipment with a 60G cross-connect capacity, but because of extra traffic
the demand increases to 80G; the system cannot take the extra traffic. This comes under
BLOCKING.

NON-BLOCKING:
Suppose your system has 60G capacity and can actually cross-connect the full 60G; this is
NON-BLOCKING.

You can also think of this as restricted versus unrestricted XC. Suppose you take equipment
with 60G XC capacity in which you can cross-connect at the VC-4, VC-3 and VC-12 levels up
to the complete utilisation of the 60G; this is called a non-blocking XC. Whereas in a
blocking XC system, if you take equipment with 60G XC capacity, you cannot use the entire
60G for VC-4, VC-3 and VC-12 cross-connections: it may be 20G for VC-4 XCs and 40G for
VC-3 and VC-12 cross-connections.

For example:

If you have any idea about ECI XDMs: there you can cross-connect the entire capacity at
VC-12/VC-3/VC-4, but in NEC, even though the capacity is 80G, you can drop VC-3/VC-12 at a
maximum of 30G and the rest is for AU-4.

*** If we have 4 nodes A, B, C, D, all protected by MSP protection,


simply tell me: can we do SNCP protection between A and B? If not, please tell me why;
if yes, also explain.

MSP protects the whole line (say STM-16),


while SNCP protects a tributary, say a VC-4.

Now, if a port is protected, the trib inside it is protected as well; in other terms, the
traffic through the port is protected.

The problem here is that if something goes wrong with the trib (an inserted AU-AIS defect
or B3 errors), the MSP will not notice the fault. So the line is in good condition while
the trib inside it is in bad condition.

Now if your trib is protected (SNCP), you have protection at the trib level as well.


I hope this gives you at least a basic idea!

It depends upon which vendor's product you are using: some vendors will not support SNCP if
MSP is provided for that port, but the likes of Marconi and NEC can configure SNCP together
with MSP links.
***Can anybody explain Oscillation guard time in SDH protection?

I never came across this as a standard term. There is, however, a Wait To Restore time
(WTR) used in protection switching.

It provides a selectable delay before switching back to the working path once the working
path is restored. The WTR bridge request is used to prevent frequent oscillation between
the protection channels and the working channels.

Consider a fiber break on a link, with traffic switched to the protection path. Once the
team reaches the site for splicing, the splice momentarily makes and breaks multiple
times, which would otherwise interrupt the link multiple times, each within the 50 ms
limit.

Or consider the case when one transceiver on a link is receiving power right at the
threshold and is frequently fluctuating to either side.

The intent is to minimise oscillations, since hits are incurred during each switch.

I have seen this in Nortel products.

**Can anyone send a link to a detailed description of EoS (Ethernet over SDH) alarms?

LOM: Loss of VCAT Multiframe Alignment


SQM: Sequence number mismatch
DDE, MND: Differential Delay Exceeded, Member not deskewable
PersCRC: Persistent CRC errors
LCR: Loss of capacity, Receive
LCT: Loss of capacity, Transmit
UnexMST: Unexpected Member Status
SQNC: Inconsistent sequence numbers
LMM: LCAS Mode Mismatch
LLC: Loss of LCAS capability
LFD: Loss of Frame delineation
CSF: Client Signal Failure
UPM: User Payload Mismatch
Extended Header Mismatch
PLCT: Partial loss of capacity, Transmit
PLCR: Partial loss of capacity, Receive
TLCT: Total loss of capacity, Transmit
TLCR: Total loss of capacity, Receive
SD: Signal Degrade, Receive
EER: Excessive Error Ratio, Receive
Link Down
AN Fail: Autonegotiation failed
Link integrity on
SD: Signal Degrade, Receive
EER: Excessive Error Ratio, Receive

**Can you explain FOPR and GIDM?

FOPR: LCAS Failure of Protocol (Receive)


GIDM: Group ID Mismatch

***Can anybody please tell me the reason for keeping the switching time < 50 ms? If it is
greater than that, what happens?

Earlier, all the traffic carried over SDH was voice traffic. A voice call will not drop
from the switch if the voice channel is interrupted for less than 50 ms. The idea was that
protection switching should not lead to drops of voice calls completing through that SDH
network.

***Please tell me the role of ALS (Automatic Laser Shutdown) in SDH as well as in DWDM
systems.

ALS (Automatic Laser Shutdown)

It works in two modes:


1:
Enabled mode
In this mode, when the fiber is cut the laser is in the off condition.

2:
Forced on
In this condition, even when the fiber is cut the laser keeps running (used for testing).
ALS, as the name suggests, is automatic shutdown of the laser. This is to prevent any
damage to the eyes or body during field maintenance.

ALS normal (or enabled) mode: whenever a LOS alarm is detected at either end, both ends
shut down the laser.
In simple language, if the link is not through, the equipment shuts down the laser and
then retries every 2 min or so until the link is through.

However, for testing you can use "laser forced on"; in this case the laser will remain on
no matter what the link status is.

Forced off: the laser remains off.

ALS also increases laser life, because with time the output power and signal strength from
the laser keep decreasing.

When ALS is enabled, during any OSP cut it shuts down the high output power transmitted by
the LD onto the line; however, the LD periodically sends a low-power signal to the other
end, and the same is done by the distant-end equipment. Whenever the APD receives this
low-power signal (upon restoration of the media) the laser automatically switches on.

Disabling ALS is never recommended, as it could damage the eyes or any other body part of a
person in the field attending an OSP cut.

***KLM

There are 63 virtual containers (VC-12s) multiplexed into an STM-1 frame; these are known
as "time slots". This multiplexing happens in stages to finally make the STM-1 frame:
three TU-12s are multiplexed to form a TUG-2; seven such TUG-2s are multiplexed into a
TUG-3; three such TUG-3s are multiplexed into a VC-4. KLM is a representation of these
three stages and can vary from 1-1-1 to 3-7-3, which makes 3x7x3 = 63 combinations.
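Enumerating the KLM addresses directly confirms the 63-slot count:

```python
# KLM "time slot" addresses: K = TUG-3 (1..3), L = TUG-2 (1..7),
# M = TU-12 (1..3), giving 3 * 7 * 3 = 63 VC-12 slots in an STM-1.
klm = [(k, l, m) for k in range(1, 4)
                 for l in range(1, 8)
                 for m in range(1, 4)]

print(len(klm))         # 63
print(klm[0], klm[-1])  # (1, 1, 1) (3, 7, 3)
```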

Numbering of TU-3s in a VC-4


Any TU-3 can be allocated a three-figure address of the form #K, #L, #M, where K designates
the TUG-3 number (1 to 3) and L and M are always 0. The location of the columns in the VC-4
occupied by TU-3(K, 0, 0) is given by the formula:
Xth column = 4 + (K - 1) + 3*(X - 1), for X = 1 to 86
Thus TU-3(1, 0, 0) resides in columns 4, 7, 10, ..., 259 of the VC-4, and TU-3(3, 0, 0)
resides in columns 6, 9, 12, ..., 261 of the VC-4.
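The column formula can be checked directly against the examples given:

```python
# Columns of the VC-4 occupied by TU-3(K, 0, 0):
# column(X) = 4 + (K - 1) + 3*(X - 1), for X = 1..86.
def tu3_columns(k: int) -> list:
    return [4 + (k - 1) + 3 * (x - 1) for x in range(1, 87)]

cols_k1 = tu3_columns(1)
cols_k3 = tu3_columns(3)
print(cols_k1[:3], cols_k1[-1])  # [4, 7, 10] 259
print(cols_k3[:3], cols_k3[-1])  # [6, 9, 12] 261
```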

*** Like M1 for B2, is there any similar mechanism for B1?

The B1 byte provides error performance monitoring across an individual regenerator section
and is calculated over all bits of the previous STM-N frame after scrambling. The computed
value is placed in the B1 byte before scrambling. Once an error is detected via the B1
byte, it only raises an alarm in that network element; the information is not forwarded
either upstream or downstream, so there is no M1-like REI mechanism for B1.

The B2 bytes provide error performance monitoring across an individual multiplex section
and are calculated over all bits of the previous STM-N frame except for the first three
rows of the SOH. The computed value is placed in the B2 bytes before scrambling. An alarm
is raised in the network element and MS-REI (Multiplex Section Remote Error Indication) is
sent upstream through the M1 byte.

The K2, G1 and V5 bytes send RDI upstream upon detecting MS-AIS (LOS, LOF), AU-AIS (H1, H2
LOP) and TU-AIS (V1, V2 LOP) alarms respectively.

*** How can we rectify an LP-PLM alarm in SDH networks?

Either make the signal label consistent throughout the path of the VC-12 trail, or open
each NE in the path and mask/inhibit the alarm.

LP-PLM denotes lower-order Payload Label Mismatch. The V5 byte in the path overhead
provides the signal label information in bits 5-7. There can be only 8 combinations, each
signifying a different type of signal label, shown below:

000 Unequipped or supervisory-unequipped


001 Equipped non-specific
010 Asynchronous
011 Bit synchronous
100 Byte synchronous
101 Reserved for future use
110 O.181 test signal (TSS4)
111 VC-AIS

Each time any payload is placed in a virtual container we assign a label to it which
indicates the type of payload; we keep it "asynchronous" for an E1 payload. At the
terminating point, if the expected payload label does not match the received payload
label, the LP-PLM alarm is raised by the NE. To remove this alarm, either ensure that both
are the same by changing the expected payload label, or mask the alarm if such a facility
exists in the equipment.

***What is the reason that MS-SPRING protection supports only 16 nodes?


And what is the advantage of this protection?

The ring APS protocol is carried in bytes K1 and K2 of the multiplex section overhead.
Each node on the ring is assigned an ID. The destination node ID is stored in bits 5-8 of
K1 and the source node ID in bits 1-4 of K2. We can make only 16 combinations with these
4 bits.
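The 4-bit limit can be shown with a little bit packing. Note the assumption here: SDH numbers bits 1 (MSB) to 8 (LSB), so K1 bits 5-8 are the low nibble and K2 bits 1-4 the high nibble; the helper name is illustrative only.

```python
# K1 bits 5-8 carry the destination node ID and K2 bits 1-4 the source
# node ID: 4 bits each, hence at most 2**4 = 16 ring nodes.
def pack_ids(dest: int, src: int, k1_request: int = 0, k2_rest: int = 0):
    assert 0 <= dest < 16 and 0 <= src < 16, "node IDs must fit in 4 bits"
    k1 = (k1_request << 4) | dest   # low nibble: destination node ID
    k2 = (src << 4) | k2_rest       # high nibble: source node ID
    return k1, k2

k1, k2 = pack_ids(dest=5, src=12)
print(k1 & 0x0F, (k2 >> 4) & 0x0F)  # 5 12
print(2 ** 4)                       # 16 -> why MS-SPRing stops at 16 nodes
```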

***What is the difference between the LOS and LOF alarms in SDH? Please tell me.

As such, LOS is when you switch off the laser of the port: the complete bandwidth you were
expecting is missing.

LOF is Loss Of Frame. Suppose your port capacity is STM-16 and you send an STM-4 signal to
this port; it will show LOF because it always expects STM-16, and if it receives anything
other than STM-16 it assumes that the frame is lost. This is just one instance of LOF.

LOS: when the incoming power level at the receiver has dropped to a level which corresponds
to a high error condition (say a BER of 10^-3). It could be due to a cut cable, excessive
attenuation of the signal, or an equipment fault.
The SDH frame is detected by the A1 and A2 bytes, which indicate the start of the frame.
When bit errors increase excessively, they can corrupt the A1 and A2 bytes along with the
other bytes of the frame. In such a situation it is not possible to detect the start of the
frame. If a correct A1/A2 byte pattern is not detected within 625 microsec, OOF (Out Of
Frame) is declared. If the OOF state persists for 3 ms, a Loss Of Frame (LOF) state is
declared. It clears when two consecutive frames are received with valid A1/A2 bytes.
Generally, when we slowly attenuate the input optical power, we can see the LOF alarm
immediately before LOS. If STM-16 equipment were to verify all the A1 and A2 bytes it would
take more time, so this is implemented on a sampling basis by checking only some of the A1
and A2 bytes out of the 48. Similarly, the test equipment used to inject an LOF alarm also
corrupts only some of the bytes: the ANT20 corrupts only the first (of 48) A2 and the last
(of 48) A1 while injecting LOF. If the equipment's algorithm to detect LOF does not overlap
with that of the test equipment, one cannot inject LOF at all!
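The OOF/LOF timing rules described above can be sketched as a tiny state machine (a simplified model assuming one all-or-nothing framing check per 125 us frame, which real hardware does not do exactly):

```python
# Sketch of the LOF declaration rules: OOF after 625 us without a valid
# A1/A2 pattern, LOF if OOF persists for 3 ms, cleared by two consecutive
# good frames. One STM frame = 125 us.
FRAME_US = 125

def framing_state(frames_valid: list) -> str:
    state, bad_us, good_in_row = "IN_FRAME", 0, 0
    for ok in frames_valid:
        if ok:
            good_in_row += 1
            if good_in_row >= 2:       # two consecutive valid frames clear
                state, bad_us = "IN_FRAME", 0
        else:
            good_in_row = 0
            bad_us += FRAME_US
            if bad_us >= 3000:         # OOF persisted for 3 ms -> LOF
                state = "LOF"
            elif bad_us >= 625:        # no valid A1/A2 for 625 us -> OOF
                state = "OOF"
    return state

print(framing_state([False] * 5))                  # OOF  (625 us of bad frames)
print(framing_state([False] * 24))                 # LOF  (3 ms of bad frames)
print(framing_state([False] * 24 + [True, True]))  # IN_FRAME (recovered)
```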
** Dear all, please help me to understand negative and positive pointer justification.

NEGATIVE justification:
When clock 1 > clock 2, i.e. the rate of the incoming STM-1 is higher than the capacity of
the outgoing STM-1, additional bytes (the H3 bytes) are used to increase the capacity of
the outgoing STM-1.
POSITIVE justification:
When clock 1 < clock 2, i.e. the rate of the incoming STM-1 is slower than the capacity of
the outgoing STM-1, additional stuffing bytes must be used in the outgoing STM-1 to reduce
its useful capacity.

Pointer justification accommodates the phase and frequency differences of the virtual containers with
respect to the SDH frame.
In the section overhead there are the H1, H2 and H3 bytes, which perform all these manipulations. The pointer
points to the start of the payload container by registering the location of the first byte of the virtual
container, which is the J1 byte.
Of the 16 bits of the H1 and H2 bytes, 10 bits hold this address, which can vary from 0 to 782, where 0
means the J1 byte of the VC starts immediately after the H3 bytes in the payload section.
When the data rate of the VC is slower, one byte immediately after H3 is stuffed, and the VC has its J1
byte immediately after the stuffed byte. To point to this new location the pointer increments by 1; this is
known as positive pointer justification.
When the data rate of the VC is faster, H3 accommodates the extra byte, and the VC has its J1 byte
immediately after the H2 byte. To point to this new location the pointer decrements by 1; this is known
as negative pointer justification.
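The increment/decrement behaviour above can be sketched as modular arithmetic over the 783 valid AU-4 pointer offsets (0 to 782). The function name is illustrative only:

```python
# Sketch of the AU-4 pointer arithmetic described above.
# The 10-bit pointer addresses 783 offsets (0..782); each
# justification event moves it by one, wrapping modulo 783.

POINTER_RANGE = 783  # valid AU-4 pointer values: 0..782

def adjust_pointer(pointer: int, vc_rate_slower: bool) -> int:
    """Return the new pointer after one justification event.

    vc_rate_slower=True  -> positive justification (stuff byte, pointer + 1)
    vc_rate_slower=False -> negative justification (H3 carries data, pointer - 1)
    """
    if vc_rate_slower:
        return (pointer + 1) % POINTER_RANGE
    return (pointer - 1) % POINTER_RANGE
```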

** STM-1 has one B1 byte and STM-4 also has one B1 byte. Why?
STM-1 has 3 B2 bytes, but STM-4 has 12 B2 bytes.

Why is the B1 byte not multiplied by 4?

The B1 byte is in the regenerator section and the B2 bytes provide multiplex section error monitoring. B1
is a single BIP-8 parity byte computed over all the bytes of the previous frame, no matter how large the
frame, so one B1 is enough at any rate. B2 is calculated over the multiplex section overhead and payload
of the previous frame before scrambling, and is allocated per STS-1 equivalent: each STS-1 carries its
own B2. Since an STM-1 is equivalent to three STS-1s and an STM-4 to twelve, we require 3 B2 bytes in an
STM-1 frame and 12 B2 bytes in an STM-4 frame.
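The parity itself is a bit-interleaved parity (BIP-8) check: bit i of the parity byte gives even parity over bit i of every covered byte, which for 8-bit words reduces to an XOR over all bytes. A minimal sketch (the sample bytes are arbitrary, not a real frame):

```python
# Sketch of BIP-8 even parity as used by B1 (and, per STS-1, by B2).
# Bit i of the result makes even parity over bit i of every byte
# covered by the check, i.e. a plain XOR over all bytes.

def bip8(data: bytes) -> int:
    """XOR of all bytes: bit i is even parity over bit i of every byte."""
    parity = 0
    for b in data:
        parity ^= b
    return parity

frame = bytes([0xF6, 0x28, 0x01])   # toy example, not a real frame
checksum = bip8(frame)
# the receiver recomputes the parity and compares; appending the
# checksum to the covered bytes makes the total XOR come out zero
assert bip8(frame + bytes([checksum])) == 0
```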
*** Can anyone tell me what is the need to make a ring in an SDH network (apart from
MSP)? Because even without a ring, protection will work via SNCP.

Path-level protection and ring-level protection are altogether different entities. Which kind of
protection is needed depends entirely on the planning. Suppose you have a customer and you are
giving services to that customer on KLM 111. Then, depending on the need of your customer, you either
protect that service or simply give your customer an unprotected service on KLM 111. Protection
at this level can certainly be achieved by the means you are suggesting, but what would you do if you
need to protect the most important advantage of SDH, i.e. the information contained in the RSOH and
MSOH? In order to protect this high-level information you have to go ahead with ring-level protection.
As the name itself signifies, the SNCP mechanism gives protection only at the path level, i.e. at the
KLM and J level. However, if you want the entire STM-1 frame protected, then you have to go ahead with
ring-level protection techniques.

Also, I would like to add to what I replied earlier: SNCP is typically given in metros and ring
protection is defined at the backbone level of a network, so provisioning of circuits becomes easier if
you define ring-level protection at the backbone. To explain this more clearly: in SNCP you have to
manually define the working path and protection path for each KLM and J, which is feasible at the
metro level where circuits are few in number. But what will you do at the backbone level of your
network, where there are lakhs of circuits to provision? In SNCP you would have to manually define a
protection path for each customer circuit that needs protection, whereas with ring-level protection
you just define the protection once on the ring and your ADMs automatically protect the traffic for you.

You can also activate MS-SPRing in the ring, thereby enabling it to carry extra traffic between the nodes. An
MS-SPRing can carry traffic up to (number of nodes / 2) × STM-N, whereas if you are using SNCP protection
the system (linear or ring) will carry only up to STM-N.
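The capacity comparison above can be put into numbers. A rough sketch, counting capacity in VC-4 equivalents and assuming a 2-fibre MS-SPRing:

```python
# Sketch of the capacity comparison above, in VC-4 equivalents.
# Assumption: "STM-N capacity" is counted as N x VC-4.

def ms_spring_capacity(nodes: int, stm_n: int) -> int:
    """Working capacity of a 2-fibre MS-SPRing: (nodes / 2) * STM-N."""
    return (nodes // 2) * stm_n

def sncp_capacity(stm_n: int) -> int:
    """An SNCP-protected system carries at most one STM-N of traffic."""
    return stm_n

# A 6-node STM-16 ring:
print(ms_spring_capacity(6, 16))  # 48 VC-4s with MS-SPRing
print(sncp_capacity(16))          # 16 VC-4s with SNCP
```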

*** Dear friends, what is OSNR, and what is the major difference between
CWDM, WDM and DWDM? If anyone has an idea, please guide me. Thanks.

OSNR - optical signal-to-noise ratio.

Reflection, refraction or diffraction of light in the fibre, together with dispersion, contributes noise in
the optical domain. The higher the OSNR, the lower the effective losses and the more reliable the medium.

If the spacing between wavelengths is very wide it is WDM (supports up to 4 lambdas); moderate
spacing is CWDM (supports up to 16 lambdas); narrow spacing is DWDM (supports up to 80 lambdas).

OSNR:
Optical signal-to-noise ratio. This test is done when new equipment is installed, and the
statement given above is true.
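As a formula, OSNR is simply the signal-to-noise power ratio expressed in decibels. A minimal sketch (powers are assumed to be in the same linear units, with the noise measured in a reference bandwidth, commonly 0.1 nm):

```python
import math

# Sketch: OSNR in dB from signal and noise powers (same linear
# units, e.g. mW). The noise power is assumed to be measured in a
# reference bandwidth, commonly 0.1 nm.

def osnr_db(p_signal: float, p_noise: float) -> float:
    """OSNR(dB) = 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(p_signal / p_noise)

print(osnr_db(1.0, 0.001))  # about 30 dB, a healthy received OSNR
```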
WDM, CWDM and DWDM:

When channel spacing is > 200 GHz it is called CWDM (the ITU-T CWDM grid actually uses a 20 nm
spacing, roughly 2500 GHz near 1550 nm).
When channel spacing is > 100 GHz it is called WDM.
When channel spacing is < 100 GHz it is called DWDM.

DWDM band classification:
C band (1530 nm to 1565 nm, a 35 nm window)
L band (1565 nm to 1610 nm, a 45 nm window)
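The rough spacing thresholds quoted above can be written as a small classifier. The 200 GHz and 100 GHz cut-offs are taken from the text and are only a rule of thumb:

```python
# Sketch of the rough channel-spacing classification given above.
# Assumption: the 200 GHz and 100 GHz thresholds from the text.

def classify_wdm(spacing_ghz: float) -> str:
    """Classify a multiplexing scheme by its channel spacing in GHz."""
    if spacing_ghz > 200:
        return "CWDM"
    if spacing_ghz > 100:
        return "WDM"
    return "DWDM"

# The ITU-T CWDM grid uses 20 nm spacing (~2500 GHz near 1550 nm):
print(classify_wdm(2500))  # CWDM
print(classify_wdm(100))   # DWDM (typical 100 GHz DWDM grid)
```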

WDM:
Mixing 2 channels per fibre. This type of multiplexing can be used to increase network capacity where
traffic is low; with the help of WDM you can also increase the distance between the central office and
the subscriber, which is difficult with copper cable.

CWDM:
When 4 or 8 channels are mixed per fibre it is called CWDM.

DWDM:
When we mix a larger number of channels in one fibre (16, 32, 64 or more).
DWDM is used for long-haul and ultra-long-haul links which connect metro networks.

** What is the difference between SNCP, MS-SPRing, UPSR and BLSR? Do all of these work on 2
fibres or 4 fibres?

SNCP and UPSR are the same thing: one is the SDH term and the other the SONET term. This is a
trib-level (path-level) protection scheme.

MS-SPRing and BLSR are the same thing: one is the SDH term and the other the SONET term, and it has
two different forms of the protocol: 2-fibre (2F) and 4-fibre (4F).

*** Hi, can anybody tell me how DWDM can be protected?

There is no inherent protection mechanism defined in the standards for DWDM. In any case, since the SDH
traffic being carried already has its own protection schemes, protection at the DWDM layer is not
required and would prove very costly. If protection is still needed, one can have two lambdas
terminating between the same nodes over diverse routes.

** Can you please tell me what is the difference between AUG-3 and TUG-3

There is nothing called an AUG-3; there is, however, a TUG-3. What are they?
ITU G.707 clearly explains that:
An Administrative Unit is the information structure which provides adaptation between the higher
order path layer and the multiplex section layer. It consists of an information payload (the higher
order Virtual Container) and an Administrative Unit pointer which indicates the offset of the payload
frame start relative to the multiplex section frame start.
Two Administrative Units are defined. The AU-4 consists of a VC-4 plus an Administrative Unit
pointer which indicates the phase alignment of the VC-4 with respect to the STM-N frame. The AU-3
consists of a VC-3 plus an Administrative Unit pointer which indicates the phase alignment of the VC-
3 with respect to the STM-N frame. In each case the Administrative Unit pointer location is fixed
with respect to the STM-N frame.
One or more Administrative Units occupying fixed, defined positions in an STM payload are termed
an Administrative Unit Group (AUG).
Further,
A Tributary Unit is an information structure which provides adaptation between the lower order
path layer and the higher order path layer. It consists of an information payload (the lower order
Virtual Container) and a Tributary Unit pointer which indicates the offset of the payload frame start
relative to the higher order Virtual Container frame start.
The TU-n (n=1, 2, 3) consists of a VC-n together with a Tributary Unit pointer.
One or more Tributary Units, occupying fixed, defined positions in a higher order VC-n payload is
termed a Tributary Unit Group (TUG). TUGs are defined in such a way that mixed capacity payloads
made up of different size Tributary Units can be constructed to increase flexibility of the transport
network.
A TUG-2 consists of a homogeneous assembly of identical TU-1s or a TU-2.
A TUG-3 consists of a homogeneous assembly of TUG-2s or a TU-3.
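Following the AU-4 route of the G.707 multiplexing structure, these groupings determine how many E1 tributaries fit in an STM-N. A quick sketch of the arithmetic:

```python
# Sketch of the G.707 multiplexing structure via the AU-4 route:
# STM-N -> N x AU-4 -> VC-4 -> 3 x TUG-3 -> 7 x TUG-2 -> 3 x TU-12 (E1)

TUG3_PER_VC4 = 3
TUG2_PER_TUG3 = 7
TU12_PER_TUG2 = 3

def e1_per_stm(n: int) -> int:
    """Number of E1 (2 Mb/s) tributaries an STM-N can carry via TU-12."""
    return n * TUG3_PER_VC4 * TUG2_PER_TUG3 * TU12_PER_TUG2

print(e1_per_stm(1))   # 63 E1s in an STM-1
print(e1_per_stm(4))   # 252 E1s in an STM-4
```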

** What is the difference between a terminal multiplexer and an ADM?

A terminal multiplexer sits at the end of a link and drops (terminates) all the tributary signals at
that terminal point.

In the case of an ADM, we add and drop only some specific channels as per customer requirement, and
the remaining traffic passes through.

** Why do we use these specific bands?

We use the C and L bands because in these bands PMD and chromatic dispersion (CD) are very low, they
match the fibre's transmission properties, and they have very low optical loss.

** What is the significance of the H4 byte in the VCAT and LCAS context?

The H4 byte is used for multiframe generation. In higher-order VCAT it carries the multiframe
indicator (MFI) and the sequence number (SQ) of each member of the virtual concatenation group, and
LCAS uses the same H4 multiframe to carry its control packets.

** Tell about Mux and OADMs

Mux: the function of a multiplexer is to combine many transponder signals for transmission to another
location.

OADM: Optical Add & Drop Multiplexer. OADMs are used where, as per customer requirement, some
traffic is dropped and some traffic is added while the rest passes through. (I am currently using an
NEC OADM in which we can add and drop up to 6 transponders.)

** What is the difference between Muxponder and Multiplexer?

** Which band does DWDM use?

DWDM is used in two bands:

1: C band, range 1530.31 to 1562.23 nm

2: L band, range 1572.08 to 1608.32 nm

** VCAT significance?

Virtual Concatenation is a standardized Layer-1 inverse multiplexing technique that can be
applied to Optical Transport Network (OTN), SONET, SDH and PDH component signals. By inverse
multiplexing, it bonds multiple links at a particular layer into an aggregate link to achieve a
commensurate increase in the available bandwidth of the aggregate link.