
University for Information Science and Technology "St. Paul The Apostle"

Subject: Congestion in Data Networks

Network Architecture

Mentor: Atanas Hristov
Student: Nikola Despotoski

Table of Contents:
1. Definition
2. Effects of Congestion
3. Congestion Control
   3.1. Backpressure
   3.2. Choke Packet
   3.3. Implicit Congestion Signaling
   3.4. Explicit Congestion Signaling
4. Traffic Management
   4.1. Fairness
   4.2. Quality of Service (QoS)
5. Congestion Control in Packet-Switching Networks

1. Definition
Congestion occurs when the number of packets being transmitted through a network begins to approach the packet-handling capacity of the network. The objective of congestion control is to maintain the number of packets within the network below the level at which performance falls off dramatically. In data networking and queuing theory, network congestion occurs when a link or node is carrying so much data that its quality of service deteriorates. Typical effects include queuing delay, packet loss, or the blocking of new connections. A consequence of the latter two is that incremental increases in offered load lead either to only small increases in network throughput or to an actual reduction in network throughput.

2. Effects of Congestion
Let us consider a packet switch or router, as shown in the figure below:

Each node has a number of I/O ports attached to it: one or more to other nodes, and zero or more to end systems, where packets arrive and depart. Each port has buffers (queues) of some size that handle the arrival and departure of packets. As packets arrive, they are stored in the input buffer of the corresponding port. The node examines each incoming packet, makes a routing decision, and then moves the packet to the appropriate output buffer. Packets queued for output are transmitted as rapidly as possible; if the next node cannot process them fast enough, packets continue to arrive until there is no memory available and the queue fills up with waiting packets. When this happens, one of two general practices can be adopted. The first is to discard the packets that the buffer cannot accommodate (insufficient memory space). The second is to exercise flow control and push the load back to neighboring nodes so that the traffic remains manageable. Consider the figure below and the scenario in which congestion can occur:

Figure 1.2

Scenario: If node 6 experiences congestion in the flow of packets from node 5, the output buffer of node 5 for the port to node 6 fills up. Congestion at this point can therefore propagate throughout a region or even the entire network.
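To make the first practice concrete, here is a minimal Python sketch of an output port with a bounded buffer: once the buffer is full, further packets are simply discarded. The class and names (OutputPort, enqueue, capacity) are illustrative, not part of any real router software.

    from collections import deque

    class OutputPort:
        """Minimal sketch of an output buffer on a packet switch (names are illustrative)."""

        def __init__(self, capacity):
            self.capacity = capacity          # maximum packets the buffer can hold
            self.queue = deque()              # packets waiting to be transmitted

        def enqueue(self, packet):
            """Return True if the packet is buffered, False if it must be discarded."""
            if len(self.queue) >= self.capacity:
                return False                  # no memory available: discard (first practice)
            self.queue.append(packet)
            return True

    port = OutputPort(capacity=3)
    for i in range(5):
        accepted = port.enqueue(f"pkt-{i}")
        print(f"pkt-{i}: {'buffered' if accepted else 'discarded'}")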

3. Congestion Control
Four congestion control techniques are presented here:
- Backpressure
- Choke packet
- Implicit congestion signaling
- Explicit congestion signaling

3.1 Backpressure

This technique produces an effect similar to backpressure in fluids flowing down a pipe. When the end of the pipe is closed, the fluid backs up toward the origin. Looking back at Figure 1.2, when node 6 becomes congested (its buffers fill up), it can slow down or halt the flow of all packets from node 5 (or node 3, or both). If the restriction persists, node 5 will in turn need to slow down or halt traffic on its own incoming links. This flow restriction propagates backward (against the flow of the data traffic) to the sources, which are then restricted in the flow of new packets into the network. Backpressure is of limited utility because it can only be applied to logical connections that operate hop by hop (from one node to the next).
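The following toy sketch illustrates the backward propagation of the restriction: when a node's buffer fills, it halts its upstream neighbor. The Node class and the halted flag are invented for illustration; real switches signal this with link-level flow control rather than a Python attribute.

    class Node:
        """Toy node that halts its upstream neighbor when its buffer fills (backpressure sketch)."""

        def __init__(self, name, capacity, upstream=None):
            self.name = name
            self.capacity = capacity
            self.buffer = []
            self.upstream = upstream          # the node whose traffic we can restrict
            self.halted = False               # True while this node is told to stop sending

        def receive(self, packet):
            if self.halted:
                return
            self.buffer.append(packet)
            if len(self.buffer) >= self.capacity and self.upstream:
                # Congested: propagate the restriction backward, against the data flow.
                print(f"{self.name} congested -> halting {self.upstream.name}")
                self.upstream.halted = True

    # Node numbering follows Figure 1.2: traffic flows from node 5 into node 6.
    node5 = Node("node5", capacity=2)
    node6 = Node("node6", capacity=2, upstream=node5)

    for i in range(3):
        node6.receive(f"pkt-{i}")
    print("node5 halted:", node5.halted)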

3.2 Choke Packet

A choke packet is used in network maintenance and quality management to inform a specific node or transmitter that its transmitted traffic is creating congestion over the network. This forces the node or transmitter to reduce its output rate.

Choke packets are used for congestion and flow control over a network. The congested router addresses the source node directly, forcing it to decrease its sending rate. The source node acknowledges this by reducing its sending rate by some percentage.

An Internet Control Message Protocol (ICMP) source quench packet is a type of choke packet normally used by routers.
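As a rough illustration of the idea (not of the actual ICMP message format), the sketch below shows a sender that cuts its rate each time it is told it is causing congestion. The 50% reduction and the class name Source are assumptions made for the example.

    class Source:
        """Illustrative sender that cuts its rate when it receives a choke packet."""

        def __init__(self, rate_pps, reduction=0.5):
            self.rate_pps = rate_pps          # current sending rate, packets per second
            self.reduction = reduction        # fraction kept after a choke packet (assumed 50%)

        def on_choke_packet(self):
            # The router addressed us directly: acknowledge by reducing the sending rate.
            self.rate_pps *= self.reduction
            print(f"choke packet received, new rate = {self.rate_pps:.0f} pps")

    src = Source(rate_pps=1000)
    src.on_choke_packet()   # e.g. triggered by an ICMP source-quench-style message
    src.on_choke_packet()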

Internet Control Message Protocol (ICMP) is a TCP/IP network layer protocol that provides troubleshooting, control and error message services. ICMP is most frequently used in operating systems for networked computers, where it transmits error messages.

ICMP for Internet Protocol version 4 is called ICMPv4 and for Internet Protocol version 6 is called ICMPv6.

3.3 Implicit Congestion Signaling

When network congestion occurs, two things may happen:

1. The transmission delay for an individual packet from source to destination increases.
2. Packets are discarded.

If a source can detect increased delays and packet discards, it has implicit evidence of network congestion. If all sources can detect congestion and, in response, reduce their flow, the congestion in the network will be relieved. Thus, this form of control relies entirely on signaling handled by the end systems. Implicit congestion signaling is an effective congestion control technique in connectionless (datagram) packet-switching networks and IP-based internets. In such cases there are no logical connections inside the network on which the flow can be regulated; instead, end-to-end connections are typically carried over TCP, which allows the flow between the communicating end systems to be controlled and segmented.
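Below is a minimal sketch of implicit signaling, assuming the sender keeps a window of packets in flight and treats a delay spike or a loss as evidence of congestion. This is only in the spirit of what TCP does; the threshold and window arithmetic are invented for illustration.

    class ImplicitSender:
        """Sketch of a sender that infers congestion from the delay and loss it observes itself."""

        def __init__(self, window, baseline_delay):
            self.window = window                  # packets allowed in flight
            self.baseline_delay = baseline_delay  # delay seen when the network is lightly loaded

        def on_ack(self, measured_delay, lost):
            # Rising delay or a discarded packet is taken as implicit evidence of congestion.
            if lost or measured_delay > 2 * self.baseline_delay:   # threshold is an assumption
                self.window = max(1, self.window // 2)             # back off
            else:
                self.window += 1                                   # probe for more capacity

    sender = ImplicitSender(window=8, baseline_delay=0.020)
    sender.on_ack(measured_delay=0.025, lost=False)   # normal: grow the window
    sender.on_ack(measured_delay=0.090, lost=False)   # delay spike: back off
    sender.on_ack(measured_delay=0.030, lost=True)    # loss: back off again
    print("window:", sender.window)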

3.4 Explicit Congestion Signaling

It is desirable to use as much of the available capacity in a network as possible while still reacting to congestion in a controlled manner. This is the purpose of explicit congestion signaling. In general terms, for explicit congestion avoidance the network alerts end systems to growing congestion within the network, and the end systems take steps to reduce the offered load. Explicit congestion signaling can work in one of two directions:

- Backward: Notifies the source that congestion avoidance procedures should be initiated, where applicable, for traffic in the opposite direction of the received notification. This means that packets transmitted on this logical connection may encounter congestion. Backward information can be sent by altering header bits or by sending separate control packets.
- Forward: Notifies the user that congestion avoidance procedures should be initiated, where applicable, for traffic in the same direction as the received notification. It indicates that this packet, on this logical connection, has encountered congestion. Again, the information can be sent by altering header bits or by sending separate control packets.

Explicit congestion signaling approaches can be divided into three general categories:

- Binary: A bit is set in a data packet as it is forwarded by the congested node. When a source receives a binary indication of congestion on a logical connection, it may reduce its traffic flow.
- Credit based: These schemes provide explicit credit to a source over a logical connection. The credit indicates how many octets or how many packets the source may transmit. When the credit is exhausted, the source must wait for additional credit before sending more data. Credit-based schemes are common in end-to-end flow control, in which a destination system uses credit to keep the source from overflowing its buffers.

- Rate based: These schemes provide an explicit data rate limit to the source over a logical connection. The source may transmit data at a rate up to the set limit. To control congestion, any node along the path of the connection can reduce the rate limit in a control message sent to the source.
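As a small sketch of the credit-based category, the snippet below lets the source send only while it holds explicit credit granted by the network or the destination. The CreditSender class and its methods are hypothetical names used for illustration.

    class CreditSender:
        """Sketch of credit-based signaling: send only while explicit credit remains."""

        def __init__(self):
            self.credits = 0                  # packets we are currently allowed to send

        def grant(self, credits):
            self.credits += credits           # explicit credit received over the logical connection

        def send(self, packet):
            if self.credits == 0:
                print(f"{packet}: waiting for credit")
                return False
            self.credits -= 1
            print(f"{packet}: sent ({self.credits} credits left)")
            return True

    s = CreditSender()
    s.grant(2)
    for i in range(3):
        s.send(f"pkt-{i}")    # the third packet must wait until new credit arrives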

4. Traffic Management

Congestion control is concerned with the efficient use of a network at high load. Several other considerations can refine how congestion control techniques and the discard policy are applied.

4.1 Fairness

As congestion develops, delay increases, and as congestion becomes severe, more packets are lost. In the absence of other requirements, we would like the various flows to suffer equally; simply discarding packets on a last-in, first-discarded basis may not be fair. As an example of a fairness mechanism, a node can maintain a separate queue for each logical connection or for each source-destination pair. If all of the queue buffers are of equal length, then the queues with the highest traffic load will suffer discards more often, so that lightly loaded connections still receive their fair share of the capacity.
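A sketch of this fairness mechanism, assuming one bounded queue per source-destination pair: only the flow that overfills its own queue loses packets. FairBuffer and the flow identifiers are illustrative names.

    from collections import deque

    class FairBuffer:
        """Sketch of fairness via per-connection queues of equal length."""

        def __init__(self, per_queue_capacity):
            self.capacity = per_queue_capacity
            self.queues = {}                  # one queue per source-destination pair

        def enqueue(self, flow_id, packet):
            q = self.queues.setdefault(flow_id, deque())
            if len(q) >= self.capacity:
                return False                  # only the heavily loaded flow loses packets
            q.append(packet)
            return True

    buf = FairBuffer(per_queue_capacity=2)
    for i in range(4):
        buf.enqueue("A->B", f"a{i}")          # heavy flow: a2 and a3 are discarded
    buf.enqueue("C->D", "c0")                 # lightly loaded flow still gets its share
    print({flow: len(q) for flow, q in buf.queues.items()})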

4.2 Quality of Service (QoS)

It is important during periods of congestion that traffic flows with different requirements be treated differently and provided a different quality of service (QoS).

For example, a node might transmit higher-priority packets ahead of lower-priority packets in the same queue, or it might maintain different queues for different QoS levels and give preferential treatment to the higher levels.

Reservations: One way to avoid congestion, and also to provide QoS, is to use a reservation scheme. Such a scheme is an integral part of ATM (Asynchronous Transfer Mode) networks. When a logical connection is established, the network and the user enter into a traffic contract, which specifies a data rate and other characteristics of the traffic flow. The network delivers the agreed QoS as long as the limits of the contract are not exceeded; excess traffic is discarded. If the current level of reservations is such that the network resources cannot meet the new request, the new reservation is denied. All characteristics of the contract are monitored throughout the transmission and, as mentioned before, excess traffic is discarded or delayed.
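Below is a sketch of the priority idea, using Python's heapq so that higher-priority (lower-numbered) packets are always transmitted first. The priority values and packet names are illustrative.

    import heapq

    class PriorityScheduler:
        """Sketch of QoS by priority: lower numbers are served first (0 = highest priority)."""

        def __init__(self):
            self.heap = []
            self.seq = 0                      # tie-breaker keeps FIFO order within a priority level

        def enqueue(self, priority, packet):
            heapq.heappush(self.heap, (priority, self.seq, packet))
            self.seq += 1

        def transmit(self):
            return heapq.heappop(self.heap)[2] if self.heap else None

    sched = PriorityScheduler()
    sched.enqueue(2, "bulk-1")
    sched.enqueue(0, "voice-1")               # higher-priority packet is transmitted ahead
    sched.enqueue(2, "bulk-2")
    print([sched.transmit() for _ in range(3)])   # ['voice-1', 'bulk-1', 'bulk-2']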

5. Congestion Control in Packet-Switching Networks


A number of mechanisms have been suggested for congestion control in packet-switching networks:

1. Send a control packet from a congested node to some or all source nodes. This choke packet has the effect of stopping or slowing the rate of transmission from the sources and hence limits the total number of packets in the network. The approach requires additional traffic on the network during a period of congestion.
2. Rely on routing information. Routing algorithms can provide delay information to other nodes; however, because these delays are influenced by the routing decisions themselves, they may vary too rapidly to be used effectively for congestion control.
3. Make use of an end-to-end probe packet. Such a packet could be timestamped to measure the delay between two endpoints (see the sketch after this list). The disadvantage is that this adds overhead to the network.

4. Allow packet-switching nodes to add congestion information to packets as they pass by. There are two possible approaches here: a node can add such information to packets traveling in the direction opposite to the congestion (backward), so that it reaches the source directly, or to packets traveling in the same direction as the congestion (forward), so that it reaches the destination. In either case, the source must eventually adjust its load based on this information.
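As a sketch of the probe-packet idea (mechanism 3 above), the snippet below timestamps a probe at the source and reads the timestamp at the destination. The sleep call stands in for the network path, and measuring one-way delay this way assumes the two clocks are synchronized; both are assumptions made for the example.

    import time

    def send_probe(path_delay_s):
        """Sketch of an end-to-end probe: timestamp at the source, read it at the destination."""
        sent_at = time.time()                 # timestamp carried inside the probe packet
        time.sleep(path_delay_s)              # stand-in for the network path (assumption)
        received_at = time.time()
        return received_at - sent_at          # measured one-way delay (clocks assumed synchronized)

    delay = send_probe(path_delay_s=0.05)
    print(f"measured end-to-end delay: {delay * 1000:.1f} ms")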
