
A Novel Congestion-Avoidance Mechanism for the Internet

Parikshit Singla, Pankaj Kapoor and Rama Chawla
Department of Computer Science and Engineering, Doon Valley Institute of Engineering and Technology, Karnal 132001, Haryana

Abstract
The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse and unfairness created by applications that are unresponsive to network congestion. To address these maladies, we propose and investigate a novel congestion-avoidance mechanism called Congestion Free Router (CFR). CFR entails the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network, thereby preventing congestion within the network.

Keywords: Internet, Congestion, Router, Congestion Control Algorithms

1. Introduction
Congestion control is concerned with regulating network traffic to prevent congestive collapse: it tries to avoid unfair allocation of the network's processing and link capacity and takes corrective steps, such as reducing the rate at which packets are sent. The end-to-end congestion control mechanisms of TCP have been a critical factor in the robustness of the Internet. However, the Internet is no longer a small, closely connected user community, and it is no longer practical to rely on all end-nodes to use end-to-end congestion control for best-effort traffic. Similarly, it is no longer possible to rely on all developers to incorporate end-to-end congestion control in their Internet applications. The network itself must now participate in controlling its own resource utilization.

The Internet's excellent scalability and robustness result in part from the end-to-end nature of Internet congestion control. End-to-end congestion control algorithms alone, however, are unable to prevent the congestion collapse [1] and unfairness [3] created by applications that are unresponsive to network congestion. The fundamental philosophy behind the Internet is expressed by the scalability argument: no protocol, mechanism, or service should be introduced into the Internet if it does not scale well. A key corollary to the scalability argument is the end-to-end argument: to maintain scalability, algorithmic complexity should be pushed to the edges of the network whenever possible.

As a result of its strict adherence to end-to-end congestion control, the current Internet suffers from two maladies: congestion collapse from undelivered packets and unfair allocation of bandwidth between competing traffic flows. The first malady, congestion collapse from undelivered packets, arises when packets that are dropped before reaching their ultimate destinations continually consume bandwidth [2]. The second malady, unfair bandwidth allocation to competing network flows, arises in the Internet for a variety of reasons, one of which is the existence of applications that do not respond properly to congestion. Adaptive applications (e.g., TCP-based applications) that respond to congestion by rapidly reducing their transmission rates are likely to receive unfairly small bandwidth allocations when competing with unresponsive applications. The Internet protocols themselves can also introduce unfairness. The TCP algorithm, for instance, inherently causes each TCP flow to receive bandwidth inversely proportional to its round-trip time [4]. Hence, TCP connections with short round-trip times may receive unfairly large allocations of network bandwidth compared to connections with longer round-trip times. The impact of emerging streaming media traffic on traditional data traffic is also of growing concern in the Internet community. Streaming media traffic is unresponsive to congestion in a network, and it can aggravate congestion collapse and unfair bandwidth allocation.

2. Working of CFR
To address the maladies of congestion collapse, we introduce and investigate a novel Internet traffic control protocol called Congestion Free Router (CFR). The basic principle of CFR is to compare, at the borders of a network, the rates at which packets from each application flow are entering and leaving the network. If a flow's packets are entering the network faster than they are leaving it, then the network is likely buffering or, worse yet, discarding the flow's packets. In other words, the network is receiving more packets than it is capable of handling. CFR prevents this scenario by patrolling the network's borders, ensuring that each flow's packets do not enter the network at a rate greater than they are able to leave it. This patrolling prevents congestion collapse from undelivered packets, because an unresponsive flow's otherwise undeliverable packets never enter the network in the first place.

Although CFR is capable of preventing congestion collapse and improving the fairness of bandwidth allocations, these improvements do not come for free. CFR solves these problems at the expense of some additional network complexity, since routers at the border of the network are expected to monitor and control the rates of individual flows. CFR also introduces added communication overhead, since in order for an edge router to know the rate at which its packets are leaving the network, it must exchange feedback with other edge routers. Unlike some existing approaches to solving congestion collapse, however, CFR's added complexity is isolated to edge routers; routers within the core of the network do not participate in the prevention of congestion collapse. Moreover, end systems operate in total ignorance of the fact that CFR is implemented in the network, so no changes to transport protocols are necessary at end systems. A data-flow-oriented method is shown below to provide a systematic approach to the description of the program structure.
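As a rough illustration of this border principle, the per-flow check that an edge router could apply is sketched below in Python; the flow table, rate fields, and function name are illustrative assumptions rather than structures defined in this paper.

def flow_needs_restriction(flow):
    # A flow whose packets enter the network faster than they leave it is
    # being buffered or discarded inside the network, so the InRouter router
    # should restrict it at the border (illustrative check only).
    return flow["in_rate"] > flow["out_rate"]

# Hypothetical per-flow rates, in bits per second, learned through the
# feedback exchanged between edge routers.
flows = {
    "flow-A": {"in_rate": 8.0e6, "out_rate": 5.0e6},
    "flow-B": {"in_rate": 1.0e6, "out_rate": 1.0e6},
}
for name, flow in flows.items():
    if flow_needs_restriction(flow):
        print(name, "is entering faster than it is leaving; restrict it")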

This approach consists of five stages, as discussed below.

SOURCE
The task of this stage is to send the packet to the InRouter router.

Process Description:
Data is sent in the form of packets. Input data entities: the message to be transmitted from the source to the destination node, in the form of a packet carrying the IP address for identification. Algorithm: not applicable. Output: a formatted packet with the information required for communication between the source and destination nodes.

INROUTER ROUTER
An edge router operating on a flow passing into a network is called an InRouter router. CFR prevents congestion collapse through a combination of per-flow rate monitoring at OutRouter routers and per-flow rate control at InRouter routers. Rate control allows an InRouter router to police the rate at which each flow's packets enter the network. An InRouter router contains a flow classifier, per-flow traffic shapers (e.g., leaky buckets), a feedback controller, and a rate controller.

Process Description:
Input parameters are data packets from the source machine and backward feedback from the router. Output parameters are data packets and forward feedback. Rate control and the leaky bucket algorithm are used to rank the nodes in the network. Input data entities: the values that determine the rate of the packets. Algorithm: leaky bucket. Output: every node in the network is assigned a unique rank.

ROUTER
The task of this stage is to accept packets from the InRouter router and send them to the OutRouter router. Input parameters are data packets from the InRouter machine, forward feedback from the router or InRouter router, backward feedback from the router or OutRouter router, and the hop count. Output parameters are data packets, forward feedback, an incremented hop count, and backward feedback.

Process Description:
Input entities: data received from neighboring nodes, which is transferred on to other neighboring nodes. Algorithm: not applicable. Output: packets transferred to neighboring nodes.
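To make the exchanged information concrete, the forward and backward feedback referred to in these stages can be pictured as simple records, sketched below in Python. Only the timestamp, the list of flows, the per-flow OutRouter rates, and the hop count are taken from the text; the exact field layout and names are assumptions.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ForwardFeedback:
    timestamp: float            # stamped by the InRouter router when sent
    flow_ids: List[str]         # flows currently passing through this InRouter
    hopcount: int = 0           # incremented by each router along the path

@dataclass
class BackwardFeedback:
    timestamp: float            # echoed from the corresponding forward feedback
    # Monitored OutRouter rate (e.g., in bits per second) for each flow id.
    out_router_rates: Dict[str, float] = field(default_factory=dict)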


OUTROUTER ROUTER
An edge router operating on a flow passing out of a network is called an OutRouter router. CFR prevents congestion collapse through a combination of per-flow rate monitoring at OutRouter routers and per-flow rate control at InRouter routers. Rate monitoring allows an OutRouter router to determine how rapidly each flow's packets are leaving the network. Rates are monitored using a rate estimation algorithm such as the Time Sliding Window (TSW) algorithm. An OutRouter router contains a flow classifier, a rate monitor, and a feedback controller. Input parameters are data packets from the router and forward feedback from the router. Output parameters are data packets and backward feedback.

Process Description:
The time sliding window and rate monitoring algorithms are used to rank the nodes in the network. Input data entities: the values that determine the rate of the packet flow in the network. Algorithm: time sliding window and rate monitoring. Output: packets are sent on to the destination.

DESTINATION
The task of this stage is to accept packets from the OutRouter router and store them in a file on the destination machine. A message received from the OutRouter router is stored in the corresponding folder as a text file, named according to the source machine.

Process Description:
Packets are received from the neighboring nodes. Input data entities: the message received from the OutRouter router at the destination node, in the form of packets carrying the IP address. Algorithm: not applicable. Output: formatted packets with the information required for communication between the source and destination nodes.

Leaky Bucket Algorithm

The "leaky bucket" algorithm is key to defining the meaning of conformance. The leaky bucket analogy refers to a bucket with a hole in the bottom that causes it to "leak" at a certain rate corresponding to a traffic cell rate parameter; the "depth" of the bucket corresponds to a tolerance parameter. Each cell arrival creates a "cup" of fluid that is "poured" into one or more buckets for use in conformance checking. The Cell Loss Priority (CLP) bit in the ATM cell header determines which bucket(s) the cell arrival fluid pours into. The algorithm is called a "dual leaky bucket" if several parameters are monitored at once, or a "single leaky bucket" if only one parameter is monitored. In the leaky bucket analogy the cells do not actually flow through the bucket; only the check for conformance to the contract does.

In this algorithm, packets are assumed to be of fixed length. A counter records the content of the leaky bucket. When a packet arrives, the value of the counter is incremented by some value I, provided that the content of the bucket would not exceed a certain limit; in this case the packet is declared to be conforming. If the content would exceed the limit, the counter remains unchanged and the packet is declared to be nonconforming. The value I typically indicates the nominal inter-arrival time of the packets being policed (typically, in units of packet time). As long as the bucket is not empty, it drains at a continuous rate of one unit per packet time. The figure below shows the leaky bucket algorithm that can be used to police the traffic flow.

At the arrival of the first packet, the content of the bucket X is set to zero and the last conforming time (LCT) is set to the arrival time of the first packet. The depth of the bucket is L + I, where L depends on the maximum burst size; if the traffic is expected to be bursty, the value of L should be made large. At the arrival of the kth packet, an auxiliary variable records the difference between the bucket content at the arrival of the last conforming packet and the inter-arrival time between the last conforming packet and the kth packet; the auxiliary variable is constrained to be nonnegative. If the auxiliary variable is greater than L, the packet is considered nonconforming. Otherwise, the packet is conforming, and the bucket content and the arrival time of the packet are then updated. If the counter exceeds a certain value, the cells are assumed not to conform to the contract; non-conforming cells can then either be tagged or dropped.
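A minimal Python sketch of this conformance check is given below, assuming fixed-length packets and a continuous-time counter; X, I, L, and LCT follow the description above, while the class structure, parameter values, and tagging policy are illustrative.

class LeakyBucket:
    """Continuous-state leaky bucket conformance check, as described above."""

    def __init__(self, increment, limit):
        self.I = increment   # nominal inter-arrival time added per conforming packet
        self.L = limit       # burst tolerance; the bucket depth is L + I
        self.X = 0.0         # current bucket content
        self.lct = None      # last conforming time (LCT)

    def is_conforming(self, arrival_time):
        if self.lct is None:
            # First packet: X starts at zero and LCT at its arrival time.
            self.lct = arrival_time
        # Drain the bucket by the time elapsed since the last conforming
        # packet, never letting the content go negative.
        x_prime = max(0.0, self.X - (arrival_time - self.lct))
        if x_prime > self.L:
            # The bucket would overflow: declare the packet nonconforming and
            # leave the state unchanged (the packet may be tagged or dropped).
            return False
        self.X = x_prime + self.I
        self.lct = arrival_time
        return True

# Example: policing roughly one packet per time unit with a burst tolerance of 3.
bucket = LeakyBucket(increment=1.0, limit=3.0)
print([bucket.is_conforming(t) for t in (0.0, 0.2, 0.4, 0.6, 0.8, 5.0)])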

Rate Control Algorithm


The CFR rate-control algorithm regulates the rate at which each flow is allowed to enter the network. Its primary goal is to converge on a set of per-flow transmission rates (hereinafter called InRouter rates) that prevents congestion collapse due to undelivered packets. It also attempts to lead the network to a state of maximum link utilization and low router buffer occupancies, and it does this in a manner similar to TCP.

In the CFR rate-control algorithm, shown in the figure below, a flow may be in one of two phases, slow start or congestion avoidance, which are similar to the phases of TCP congestion control. The desirable stability characteristics of the slow-start and congestion-avoidance algorithms have been proven in TCP congestion control, and CFR is expected to benefit from these well-known stability properties. In CFR, a new flow entering the network starts in the slow-start phase and proceeds to the congestion-avoidance phase only after the flow has experienced incipient congestion.

The rate-control algorithm is invoked whenever a backward feedback packet arrives at an InRouter router. Recall that backward feedback packets contain a timestamp and a list of flows arriving at the OutRouter router from the InRouter router, as well as the monitored OutRouter rates for each flow. Upon the arrival of a backward feedback packet, the algorithm calculates the current round-trip time (currentRTT in Fig. 6) between the edge routers and updates the base round-trip time (e.baseRTT), if necessary. The base round-trip time (e.baseRTT) reflects the best-observed round-trip time between the two edge routers. The algorithm then calculates deltaRTT, the difference between the current round-trip time (currentRTT) and the base round-trip time (e.baseRTT). A deltaRTT value greater than zero indicates that packets are taking longer to traverse the network than they once did, and this can only be due to the buffering of packets within the network.

CFR's rate-control algorithm decides that a flow is experiencing incipient congestion whenever it estimates that the network has buffered the equivalent of more than one of the flow's packets at each router hop. To do this, the algorithm first computes the product of the flow's InRouter rate (f.InRouterRate) and deltaRTT (i.e., f.InRouterRate × deltaRTT). This value provides an estimate of the amount of the flow's data that is buffered somewhere in the network. If this amount (i.e., f.InRouterRate × deltaRTT) is greater than the number of router hops between the InRouter and the OutRouter routers (e.hopcount) multiplied by the size of the largest possible packet (MSS) (i.e., MSS × e.hopcount), then the flow is considered to be experiencing incipient congestion.
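Expressed as code, the detection test reads as follows (a Python sketch using the paper's symbols; the units only need to be consistent, e.g., rates in bytes per second and deltaRTT in seconds):

def is_incipiently_congested(in_router_rate, delta_rtt, mss, hopcount):
    # More than one maximum-size packet's worth of the flow's data buffered
    # per router hop signals incipient congestion.
    return in_router_rate * delta_rtt > mss * hopcount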

The rationale for determining incipient congestion in this manner is to maintain both high link utilization and low queueing delay. Ensuring that there is always at least one packet buffered for transmission on a network link is the simplest way to achieve full utilization of the link, and deciding that congestion exists when more than one packet is buffered at the link keeps queueing delays low. Therefore, CFR's rate-control algorithm allows the equivalent of e.hopcount packets to be buffered along a flow's path before it reacts to congestion by monitoring deltaRTT; a similar approach is used in the DECbit congestion-avoidance mechanism. Furthermore, the approach used by CFR's rate-control algorithm to detect congestion, by estimating whether the network has buffered the equivalent of more than one of the flow's packets at each router hop, has the advantage that, when congestion occurs, flows with higher InRouter rates detect congestion first. This is because the condition f.InRouterRate × deltaRTT > MSS × e.hopcount is satisfied first for flows with a large InRouter rate, so these flows are the first to detect that the path is congested. When the rate-control algorithm determines that a flow is not experiencing congestion, it increases the flow's InRouter rate. If the flow is in the slow-start phase, its InRouter rate is doubled for each round-trip time that has elapsed since the last backward feedback packet arrived. The estimated number of round-trip times since the last feedback packet arrived is denoted RTTsElapsed. Doubling the InRouter rate during slow start allows a new flow to rapidly capture available bandwidth when the network is underutilized.

If, on the other hand, the flow is in the congestion-avoidance phase, then its InRouter rate is conservatively incremented by one rate quantum for each round-trip time that has elapsed since the last backward feedback packet arrived (f.InRouterRate += rateQuantum × RTTsElapsed). This is done to avoid creating congestion. The rate quantum is computed as the maximum segment size divided by the current round-trip time between the edge routers. This results in rate growth behavior similar to that of TCP in its congestion-avoidance phase. Furthermore, the rate quantum is not allowed to exceed the flow's current OutRouter rate divided by a constant quantum factor (QF). This guarantees that rate increments are not excessively large when the round-trip time is small.

When the rate-control algorithm determines that a flow is experiencing incipient congestion, it reduces the flow's InRouter rate. If a flow is in the slow-start phase, it enters the congestion-avoidance phase. If a flow is already in the congestion-avoidance phase, its InRouter rate is reduced to the flow's OutRouter rate decremented by a constant value. In other words, an observation of incipient congestion forces the InRouter router to send the flow's packets into the network at a rate slightly lower than the rate at which they are leaving the network.

CFR's rate-control algorithm is designed to have minimal impact on TCP flows. The rate at which CFR regulates each flow (f.InRouterRate) is primarily a function of the round-trip time between the flow's InRouter and OutRouter routers (currentRTT). In CFR, the initial InRouter rate for a new flow is set to MSS/e.baseRTT, following TCP's initial rate of one segment per round-trip time. CFR's currentRTT is always smaller than TCP's end-to-end round-trip time, since the distance between the InRouter and OutRouter routers is shorter than the end-to-end path. As a result, f.InRouterRate is normally larger than TCP's transmission rate when the network is not congested, since the TCP transmission window increases more slowly than CFR's f.InRouterRate. Therefore, CFR normally does not regulate TCP flows. However, when congestion occurs, CFR reacts first by reducing f.InRouterRate and, therefore, the rate at which TCP packets are allowed to enter the network. TCP eventually detects the congestion (either by losing packets or by observing longer round-trip times) and then promptly reduces its transmission rate. From this point on, f.InRouterRate is greater than TCP's transmission rate, and therefore CFR's congestion control does not regulate TCP sources until congestion occurs again.
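The rate adjustment described above can be summarized in the following Python sketch. Variable names mirror the paper's notation (f.InRouterRate, f.OutRouterRate, e.baseRTT, e.hopcount, MSS, QF, RTTsElapsed), and the feedback object is assumed to look like the BackwardFeedback record sketched earlier; the exponential form used for fractional RTTsElapsed during slow start and the constant decrement are illustrative assumptions.

MSS = 1500            # maximum segment size in bytes (assumed value)
QF = 4                # quantum factor limiting rate increments (assumed value)
RATE_DECREMENT = MSS  # constant decrement applied on congestion (assumed value)

def on_backward_feedback(e, feedback, now):
    """Update per-flow InRouter rates when a backward feedback packet arrives.

    e holds per-edge state (base_rtt, hopcount, flows, last_feedback_time);
    feedback carries the echoed timestamp and per-flow OutRouter rates.
    """
    current_rtt = now - feedback.timestamp
    e.base_rtt = min(e.base_rtt, current_rtt)          # best-observed RTT
    delta_rtt = current_rtt - e.base_rtt               # added queueing delay
    rtts_elapsed = (now - e.last_feedback_time) / current_rtt
    e.last_feedback_time = now

    for flow_id, out_rate in feedback.out_router_rates.items():
        f = e.flows[flow_id]
        # Incipient congestion: more than one packet buffered per router hop.
        congested = f.in_router_rate * delta_rtt > MSS * e.hopcount
        if not congested:
            if f.phase == "slow_start":
                # Double the InRouter rate for each elapsed round-trip time.
                f.in_router_rate *= 2 ** rtts_elapsed
            else:
                # Congestion avoidance: add one rate quantum per elapsed RTT,
                # capped by the OutRouter rate divided by the quantum factor.
                quantum = min(MSS / current_rtt, out_rate / QF)
                f.in_router_rate += quantum * rtts_elapsed
        else:
            if f.phase == "slow_start":
                f.phase = "congestion_avoidance"
            else:
                # Enter the network slightly slower than packets are leaving it.
                f.in_router_rate = out_rate - RATE_DECREMENT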

Time Sliding Window Algorithm


The sliding window serves several purposes: it guarantees the reliable and timely delivery of data, it ensures that the data is delivered in order, and it enforces flow control between the sender and the receiver.
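For completeness, a time-sliding-window (TSW) rate estimator of the kind cited for OutRouter rate monitoring can be sketched in Python as below. The update rule follows the commonly published TSW scheme; the window length, units, and class layout are illustrative rather than taken from this paper.

class TSWRateEstimator:
    """Time Sliding Window rate estimate, updated on each packet arrival."""

    def __init__(self, window_length, initial_rate=0.0):
        self.win_length = window_length   # averaging window, in seconds
        self.avg_rate = initial_rate      # estimated rate, in bytes per second
        self.t_front = 0.0                # time of the most recent update

    def update(self, packet_size, now):
        # Bytes notionally still inside the window, plus the new arrival,
        # averaged over the window extended by the time since the last packet.
        bytes_in_window = self.avg_rate * self.win_length + packet_size
        self.avg_rate = bytes_in_window / (now - self.t_front + self.win_length)
        self.t_front = now
        return self.avg_rate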

Simulation results show that CFR successfully prevents congestion collapse from undelivered packets. They also show that, while CFR is unable to eliminate unfairness on its own, it can achieve approximate global max-min fairness for competing network flows when combined with ECSFQ, and it does so in a completely core-stateless fashion.

3. Conclusion
In this paper, we have presented a novel congestion-avoidance mechanism for the Internet called CFR, together with an ECSFQ mechanism. Unlike existing Internet congestion control approaches, which rely solely on end-to-end control, CFR is able to prevent congestion collapse from undelivered packets. ECSFQ complements CFR by providing fair bandwidth allocations in a core-stateless fashion. CFR ensures at the border of the network that each flow's packets do not enter the network faster than they are able to leave it, while ECSFQ ensures, at the core of the network, that flows transmitting at a rate lower than their fair share experience no congestion, i.e., low network queuing delay. This allows the transmission rates of all flows to converge to the network fair share. CFR requires no modifications to core routers or to end systems: only edge routers are enhanced so that they can perform the requisite per-flow monitoring, per-flow rate control, and feedback exchange operations, while ECSFQ requires only a simple core-stateless modification to core routers.

REFERENCES
[1] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang, "Recommendations on Queue Management and Congestion Avoidance in the Internet," RFC 2309, Apr. 1998 (Informational).
[2] S. Floyd and K. Fall, "Promoting the Use of End-to-End Congestion Control in the Internet," IEEE/ACM Transactions on Networking, vol. 7, no. 4, pp. 458-472, Aug. 1999.
[3] E. Chandra and B. Subramani, "A Survey on Congestion Control," Global Journal of Computer Science and Society, vol. 9, issue 5 (ver. 2.0), p. 82, Jan. 2010.
[4] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose, "Modeling TCP Throughput: A Simple Model and Its Empirical Validation," in Proc. ACM SIGCOMM, Sept. 1998, pp. 303-314.
[5] D. Clark and W. Fang, "Explicit Allocation of Best Effort Packet Delivery Service," IEEE/ACM Transactions on Networking, vol. 6, no. 4, pp. 362-373, Aug. 1998.
[6] UCI Network Research Group, "Network Border Patrol (NBP)," http://netresearch.ics.uci.edu/nbp/, 1999.
[7] A. Habib and B. Bhargava, "Unresponsive Flow Detection and Control in Differentiated Services Networks," presented at the 13th IASTED Int. Conf. on Parallel and Distributed Computing and Systems, Aug. 2001.
[8] S. Floyd and K. Fall, "Router Mechanisms to Support End-to-End Congestion Control," unpublished manuscript, 1997.
[9] V. Jacobson, "Congestion Avoidance and Control," in Proc. ACM SIGCOMM, 1988, pp. 314-329.
