
Performance Evaluation and Comparison of Reno and Vegas TCP Congestion Control

Abstract

Throughput and delay are the two primary parameters used to measure network performance. TCP congestion control has been designed to ensure Internet stability along with fair and efficient allocation of network bandwidth. During the last decade, many congestion control algorithms have been proposed to improve on the classic Tahoe/Reno TCP congestion control. This project evaluates and compares two of these algorithms, TCP Reno and TCP Vegas, using ns-2.35 simulations. Simulation scenarios are carefully designed to investigate the throughput and fairness provided by each algorithm. Results show that TCP Vegas clearly outperforms TCP Reno.

Introduction

We are to implement a network fed with TCP traffic and background traffic. The objective of this project is to compare and evaluate the performance of the Reno and Vegas TCP congestion control algorithms.

The reference network model is shown below in Figure 1. It is a simple network consisting of 7 nodes (node 2 and node 5 can also be seen as routers). The queue size of the link between node 2 and node 5 is limited to 12 packets, and all other links have a queue limit of 50 packets.
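As a rough illustration, such a topology can be built in an ns-2 Tcl script as sketched below. The link bandwidths and delays are not specified in the assignment, so the values used here are only placeholder assumptions.

# Topology sketch in Tcl/OTcl for ns-2.35.
# NOTE: link bandwidths and delays are assumed placeholders, not taken from the assignment.
set ns [new Simulator]

# Create the 7 nodes n(0)..n(6); n(2) and n(5) act as routers.
for {set i 0} {$i < 7} {incr i} {
    set n($i) [$ns node]
}

# Access links (assumed 10Mb/10ms) and the bottleneck link between node 2 and node 5.
$ns duplex-link $n(0) $n(2) 10Mb 10ms DropTail
$ns duplex-link $n(1) $n(2) 10Mb 10ms DropTail
$ns duplex-link $n(3) $n(2) 10Mb 10ms DropTail
$ns duplex-link $n(2) $n(5) 1Mb  20ms DropTail
$ns duplex-link $n(5) $n(4) 10Mb 10ms DropTail
$ns duplex-link $n(5) $n(6) 10Mb 10ms DropTail

# Queue limit of 12 packets on the bottleneck link; the remaining links are set to 50 in the same way.
$ns queue-limit $n(2) $n(5) 12
$ns queue-limit $n(5) $n(2) 12
$ns queue-limit $n(1) $n(2) 50
$ns queue-limit $n(5) $n(6) 50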

Create FTP traffic on top of a TCP connection between node 1 and node 6 (TCP Reno and Vegas). The TCP connection's maximum congestion window size is 40 packets, and the packet size is 280 bytes. Make sure the minimum timeout period of the TCP is set to 0.2 seconds. The FTP starts at 0.0 seconds and ends at 15.0 seconds.
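A minimal Tcl sketch of this step, assuming the node array n() from the topology sketch above (replace Agent/TCP/Reno with Agent/TCP/Vegas for the Vegas run):

# TCP connection between node 1 and node 6 with FTP on top.
set tcp [new Agent/TCP/Reno]
$tcp set window_     40      ;# maximum congestion window: 40 packets
$tcp set packetSize_ 280     ;# packet size: 280 bytes
$tcp set minrto_     0.2     ;# minimum retransmission timeout: 0.2 s
$ns attach-agent $n(1) $tcp

set sink [new Agent/TCPSink]
$ns attach-agent $n(6) $sink
$ns connect $tcp $sink

set ftp [new Application/FTP]
$ftp attach-agent $tcp
$ns at 0.0  "$ftp start"
$ns at 15.0 "$ftp stop"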

Add CBR traffic on top of a UDP connection between node 0 and node 6. The CBR source generates and sends packets at a rate of 100 packets per second. It starts at 2.0 seconds and lasts for 7 seconds (i.e., it stops at 9.0 seconds).
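A corresponding sketch for the CBR source, again assuming the n() node array. An Agent/LossMonitor sink is used here so that the received bytes can be sampled later for the bandwidth plot.

# CBR traffic over UDP between node 0 and node 6.
set udp0 [new Agent/UDP]
$ns attach-agent $n(0) $udp0
set null0 [new Agent/LossMonitor]   ;# sink that counts received bytes
$ns attach-agent $n(6) $null0
$ns connect $udp0 $null0

set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp0
$cbr set packetSize_ 280
$cbr set interval_   0.01           ;# 100 packets per second
$ns at 2.0 "$cbr start"
$ns at 9.0 "$cbr stop"              ;# starts at 2.0 s and lasts 7 s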

Add VBR traffic at a rate of 600 kbps between node 3 and node 6, using exponential On/Off traffic with an On period of 150 ms and an Off period of 100 ms. The size of each CBR and VBR packet is 280 bytes. The VBR starts at 7.0 seconds and ends at 11.0 seconds.
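The VBR source can be modelled with ns-2's exponential On/Off traffic generator on top of UDP, sketched below under the same assumptions:

# VBR traffic between node 3 and node 6, modelled as exponential On/Off over UDP.
set udp1 [new Agent/UDP]
$ns attach-agent $n(3) $udp1
set null1 [new Agent/LossMonitor]
$ns attach-agent $n(6) $null1
$ns connect $udp1 $null1

set vbr [new Application/Traffic/Exponential]
$vbr attach-agent $udp1
$vbr set packetSize_ 280
$vbr set rate_       600Kb          ;# sending rate during On periods
$vbr set burst_time_ 150ms          ;# On period
$vbr set idle_time_  100ms          ;# Off period
$ns at 7.0  "$vbr start"
$ns at 11.0 "$vbr stop"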

The congestion window size should be monitored and plotted over time. The bandwidth of the CBR and VBR traffic (here the bandwidth is defined as the number of bytes received over a given time interval, say 0.5 seconds) should also be monitored and plotted over the entire simulation time. The average throughput of the TCP, CBR, and VBR flows should also be obtained. For the TCP throughput calculation, the ns-2 trace file needs to be examined with a script written in a language such as awk or perl.
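One common way to record the congestion window and the per-interval received bytes is to schedule Tcl procedures that sample the agents periodically. The sketch below assumes the tcp, null0 (CBR sink), and null1 (VBR sink) objects from the snippets above, and writes plain two-column files that can be plotted with xgraph or gnuplot.

# Sample the TCP congestion window every 0.1 s.
set cwndFile [open cwnd.tr w]
proc plotWindow {tcpSource file} {
    global ns
    set now  [$ns now]
    set cwnd [$tcpSource set cwnd_]
    puts $file "$now $cwnd"
    $ns at [expr $now + 0.1] "plotWindow $tcpSource $file"
}
$ns at 0.0 "plotWindow $tcp $cwndFile"

# Sample the bytes received by the CBR and VBR sinks every 0.5 s.
set cbrBwFile [open cbr_bw.tr w]
set vbrBwFile [open vbr_bw.tr w]
proc record {} {
    global ns null0 null1 cbrBwFile vbrBwFile
    set interval 0.5
    set now [$ns now]
    puts $cbrBwFile "$now [$null0 set bytes_]"
    puts $vbrBwFile "$now [$null1 set bytes_]"
    # Reset the counters so each sample covers only the last interval.
    $null0 set bytes_ 0
    $null1 set bytes_ 0
    $ns at [expr $now + $interval] "record"
}
$ns at 0.0 "record"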

Figure 1: NS2 Simulation Topology

MODELLING
A. Simulation design considerations
The following basic network, comprising client nodes, routers, and a bottleneck link with drop-tail queue management, is used to obtain data on throughput and dropped packets versus simulation time when two TCP/FTP senders share the bottleneck link. During the simulations, different queue sizes were used and the resulting throughput and packet drops were recorded.
B. Software tools and environment
Network simulator V2.35 in Linux environment
Binary NAM V1.15
Xgraph
Gnuplot
First phase:

Assume that NS2 is already installed. Go to the ns-2.35 directory with "cd ns-2.35".

Now create a main file that runs the simulation and writes the results to the "out.nam" file.
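A minimal skeleton of such a main file might look as follows; the node/link setup, traffic sources, and monitoring procedures sketched earlier would be placed where indicated.

# fri.tcl -- minimal skeleton of the main simulation file.
set ns [new Simulator]

# Open the nam and packet trace files.
set nf [open out.nam w]
$ns namtrace-all $nf
set tf [open out.tr w]
$ns trace-all $tf

# ... node/link setup, traffic sources, and monitoring procedures go here ...

proc finish {} {
    global ns nf tf
    $ns flush-trace
    close $nf
    close $tf
    exec nam out.nam &     ;# launch the nam animation
    exit 0
}

$ns at 16.0 "finish"       ;# one second after the FTP stops
$ns run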

Second phase:

Name this file fri.tcl. In the ns-2.35 directory, run the simulation with the command "ns fri.tcl".
