
ECE 6110 CAD for Communication Networks

Project 2 Report
COMPARISON OF DROPTAIL AND RED QUEUING METHODS
The focus of this project is on comparing two different queuing
methods: Drop Tail queuing and Random Early Detection (RED) queuing. A
non-trivial network is set up in the ns-3 simulation tool, and the variation
in throughput of the different flows is studied as the parameters of both
the Drop Tail queue and the RED queue are varied. The network chosen for
simulation is given in Figure 1.

Figure 1. Network chosen for simulation


As Figure 1 shows, nodes 5, 6 and 7 transmit to nodes 8, 9 and 10
respectively, and node 11 transmits to node 12. The link connecting nodes 1
and 2 has a bandwidth of 1 Mbps and a propagation delay of 5 ms. The link
connecting nodes 0 and 1 has a bandwidth of 5 Mbps and a delay of 5 ms, the
link connecting nodes 2 and 3 has a bandwidth of 5 Mbps and a delay of
10 ms, and the link connecting nodes 3 and 4 has a bandwidth of 10 Mbps and
a delay of 5 ms. From this information we can see that the bottleneck is
the link between nodes 1 and 2.
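For reference, a minimal sketch of how the backbone of this topology could
be built in ns-3 (C++) is given below. The helper calls are standard ns-3
API and the attribute values simply mirror the link description above; the
edge links to the sources and sinks are elided, and the node numbering is
an assumption following Figure 1.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/point-to-point-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Nodes 0..12, numbered as in Figure 1.
  NodeContainer nodes;
  nodes.Create (13);

  PointToPointHelper p2p;

  // Link 0-1: 5 Mbps, 5 ms.
  p2p.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
  p2p.SetChannelAttribute ("Delay", StringValue ("5ms"));
  NetDeviceContainer d01 = p2p.Install (nodes.Get (0), nodes.Get (1));

  // Link 1-2: 1 Mbps, 5 ms -- the bottleneck.
  p2p.SetDeviceAttribute ("DataRate", StringValue ("1Mbps"));
  NetDeviceContainer d12 = p2p.Install (nodes.Get (1), nodes.Get (2));

  // Link 2-3: 5 Mbps, 10 ms.
  p2p.SetDeviceAttribute ("DataRate", StringValue ("5Mbps"));
  p2p.SetChannelAttribute ("Delay", StringValue ("10ms"));
  NetDeviceContainer d23 = p2p.Install (nodes.Get (2), nodes.Get (3));

  // Link 3-4: 10 Mbps, 5 ms.
  p2p.SetDeviceAttribute ("DataRate", StringValue ("10Mbps"));
  p2p.SetChannelAttribute ("Delay", StringValue ("5ms"));
  NetDeviceContainer d34 = p2p.Install (nodes.Get (3), nodes.Get (4));

  // Edge links to sources (5-7, 11) and sinks (8-10, 12) would follow here.
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}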
For Drop Tail, the parameters varied are the window size, the queue size
(in bytes) and the load on the network (relative to the bottleneck link).
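As a hedged illustration, the Drop Tail queue size in bytes can be set
through ns-3 attribute defaults. The attribute names below (Mode and
MaxBytes on ns3::DropTailQueue) are from the ns-3 releases of this era and
differ in newer releases, where queue discs are configured instead; the
2000-byte cap matches the queue size used in Table 1.

// Run the Drop Tail queue in byte mode and cap it at 2000 bytes.
Config::SetDefault ("ns3::DropTailQueue::Mode",
                    StringValue ("QUEUE_MODE_BYTES"));
Config::SetDefault ("ns3::DropTailQueue::MaxBytes",
                    UintegerValue (2000));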

For RED, the parameters varied are the queue length (in bytes), the maximum
drop probability, the minimum threshold for random drops, the maximum
threshold for forced packet drops, and the load on the network (relative to
the bottleneck link).
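These parameters map onto attributes of the ns3::RedQueue model in a
similar way (again, attribute names from older ns-3 releases; newer ones
use ns3::RedQueueDisc). One wrinkle worth noting: ns-3 expresses the
maximum drop probability through LInterm, where maxP = 1/LInterm. A sketch
with the baseline values used in Table 3:

// RED in byte mode: qLen 1000, minTh 5, maxTh 15, maxP 0.05.
Config::SetDefault ("ns3::RedQueue::Mode", StringValue ("QUEUE_MODE_BYTES"));
Config::SetDefault ("ns3::RedQueue::QueueLimit", UintegerValue (1000));
Config::SetDefault ("ns3::RedQueue::MinTh", DoubleValue (5));
Config::SetDefault ("ns3::RedQueue::MaxTh", DoubleValue (15));
// maxP = 0.05  ->  LInterm = 1 / 0.05 = 20.
Config::SetDefault ("ns3::RedQueue::LInterm", DoubleValue (20));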
The transmitting sources are modelled with the OnOff helper from ns-3. The
load is defined relative to the bottleneck link bandwidth, i.e. a load
value of 0.5 (50 percent) means the applications transmit at 5e+5 bits per
second. Two of the flows are UDP and two are TCP: the flows from node 5 to
node 9 and from node 7 to node 10 are UDP, whereas the flows from node 6 to
node 8 and from node 11 to node 12 are TCP. This sums up the experiment
parameters.
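A sketch of how one such source might be created follows. The helper and
attribute names are standard ns-3; the always-on On/Off pattern, the
MakeSource function itself, and the passed-in sink address are assumptions
consistent with streaming at a fixed offered load.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/applications-module.h"

using namespace ns3;

// Create one OnOff source on 'src' towards 'sinkAddr', TCP or UDP,
// sending at load * bottleneck bandwidth (1 Mbps).
ApplicationContainer
MakeSource (Ptr<Node> src, Address sinkAddr, bool useTcp, double load)
{
  OnOffHelper onoff (useTcp ? "ns3::TcpSocketFactory"
                            : "ns3::UdpSocketFactory", sinkAddr);

  // Always on, never off, so the configured rate is the offered load.
  onoff.SetAttribute ("OnTime",
      StringValue ("ns3::ConstantRandomVariable[Constant=1]"));
  onoff.SetAttribute ("OffTime",
      StringValue ("ns3::ConstantRandomVariable[Constant=0]"));

  // load = 0.5 -> 5e+5 bits per second against the 1 Mbps bottleneck.
  onoff.SetAttribute ("DataRate",
      DataRateValue (DataRate (static_cast<uint64_t> (load * 1e6))));

  return onoff.Install (src);
}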
OBSERVATIONS
Drop Tail Queue
Variation in Goodput with Window Size
Queue 2000, Load 0.5, Rate 5e+5, MaxBytes 3000

Window Size   Flow 1   Flow 2   Flow 3    Flow 4
1000          30874    51.2     29337.6   51.2
2000          30566    1552     28313.6   2624
4000          30566    962.4    28364.8   2141.6
8000          30566    962.4    28364.8   2141.6
16000         30566    962.4    28364.8   2141.6

Queue 2000, Load 0.5, Rate 1e+6, MaxBytes 3000

Window Size   Flow 1   Flow 2   Flow 3    Flow 4
1000          49050    51.2     11673.6   51.2
2000          48640    51.2     11878.4   1659.2
4000          48230    51.2     12185.6   2838.4
8000          48282    51.2     11980.8   2892
16000         48282    51.2     11980.8   2892

Table 1.
Table 1 shows the variation in goodput of the four flows with the receiver
window size. The values are as expected, with UDP dominating TCP. UDP has
no congestion control, whereas TCP backs off through mechanisms such as
slow start and fast retransmit. The UDP flows therefore keep streaming
packets to the receiver, while the TCP flows, facing congestion and packets
dropped from the full queue, are eventually starved.
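For context, goodput can be read straight off the receiving application. A
minimal sketch, assuming a PacketSink at each receiver and a 10-second
measurement window (both illustrative choices, not necessarily the exact
setup used here; receiverNode is a placeholder):

// Install a sink on the receiver and read its byte counter afterwards.
uint16_t port = 9;  // assumed port
PacketSinkHelper sinkHelper ("ns3::TcpSocketFactory",
    InetSocketAddress (Ipv4Address::GetAny (), port));
ApplicationContainer sinkApp = sinkHelper.Install (receiverNode);

Simulator::Stop (Seconds (10.0));
Simulator::Run ();

Ptr<PacketSink> sink = DynamicCast<PacketSink> (sinkApp.Get (0));
double goodput = sink->GetTotalRx () / 10.0;  // bytes per second
Simulator::Destroy ();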

Variation in Goodput with Queue Size


WinSize 4000, Load 0.5, Rate 5e+5, MaxBytes 3000

Queue Size (maxBytes)   Flow 1    Flow 2    Flow 3    Flow 4
1000                    30771.2   0         20736     1659.2
2000                    30873.6   1605.6    27750.4   212
4000                    30668.8   1444.8    29286.6   426.4
8000                    30259.2   480       30720     1873.6
16000                   30361.6   10235.2   30003.2   13826.4
32000                   30412.8   9699.2    30361.6   16024

Table 2.

Table 2 shows the variation in goodput with queue size (controlled by the
maxBytes parameter). As expected, goodput increases with queue size: a
larger queue absorbs more of the congestion in the network, so we see the
goodput of the flows rise. We also still observe the UDP flows dominating
the TCP flows, a trend that continues throughout the experiment.

RED Queueing
Variation in Goodput with Load

MinTh 5, MaxTh 15, MaxP 0.05, qLen 1000

Load   Flow 1    Flow 2   Flow 3    Flow 4
0.1    6195.2    6144     6195.2    6195.2
0.2    12390     265.6    12339.2   2448.8
0.4    24576     855.2    24371.2   265.6
0.8    47974     640.8    10240     1230.4
1.0    46899     51.2     11878.4   158.4

Table 3.
In RED queuing we have several parameters to vary. Table 3 shows the change
in the goodput of each flow as the load varies. As the load increases we
expect a drop in goodput, especially for the TCP flows, and indeed the TCP
goodputs slowly fall with increasing load due to congestion at the
bottleneck.

Variation in Goodput with MinTh and MaxTh


load 0.5, qLen 1000, MaxP 0.05

MinTh   MaxTh   Flow 1    Flow 2   Flow 3   Flow 4
5       15      30771.2   801.6    21094    104.8
5       30      30412.8   212      22579    104.8
5       60      29747.2   855.2    25498    104.8
5       90      29286.4   212      28570    212
10      30      30412.8   212      22579    104.8
10      60      29747.2   855.2    25498    104.8
10      90      29491.2   265.5    28416    212
15      30      30412.8   212      22579    104.8
15      60      29747.2   855.2    25498    104.8
15      90      29081.6   265.5    28723    212
15      120     29235.2   158.4    28826    1980.8
30      60      29696.0   855.2    25498    104.8
30      90      29235.2   212      28621    212
30      120     29286.4   158.4    28774    1980.8
90      120     29286.4   158.4    28774    1980.8

Table 4.
Table 4 shows the variation in goodput with the minimum and maximum
threshold parameters of the RED queue. The minimum threshold is the
(average) queue size at which RED starts dropping packets randomly, with a
probability dictated by maxP; the maximum threshold is the queue size at
which RED starts forcibly dropping packets.
The trend that can be deduced from the table is that, in general, goodput
increases as we increase the minTh and maxTh values, because the thresholds
at which RED starts dropping packets are pushed higher. But if there is
little or no congestion in the network, RED will prove less efficient, as
there is no need to drop packets when the transmission rates are well
within the network capacity.
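For reference, in the textbook RED algorithm the early-drop probability
grows linearly between the two thresholds: for an average queue size avgQ
with minTh <= avgQ < maxTh, the drop probability is
p = maxP * (avgQ - minTh) / (maxTh - minTh); below minTh nothing is
dropped, and above maxTh every arriving packet is dropped. (This is the
standard Floyd-Jacobson formulation; ns-3's RED model adds refinements such
as gentle mode, so the simulated behaviour will not match this expression
exactly.)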

Variation in Goodput with Drop Probability


load 0.5, qLen 1000, MinTh 15, MaxTh 60

MaxP   Flow 1   Flow 2   Flow 3    Flow 4
0.01   29747    855.2    25497.6   104.8
0.05   29747    855.2    25497.6   104.8
0.1    29747    855.2    25497.6   104.8
0.2    29747    158.4    25497.6   265.6
0.4    29082    855.2    25241.6   3428
0.6    29184    908.8    25446.4   3374.4
0.7    29133    1766.4   25702.4   1980.8
0.8    29184    1123.2   25702.4   1766.4
1      29235    1659.2   25958.4   1766.4

Table 5.

Table 5 shows the variation in goodput of the four flows with the maximum
probability (maxP) with which RED drops packets. It is observed that the
goodput tends to increase as the probability increases. This differs from
what was expected: in theory, a higher drop probability should decrease
goodput, since more packets are dropped. This was one observation that
could not be explained.

Variation in Goodput with Queue Length


load 0.5, MaxP 0.6, MinTh 15, MaxTh 60

qLen   Flow 1   Flow 2   Flow 3    Flow 4
500    30771    0        20736     1659.2
1000   29184    908.8    25446.4   3374.4
2000   29542    587.2    25702.4   265.6
4000   29542    587.2    25702.4   265.6
8000   29542    587.2    25702.4   265.6

load 0.8, MaxP 0.6, MinTh 15, MaxTh 60

qLen    Flow 1   Flow 2   Flow 3    Flow 4
500     48384    160.8    10137.6   158.4
1000    45619    908.8    12748.8   104.8
2000    45824    855.2    12390.4   533.6
4000    45978    855.2    12390.4   265.6
8000    45978    855.2    12390.4   265.6
16000   45978    855.2    12390.4   265.6

Table 6.
Table 6 shows the variation of goodput with queue length. The trends are
similar to those of Drop Tail queuing: as the queue size increases, goodput
increases but then saturates at the point where the queue length absorbs
the maximum congestion in the network.
We also see that goodput saturates at a smaller queue length under the
lower load than under the higher load. This is also expected, since at a
lower load a given queue absorbs the congestion sooner.

COMPARISON OF THE TWO QUEUING METHODS

Comparison of Droptail vs RED

Common parameters: Rate 1e+6, MaxBytes 3000, Window Size 4000.
RED parameters: MinTh 15, MaxTh 60, QLen 3000, MaxP 0.5.

Queue       Load   Flow 1    Flow 2   Flow 3    Flow 4   Total Goodput
Drop Tail   0.5    48230.4   51.2     12185.6   2838.4   63305.6
RED         0.5    29849.6   212      25395.2   265.2    55722
Drop Tail   0.9    47104     908.8    13056     104.8    61173.6
RED         0.9    46284.8   212      12236.8   51.2     58784.8

Table 7.
At the end of the experiments it was observed that the Drop Tail queue
performed better than RED in all scenarios, giving a higher total goodput
in comparable cases. Table 7 shows the performance of both queuing methods
at a moderate and at a heavy load; at both loads, Drop Tail performed
better. It should be kept in mind that these results are based on the
particular network chosen for this simulation study. RED might give better
performance in specific cases when tuned, but in general, for a bottleneck
link in a network where nodes try to stream as much data as possible, the
Drop Tail queue appears to perform better.

Submitted by: Keerthi Vignesh Kumar Jaya Sundara Ganesh


GTID: 903068747
