Stephen Patek
Erhan Yilmaz
Abstract
Recent research on statistical multiplexing has provided many new insights into the achievable multiplexing gain in QoS networks; however, these insights generally apply only to the gain experienced at a single switch. Evaluating the statistical multiplexing gain in a general network remains a difficult challenge.
In this paper we describe two distinct network designs for statistical end-to-end delay guarantees, referred to as class-level aggregation and path-level aggregation, and compare the achievable statistical
multiplexing gain. Each of the designs presents a particular trade-off between the attainable statistical
multiplexing gain and the ability to support delay guarantees. The key characteristic of both designs is
that they do not require, and instead, intentionally avoid, consideration of the correlation between flows
at multiplexing points inside the network. Numerical examples are presented for a comparison of the
two designs. The presented class-level aggregation design is shown to yield very high achievable link
utilizations while simultaneously achieving desired statistical guarantees on delay.
Key Words: Statistical Multiplexing, Statistical Service, Quality-of-Service.
This work is supported in part by the National Science Foundation through grants NCR-9624106 (CAREER) and ECS-9875688.
1 Introduction
In recent years, much effort has gone into devising algorithms to support deterministic or statistical QoS guarantees in packet networks. A deterministic service [18], which guarantees worst-case end-to-end delay bounds for traffic [9, 11, 31, 32], is known to lead to an inefficient use of network resources [42].
A statistical service [18] that makes guarantees of the form

  Pr[ Delay > d ] ≤ ε,   (1)
that is, a service which allows a small fraction of traffic to violate its QoS specifications, can significantly
increase the achievable utilization of network resources. Taking advantage of the statistical properties of
traffic, a statistical service can exploit statistical multiplexing gain, expressed as
  (Resources needed to support statistical QoS of N flows) ≪ N × (Resources needed to support statistical QoS of 1 flow).
Ideally, the statistical multiplexing gain of a statistical service increases with the volume of traffic so that
with a high enough level of aggregation the amount of resource allocated per flow is nearly equal to the
average resource requirements for a single flow.
Recent research on statistical QoS has attempted to exploit statistical multiplexing gain without assuming detailed knowledge of the statistical characteristics of traffic sources [5, 12, 14, 15, 20, 21, 23, 27, 29, 34, 35, 36]. Under a very general set of traffic
assumptions, which are sometimes referred to as regulated adversarial traffic, one merely assumes that (1)
traffic arrivals from a flow are constrained by a deterministic regulator, e.g., a leaky bucket, and (2) traffic
arrivals from different flows are statistically independent. With these general assumptions it was shown
that even if the probability of QoS violations is small, e.g., ε = 10^{−9}, the statistical multiplexing gain at a
network node can be substantial.
In this paper we are concerned with end-to-end statistical QoS guarantees in a multi-node network under adversarial regulated traffic assumptions. The difficulty of assessing the multiplexing gain in a network
environment is that traffic inside the network becomes correlated, and, therefore, the assumption of independence, as made by the regulated adversarial traffic model, no longer holds.
As in [17], one may mark out-of-profile traffic with a lower priority, rather than discarding it. However, for the purposes of this
study, we do not concern ourselves with out-of-profile traffic.
Figure 1: Network with edge nodes, core nodes, and traffic conditioners; flows enter and leave the network at edge nodes.
Virtual Private Networks (VPN) [13] and ATM Virtual Paths [37]. We take such an approach in Section 4.
To our knowledge, there is only one previous study which applies the traffic model of adversarial regulated arrivals to multiple node networks [35]. The lossless multiplexer presented in [35] bears similarity to
our design for class-level aggregation in Section 3, but assumes that routes are such that traffic arrivals at
core nodes from different flows are always independent. We relax this assumption in our work, and instead,
enforce independence by adding appropriate mechanisms.
This paper makes extensive use of results from a recent study [5] which presented a general method to
calculate the statistical multiplexing gain at a single node. In particular, we exploit the notion of effective
envelopes [5], which are functions that provide, with high certainty, bounds on traffic arrivals. In addition, previous work on rate-based scheduling algorithms with statistical service guarantees in single-node networks is very relevant to our work [19, 33, 46, 47]. Finally, we want to reference the recently developed statistical end-to-end bounds for the so-called coordinated-EDF scheduling algorithm [2].
The remainder of this paper is structured as follows. In Section 2 we state our assumptions on traffic
arrivals and we introduce the notion of effective envelopes. In Sections 3 and 4, respectively, we present
our two designs for end-to-end statistical QoS and analyze their ability to exploit statistical multiplexing
gain. In Section 5 we evaluate and compare the two designs through a computational study. In Section 6 we
present conclusions of our work and discuss future directions.
(A3) Stationarity. The arrivals from each flow are stationary random processes.
(A4) Independence. Traffic arrivals from different flows are statistically independent.
(A5) Homogeneity within a Class. Flows in the same class have identical deterministic envelopes. At
each node, flows from the same class have identical delay bounds.
2 A function f is called subadditive if f(t1 + t2) ≤ f(t1) + f(t2), for all t1, t2 ≥ 0.
These or similar assumptions are used in much recent work on statistical QoS [5, 12, 14, 15, 20, 21, 23, 27,
29, 34, 35, 36]. The assumptions are very general. Specifically, no assumptions are made on the distribution
of flow arrivals, other than that each flow satisfies a worst-case constraint.
Within the constraints of assumptions (A1)–(A5), we consider arrival scenarios where each flow exhibits its worst-possible (adversarial) behavior. Traffic which obeys the above assumptions is referred to as regulated adversarial traffic. However, even if flows individually behave in a worst-case fashion, as allowed by assumption (A2), the independence assumption (A4) prevents several flows from coordinating (or conspiring) to generate a joint worst-possible behavior.
As an example, consider a set of leaky-bucket constrained flows with A_j(τ) = σ_j + ρ_j τ for all j. Here, a feasible, possibly adversarial, arrival scenario for an individual flow is that each source is periodic, and transmits a burst of size σ_j, followed by a quiescent period of length σ_j/ρ_j. A (possibly) joint worst-possible behavior occurs if multiple sources coordinate and transmit their bursts at the same instant of time. However, due to the independence assumption on flows, such coordination is not allowed.
Remark: In this paper, we will not derive specific adversarial arrival scenarios. In fact, one can argue that this may not be feasible in all cases [4]. Instead, our approach is to present an analysis which holds for all feasible arrival scenarios that comply with assumptions (A1)–(A5), including the most adversarial scenarios.
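The periodic on-off pattern described above can be sketched in a few lines. The following helper is our own illustration (not code from the paper); `phase` models the independent random offset of each source required by the independence assumption:

```python
def adversarial_arrivals(sigma, rho, t, phase=0.0):
    """Cumulative arrivals by time t of a periodic worst-case source that
    emits an instantaneous burst of sigma bits every sigma/rho seconds,
    starting at `phase` (an independent random offset per source).
    The pattern complies with the regulator A_j(tau) = sigma + rho*tau."""
    if t < phase:
        return 0.0
    period = sigma / rho                 # quiescent time between bursts
    return (int((t - phase) // period) + 1) * sigma
```

Over any window of length τ the pattern emits at most 1 + ⌊τρ/σ⌋ bursts, so the arrivals never exceed σ + ρτ, as required.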
2.2 Effective Envelopes

Let A^{C_q}(t, t+τ) = Σ_{j∈C_q} A_j(t, t+τ) denote the aggregate arrivals from the set C_q of class-q flows in the time interval [t, t+τ). A function G^{C_q} is an effective envelope for A^{C_q} with violation probability ε if

  Pr{ A^{C_q}(t, t+τ) ≤ G^{C_q}(τ; ε) } ≥ 1 − ε,  for all t ≥ 0 and all τ ≥ 0.   (2)
Due to assumption (A3), an effective envelope provides a bound for the aggregate arrivals A^{C_q} over all time intervals of length τ, which is violated with probability at most ε.
Explicit expressions for effective envelopes can be obtained with large deviation results. In this paper,
we will use a bound from [5] which is established via the Chernoff bound. The Chernoff bound for the arrivals A^{C_q} from C_q is given by (see [30])

  Pr{ A^{C_q}(t, t+τ) ≥ x } ≤ e^{−sx} M^{C_q}(s, τ),  for all s ≥ 0,   (3)

where M^{C_q}(s, τ) = E[ e^{s A^{C_q}(t, t+τ)} ] denotes the moment generating function of A^{C_q}.
In [5] two notions of effective envelopes are introduced, called local effective envelope and global effective envelope. In this
paper, we only use local effective envelopes and refer to them as effective envelopes.
In [5], the following bound on the moment generating function was proven.

Theorem 1 (Boorstyn, Burchard, Liebeherr, Oottamakorn [5]) Given a set of flows C_q from a single class that satisfy assumptions (A1)–(A5). Then,

  M^{C_q}(s, τ) ≤ ( 1 + ρ̄_q (e^{s A_q(τ)} − 1) )^{N_q},   (4)

where ρ̄_q = ρ_q τ / A_q(τ). Moreover,

  G^{C_q}(τ; ε) := N_q min(x, A_q(τ))   (5)

is an effective envelope for A^{C_q}, where x is the smallest number satisfying the inequality

  ( ρ̄_q A_q(τ) / x )^{x/A_q(τ)} · ( (1 − ρ̄_q) A_q(τ) / (A_q(τ) − x) )^{(A_q(τ)−x)/A_q(τ)} ≤ ε^{1/N_q}.   (6)
We will use the effective envelope given by Eqs. (5) and (6) in all our numerical examples in Section 5.
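The construction of Eqs. (5) and (6) is straightforward to evaluate numerically, since the left-hand side of the inequality decreases in x above the mean. The sketch below is our own illustration (function names, the bisection scheme, and the guard for the no-gain case are our choices, not the paper's code), for peak-rate constrained leaky-bucket flows and a fixed τ > 0:

```python
def det_envelope(tau, sigma, rho, P):
    # Peak-rate constrained leaky bucket: A(tau) = min(P*tau, sigma + rho*tau).
    return min(P * tau, sigma + rho * tau)

def effective_envelope(tau, N, eps, sigma, rho, P):
    """G(tau; eps) = N * min(x, A(tau)) per Eqn. (5), with x the smallest
    value satisfying the Chernoff-type inequality (6); requires tau > 0."""
    A = det_envelope(tau, sigma, rho, P)
    rbar = rho * tau / A                 # average-to-envelope ratio
    target = eps ** (1.0 / N)
    def lhs(x):
        a = x / A
        # left-hand side of inequality (6), written in terms of a = x/A(tau)
        return (rbar / a) ** a * ((1.0 - rbar) / (1.0 - a)) ** (1.0 - a)
    if lhs(0.999999 * A) > target:
        return N * A                     # no statistical gain at this epsilon
    lo, hi = rho * tau, A                # lhs(lo) = 1 > target >= lhs(hi)
    for _ in range(100):                 # bisect for the smallest feasible x
        mid = 0.5 * (lo + hi)
        if lhs(mid) <= target:
            hi = mid
        else:
            lo = mid
    return N * min(hi, A)
```

Per flow, the resulting allocation G(τ; ε)/N always lies between the mean traffic ρτ and the deterministic envelope A(τ), and it shrinks toward the mean as N grows.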
2.3 Example
We now present an example which illustrates the statistical multiplexing gain achievable with an effective
envelope for traces of MPEG2-compressed video. Because MPEG2 traffic is correlated over multiple time scales, extracting its statistical multiplexing gain is considered difficult.4
We consider a trace of compressed MPEG2 video from [28], taken during the 1996 Olympic Games.
From the set of traces in [28], we selected the sequence Athletics 1 (Athletics). The encoding uses the
Group of Picture sequence (GOP) IBBPBBPBBPBBPBBI. The sequence consists of a total of about 72,000
video frames, corresponding to approximately 48 minutes of video. The data of these traces is given in
terms of frame sizes. We assume that the arrival of a frame is spread evenly over an interframe interval (of length 1/25 sec); hence, each frame arrives at a constant rate over its interval rather than instantaneously. The key statistics for the chosen traffic trace are:
  Average frame size:  195,513 bits/frame
  Mean rate:           4.887 Mbps
  Peak rate:           25.190 Mbps
For the characterization of the MPEG2 trace, we assume that a deterministic bound A is obtained using
the following method [42]: (1) obtain the tightest deterministic envelope, the so-called empirical envelope [42], for the MPEG2 trace5; (2) determine the concave hull of the empirical envelope to obtain a piecewise linear function; and (3) use the piecewise linear function as the deterministic bound for the MPEG2 trace.
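Step (1) amounts to a sliding-window maximum over the trace. A minimal sketch of our own (not the paper's code), working at frame granularity so that τ is measured in interframe intervals:

```python
import numpy as np

def empirical_envelope(frame_bits, k):
    """Empirical envelope E(tau) at tau = k interframe intervals:
    the maximum traffic in any window of k consecutive frames."""
    c = np.concatenate(([0.0], np.cumsum(np.asarray(frame_bits, float))))
    # c[i+k] - c[i] is the traffic in the window of frames i..i+k-1
    return float((c[k:] - c[:-k]).max())
```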
4 Fitting MPEG2 video traffic into the framework of adversarial regulated traffic assumes that MPEG2 traffic is random. Strictly speaking, this is not the case.
5 If A(t1, t2) denotes the traffic generated by the MPEG2 trace in the time interval [t1, t2), the empirical envelope is given by E(τ) = sup_{t>0} A(t, t + τ).
Figure 2: Comparison of envelopes for the MPEG2 trace Athletics. (a) shows G^C(τ; ε)/N as a function of τ (for τ ≤ 150 msec and ε = 10^{−6}), for N = 100, 1000, and 10000 flows, together with the peak rate, the average rate, and the deterministic envelope. (b) shows G^C(τ; ε)/(Nτ) as a function of N at τ = 50 msec, for ε = 10^{−6} and ε = 10^{−9}.
In Figure 2(a) we show the results for N multiplexed Athletics traces, where N is set to 100, 1000, and 10000 flows. For the effective envelope we show the results of G^C(τ; ε)/N. The figure shows that the
effective envelope is much smaller than the deterministic envelope. The effective envelope gets smaller as
the number of flows N is increased, and quickly approaches a rate close to the average traffic rate of the
flow.
In Figure 2(b), we show, for a fixed value of τ, how fast the traffic rate allocated per flow, given by G^C(τ; ε)/(Nτ), decreases when the number of flows N is increased. We show results at τ = 50 msec for ε = 10^{−6} and ε = 10^{−9}. The figure illustrates that the effective envelope approaches an average rate allocation for large values of N.
Figure 3: Network with class-level aggregation. At each edge node and each core node, there is one finite-length
buffer for each class. Each buffer is served at a fixed rate, and arrivals to a full buffer are dropped. Core nodes have a
delay jitter controller, labeled JC in the graph, which buffers traffic until it satisfies the maximum allocated per-node
delay at the previous node. The figure illustrates the buffers and jitter controllers of nodes 4–7. B_mq indicates the buffer size and c_mq the rate at which a buffer is served.
satisfy assumption (A2) in Section 2. Also, since traffic which arrives at a full buffer is dropped, traffic
conditioning is performed implicitly at the buffers of the edge nodes.
We will be able to show that networks with class-level aggregation can give performance guarantees
of the form that (1) traffic which is not dropped in the finite-sized buffers meets a given end-to-end delay
guarantee, and (2) the drop rate of traffic at each node is bounded.
The rate c_mq allocated to class q at node m is set to the smallest value such that

  sup_{τ>0} { G^{C_mq}(τ; ε) − c_mq τ } ≤ c_mq d_mq,   (7)

and we set B_mq to

  B_mq = c_mq d_mq.   (8)

Theorem 2 With the allocations of Eqs. (7) and (8), (1) traffic from class q which is not dropped at node m experiences a delay of at most d_mq, and (2) the probability that an arrival from class q is dropped at node m is bounded by ε,   (9)

provided that the following approximation is valid:
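Eqs. (7) and (8) can be evaluated directly: the condition sup_τ{G(τ; ε) − cτ} ≤ c·d is equivalent to c ≥ G(τ; ε)/(τ + d) for every τ, so the rate is the largest such ratio over the time scales of interest. A small sketch of our own (the helper name and the grid of time scales are our choices; the grid must cover the time scales where the envelope is binding):

```python
def class_rate_and_buffer(G, d, taus):
    """Smallest service rate c with sup_tau { G(tau) - c*tau } <= c*d
    (Eqn. (7)), obtained from c >= G(tau)/(tau + d) for all tau in the
    grid `taus`, together with the matching buffer B = c*d (Eqn. (8))."""
    c = max(G(tau) / (tau + d) for tau in taus)
    return c, c * d
```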
  Pr{ sup_{τ>0} [ A^{C_mq}(t − τ, t) − G^{C_mq}(τ; ε) ] > 0 } ≈ sup_{τ>0} Pr{ A^{C_mq}(t − τ, t) − G^{C_mq}(τ; ε) > 0 }.   (10)
The assumption in Eqn. (10) is similar to an assumption made in [5], as well as in related work [8, 22,
23, 24, 25]. A theoretical justification for this assumption is made in [24], and the assumption has been
supported by numerical examples [5, 8, 24].
Proof: The first part of the theorem follows from the construction of the queue size Bmq . Since the queue is
serviced at a rate of at least cmq , and since we set Bmq = cmq dmq , traffic in the buffer cannot have a delay
larger than dmq . The second part of the theorem is a result of Eqn. (7) and the definition of the effective
envelope, as we now argue. Consider an arbitrary time t with a traffic arrival to the buffer. By Eqs. (7)
and (8), an arrival will find a full buffer only if, for some τ > 0,

  A^{C_mq}(t − τ, t) > c_mq τ + B_mq ≥ G^{C_mq}(τ; ε).   (11)
Note that this expression is pessimistic, since the arrivals to node m may be less than A^{C_mq} due to losses at previous nodes. Thus, the probability that an arrival from class q at node m is dropped is bounded by

  Pr{ ∃ τ > 0 : A^{C_mq}(t − τ, t) > G^{C_mq}(τ; ε) }
    = Pr{ sup_{τ>0} [ A^{C_mq}(t − τ, t) − G^{C_mq}(τ; ε) ] > 0 }   (12), (13)
    ≈ sup_{τ>0} Pr{ A^{C_mq}(t − τ, t) − G^{C_mq}(τ; ε) > 0 }   (14)
    ≤ ε.   (15)
Eqn. (13) is merely a different formulation of the right hand side of Eqn. (12). Eqn. (14) uses the assumption
in Eqn. (10). Eqn. (15) follows from the definition of the effective envelope.
Next, we turn to the question of how much traffic is lost when an arrival finds a full queue. By construction of the buffer size, when traffic which complies with C_mq arrives to the buffer, the most traffic that may be dropped is bounded by

  sup_{τ>0} { A^{C_mq}(t − τ, t) − G^{C_mq}(τ; ε) }   (16)
    ≤ sup_{τ>0} { A^{C_mq}(τ) − G^{C_mq}(τ; ε) },   (17)

where A^{C_mq}(τ) = Σ_{j∈C_mq} A_j(τ) denotes the deterministic envelope of the aggregate arrivals.
The second part of the theorem now follows from Eqs. (15) and (17).
3.3 Discussion
There are a number of discussion points to address regarding networks with class-level aggregation as presented in this section.
1. Loss rate on a path: Our analysis assumes that the arrivals from a flow j at each node on its path are characterized by A^{C_mq}, independent of previous losses. So, our bounds do not quantify the losses that occur at consecutive nodes. As a consequence, we assume that losses on a path of nodes occur independently of losses upstream on the path. Consider a sequence of nodes m_1, m_2, ..., m_L, with C_{m_l q} the set of class-q flows at each node m_l. With the assumptions from Theorem 2, the loss rate for class q on this path is bounded by:
  Loss_{path,q} ≤ Σ_{l=1}^{L} Loss_{m_l q},   (18)

where Loss_{m_l q} denotes the bound on the loss rate of class q at node m_l given by Theorem 2.
2. Calculation of c_mq and signaling overhead: The calculation of c_mq and B_mq depends on the cardinality of the set C_mq. Each time a new flow is added to the network, the allocation of c_mq and B_mq must be modified at all nodes on the route of the new flow. However, compared to traditional
QoS approaches which maintain per-flow state information, e.g., IntServ [6] and ATM UNI 4.0 [1],
the signaling overhead is small.
3. Dynamic Routing: In our discussion we have assumed that all traffic of a given class, traveling from
specific network ingress to network egress points, traverses the network on the same fixed route. The
assumption of fixed routes can be relaxed using mechanisms such as PATH messages in RSVP [16].
Figure 4: Network with path-level aggregation. An end-to-end path for a traffic class defines a path or pipe.
The figure depicts a total of six pipes, and shows the buffers of four of them, labeled p2, q1, r1, r2. For each
pipe, the aggregate traffic is policed at the network entrance by a conditioner. At each node there is a separate buffer
for each pipe with traffic at this node. Node buffers are dimensioned such that no overflows occur.
4. Maximum delay bound is incurred at each node: Due to delay jitter control, traffic experiences
worst-case delays at all but the last node on a route, which leads to high buffer requirements. Also,
the delay bounds in a network with class-level aggregation are not independent of the number of nodes
traversed.
5. Discrete packet size: Since actual traffic is sent in discrete-sized packets, performance guarantees
given to fluid flow traffic must be matched to guarantees for actual traffic. For rate-based scheduling
algorithms the issues involved in transforming guarantees on fluid flow traffic for packet-level traffic
are well understood [31, 32, 44]. For example, fluid flow guarantees have been used in the IETF to
specify a guaranteed service class for packet-level traffic in the Integrated Services architecture [38].
buffer for each pipe with traffic at this node. Thus, flows in the same class are multiplexed in the same buffer only if they have the same end-to-end path.
With path-level aggregation, traffic from the same class is multiplexed only if it is traveling on the same
path. Also, networks with path-level aggregation perform traffic control separately for each pipe, and,
hence, exploit statistical multiplexing gain only for flows in the same pipe. In contrast to networks with
class-level aggregation, network nodes in the path-level scheme do not perform delay jitter control.
We use C_pq to denote the set of class-q flows which travel on a path p of nodes, where a path is a unique loop-free sequence of nodes which starts and ends with edge nodes.6
An important part of path-level aggregation is that the traffic from C_pq which arrives to the network is conditioned using the effective envelope G^{C_pq}(·; ε) as the policing function. In other words, if A^{C_pq}(t, t + τ) denotes the aggregate traffic from class C_pq that is admitted into the network in the time interval [t, t + τ), the policing function ensures that

  A^{C_pq}(t, t + τ) ≤ G^{C_pq}(τ; ε),  ∀t ≥ 0, ∀τ ≥ 0.   (19)
Traffic in excess of G^{C_pq} is discarded by the conditioner or marked with a lower priority, e.g., as best-effort traffic.
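A conditioner of this kind must check every admitted burst against the envelope over a set of window lengths. The following discrete-event sketch is our own illustration (class and method names are our choices, and a real conditioner would typically mark or buffer excess traffic rather than reject it outright):

```python
from collections import deque

class EnvelopePolicer:
    """Admit arrivals only while the admitted traffic stays below an
    envelope G over every window length in `taus` (cf. Eqn. (19))."""
    def __init__(self, G, taus):
        self.G = G
        self.taus = taus
        self.history = deque()           # (arrival time, admitted bits)

    def offer(self, t, bits):
        horizon = max(self.taus)
        # forget admitted traffic older than the largest window
        while self.history and self.history[0][0] < t - horizon:
            self.history.popleft()
        for tau in self.taus:
            window = sum(b for (u, b) in self.history if u > t - tau)
            if window + bits > self.G(tau):
                return False             # non-compliant: drop or mark
        self.history.append((t, bits))
        return True
```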
For this architecture, Theorem 3 provides a bound for the traffic from a pipe which is dropped at the
network entrance.
Theorem 3 Given a pipe C_pq whose arrivals to the network are policed according to Eqn. (19). Then the probability that an arrival to the conditioner results in dropped (or marked) traffic is bounded by ε,   (20)

provided that

  Pr{ sup_{τ>0} [ A^{C_pq}(t − τ, t) − G^{C_pq}(τ; ε) ] > 0 } ≈ sup_{τ>0} Pr{ A^{C_pq}(t − τ, t) − G^{C_pq}(τ; ε) > 0 }.   (21)
Proof: The assumption in Eqn. (21) is the same as in Eqn. (10). The proof is conducted in a similar fashion
as the proof of Theorem 2. The probability that an arrival results in dropped traffic is given by
  Pr{ ∃ τ > 0 : A^{C_pq}(t − τ, t) > G^{C_pq}(τ; ε) }
    = Pr{ sup_{τ>0} [ A^{C_pq}(t − τ, t) − G^{C_pq}(τ; ε) ] > 0 }   (22)
    ≈ sup_{τ>0} Pr{ A^{C_pq}(t − τ, t) − G^{C_pq}(τ; ε) > 0 }   (23)
    ≤ ε.   (24)
The maximum amount of traffic that is lost at any instant in time t is given by
6 We say a path is loop-free if no node appears more than once in the path. We say a path is unique if it differs from every other path in at least one position.
  sup_{τ>0} { A^{C_pq}(t − τ, t) − G^{C_pq}(τ; ε) }.   (25)

The rate c_pq allocated to pipe C_pq is set to the smallest value such that

  sup_{τ>0} { G^{C_pq}(τ; ε) − c_pq τ } ≤ c_pq d_pq,   (26)

where d_pq is the end-to-end delay bound for C_pq, and we set the buffer space B_pq according to

  B_pq = c_pq d_pq.   (27)
The next theorem states properties for the end-to-end performance of flows in C_pq with end-to-end delay bound d_pq.
Theorem 4 Given a pipe C_pq with rate and buffer allocations according to Eqs. (26) and (27) at each node on its path. Then traffic from C_pq which is admitted at the network entrance is not dropped inside the network and satisfies the end-to-end delay bound d_pq.
Proof: The proof uses Cruz's service curves [10]. Assume the flows in C_pq take the route k = m_1, m_2, ..., m_K = l. Allocating a capacity of c_pq to C_pq at each node is equivalent to guaranteeing a service curve of

  S_{m_i}(τ) = c_pq τ,  ∀τ ≥ 0 and ∀i (1 ≤ i ≤ K).   (28)

The network service curve for the path, S_p(τ) = (S_{m_1} ∗ S_{m_2} ∗ ... ∗ S_{m_K})(τ) = c_pq τ, follows from repeated application of the convolution operator. According to Theorem 4 in [10], the maximum end-to-end delay, denoted as D_max, is bounded in terms of the network service curve S_p and the bound G^{C_pq} on the arrivals by

  D_max ≤ sup_{τ≥0} inf{ δ ≥ 0 | G^{C_pq}(τ; ε) ≤ c_pq (τ + δ) }.   (31)
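The bound of Eqn. (31) is easy to evaluate: for each time scale τ, the infimum is simply max(0, G(τ; ε)/c_pq − τ). A sketch of our own (helper name and time-scale grid are our choices):

```python
def max_end_to_end_delay(G, c, taus):
    """D_max per Eqn. (31): for each time scale tau, the smallest delta
    with G(tau) <= c*(tau + delta) is max(0, G(tau)/c - tau); take the
    supremum over the grid `taus`."""
    return max(max(0.0, G(tau) / c - tau) for tau in taus)
```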
Figure 5: Comparison of the class-level and path-level design, with a common end-to-end delay bound.
For provisioning QoS guarantees in a general network, the trade-off presented by the class-level and
path-level approaches is not immediately obvious. Thus, a numerical evaluation of the approaches is needed
to determine which of the two presented approaches is more effective.
5 Numerical Evaluation
In this section, we present numerical examples to compare the ability of class-level and path-level aggregation to support statistical end-to-end delay guarantees. Our discussion so far has pointed out the trade-offs
presented by the two approaches. Class-level aggregation multiplexes larger groups of flows than the path-level approach and is thus expected to yield a better multiplexing gain. On the other hand, delay jitter control
in networks with class-level aggregation may result in higher delay bounds.
In our numerical examples, we perform a comparison of four different approaches for provisioning QoS.
Deterministic QoS: Here, all nodes perform a rate-based scheduling algorithm, such as GPS [31].
If C_mq is the set of class-q flows at node m, then the rate allocated for this class at node m is the smallest value c_mq such that

  sup_{τ>0} { A^{C_mq}(τ) − c_mq τ } ≤ c_mq d_q,   (32)

where A^{C_mq}(τ) = |C_mq| A_q(τ). From [32] we know that, if all nodes m allocate a bandwidth of c_mq, all class-q traffic will satisfy an end-to-end delay bound of d_q.
Statistical QoS with class-level aggregation: The bandwidth and buffer allocation is as given in
Eqs. (7) and (8). The QoS guarantee for class-q is as stated in Theorem 2. The calculation of the
effective envelope is done as discussed in Section 2.2.
Statistical QoS with path-level aggregation: The bandwidth and buffer allocation is as given in
Eqs. (26) and (27). The QoS guarantee for class-q is as stated in Theorems 3 and 4.
Average Rate Allocation: This scheme allocates bandwidth equal to the average traffic rate for flows.
So, if C_mq is the set of class-q flows at node m, node m allocates a rate equal to |C_mq| ρ_q for class q. (Recall that ρ_q = lim_{τ→∞} A_q(τ)/τ.) Average rate allocation only guarantees finite delays and average throughput, but no per-flow or per-class QoS.
As an admission control condition, we require for all schemes above that the total allocated bandwidth on a
link must not exceed the link capacity.
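Under this admission control condition, the quantity compared across schemes is the largest number of flows whose total allocation fits on the link. A small sketch of our own (not from the paper; it assumes the allocation rule `rate_for_N` is nondecreasing in N, which holds for all four schemes above):

```python
def max_admissible_flows(link_capacity, rate_for_N):
    """Largest N with rate_for_N(N) <= link_capacity, found by doubling
    followed by bisection; rate_for_N must be nondecreasing in N."""
    if rate_for_N(1) > link_capacity:
        return 0
    lo, hi = 1, 2
    while rate_for_N(hi) <= link_capacity:   # grow until infeasible
        lo, hi = hi, hi * 2
    while hi - lo > 1:                       # bisect the boundary
        mid = (lo + hi) // 2
        if rate_for_N(mid) <= link_capacity:
            lo = mid
        else:
            hi = mid
    return lo
```

For average rate allocation, for instance, `rate_for_N` is simply N·ρ_q; for the statistical schemes it is the bandwidth obtained from the effective envelope.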
We consider four classes of traffic, and assume that traffic flows in each class are regulated by a peak-rate constrained leaky bucket with parameters (σ_q, ρ_q, P_q), and deterministic envelope A_q(τ) = min(P_q τ, σ_q + ρ_q τ) for class q. The parameters for the flow classes are given in Table 1. We set ε = 10^{−6} in all our examples. The parameters are similar to those used in other studies on regulated adversarial traffic [15, 27, 34].
  Class  Burstiness  Delay  Burst Size     Mean Rate    Peak Rate    End-to-End Delay
                            σ_q (bits)     ρ_q (Mbps)   P_q (Mbps)   Bound d_q (msec)
  1      low         low    10000          0.15         6.0          10
  2      low         high   10000          0.15         6.0          40
  3      high        low    100000         0.15         6.0          10
  4      high        high   100000         0.15         6.0          40

Table 1: Traffic parameters of the four flow classes.
Figure 6: Number of admitted flows as a function of the link capacity, for (a) Class 1, (b) Class 2, (c) Class 3, and (d) Class 4. Each plot compares average rate allocation, class-level aggregation (routes of 1, 5, and 10 nodes), path-level aggregation (1, 10, 50, and 100 paths), and deterministic QoS, for link capacities up to 700 Mbps.
Loss Rates: In our example, since A^{C_mq}(τ) = |C_mq| A_q(τ), the loss rate for a class, normalized by the mean traffic of the flows, is bounded by

  Loss < ( ε / (|C_mq| ρ_q) ) sup_{τ>0} (1/τ) { A^{C_mq}(τ) − G^{C_mq}(τ; ε) }.   (33)
For path-level aggregation, the bound evaluates to 6.7 × 10^{−8} per flow, both for Classes 1 and 2 and for Classes 3 and 4; for class-level aggregation, the bound is evaluated for paths of 1, 2, and 10 nodes.
Table 2: Normalized loss rate per flow. (Note that the loss rate for class-level aggregation is dependent on the path
length.)
In Table 2 we give the results for bounds on the normalized loss rate for all classes used in our examples.
The total loss rate is small in all cases, and is of the same order as ".
Figure 7: Results for 1/c_eff, where c_eff is the effective rate of a flow, as a function of the transmission rate of the link and the number of paths in the network, for (a) Class 1, (b) Class 2, (c) Class 3, and (d) Class 4. Each plot compares class-level aggregation (routes of 1, 2, and 10 nodes), path-level aggregation, and the deterministic service.
justified by the high level of achievable statistical multiplexing gain. In the alternative, path-level scheme,
there is no need for delay jitter control, since flows in this design are multiplexed only if they are of the
same class and they share the same path through the network. Consequently, statistical multiplexing gain
is perceived only at the network ingress points, at a much lower level of aggregation. The tradeoff between
the two designs is one of enforced delay (and design complexity) in the form of delay jitter control for high
levels of achievable statistical multiplexing gain.
Our numerical results indicate that the increased statistical multiplexing gain achievable with class-level
aggregation is worth the price paid in terms of enforced delay. In the path-level aggregation design, as the
number of paths in the network increases, the achievable statistical multiplexing gain quickly diminishes to
the achievable multiplexing gain in making deterministic (worst-case) QoS guarantees. Thus, we assert that
the class-level scheme is the preferred approach for implementing statistical end-to-end delay guarantees.
There are a number of important issues that have to be addressed before the class-level design can be
implemented in a real network. First, we must reconcile our assumption of fixed routes with dynamic routing
which is prevalent in the Internet today. We need to develop appropriate data structures and algorithms that
allow rapid computation of guaranteed rates and buffer allocations within the network. We need to formulate
a packet-level version of our fluid-model constructs, in order for a real implementation to be possible. Here,
we expect that the well-known approach from [32] will be sufficient. Finally, we plan to address the issue
of how to characterize traffic flows in terms of worst-case bounding functions. Our development so far has
rested on the assumption that a worst-case, subadditive bounding function is available for each traffic flow.
If bounding functions for flows are not available a priori, it becomes essential to estimate these bounds from
data. Estimating a traffic bound from measurements is one topic of our ongoing research.
References
[1] ATM Forum, ATM Forum Traffic Management Specification Version 4.0, April 1996.
[2] M. Andrews. Probabilistic end-to-end delay bounds for earliest deadline first scheduling. In Proc. IEEE Infocom 2000, 2000. (To appear).
[3] M. Andrews and L. Zhang. Minimizing end-to-end delay in high-speed networks with a simple coordinated schedule. In Proc. IEEE Infocom '99, March 1999.
[4] R. Boorstyn, A. Burchard, J. Liebeherr, and C. Oottamakorn. Statistical multiplexing gain of link scheduling algorithms in QoS networks. Technical Report CS-99-21, University of Virginia, Computer Science Department, June 1999.
[5] R. Boorstyn, A. Burchard, J. Liebeherr, and C. Oottamakorn. Effective envelopes: Statistical bounds on multiplexed traffic in packet networks. In Proc. IEEE Infocom 2000, March 2000. (To appear).
[6] R. Braden, D. Clark, and S. Shenker. Integrated services in the internet architecture: an overview. IETF RFC 1633, July 1994.
[7] C. S. Chang. Stability, queue length, and delay of deterministic and stochastic queueing networks. IEEE Transactions on Automatic Control, 39(5):913–931, May 1994.
[8] J. Choe and N. B. Shroff. A central-limit-theorem-based approach for analyzing queue behavior in high-speed networks. IEEE/ACM Transactions on Networking, 6(5):659–671, October 1998.
[9] R. Cruz. A calculus for network delay, Part I: Network elements in isolation. IEEE Transactions on Information Theory, 37(1):114–121, 1991.
[10] R. Cruz. Quality of service guarantees in virtual circuit switched networks. IEEE Journal on Selected Areas in Communications, 13(6):1048–1056, August 1995.
[11] R. L. Cruz. A calculus for network delay, Part II: Network analysis. IEEE Transactions on Information Theory, 37(1):132–141, January 1991.
[12] B. T. Doshi. Deterministic rule based traffic descriptors for broadband ISDN: Worst case behavior and connection acceptance control. In International Teletraffic Congress (ITC), pages 591–600, 1994.
[13] N. G. Duffield, P. Goyal, A. Greenberg, P. Mishra, K. K. Ramakrishnan, and J. E. Van der Merwe. A flexible model for resource management in virtual private networks. In Proceedings of ACM SIGCOMM '99, pages 95–108, Boston, MA, September 1999.
[14] A. Elwalid and D. Mitra. Design of generalized processor sharing schedulers which statistically multiplex heterogeneous QoS classes. In Proceedings of IEEE INFOCOM '99, pages 1220–1230, New York, March 1999.
[15] A. Elwalid, D. Mitra, and R. Wentworth. A new approach for allocating buffers and bandwidth to heterogeneous, regulated traffic in an ATM node. IEEE Journal on Selected Areas in Communications, 13(6):1115–1127, August 1995.
[16] R. Braden et. al. Resource reservation protocol (rsvp) version 1 functional specification. IETF RFC 2205,
September 1997.
[17] S. Blake et. al. An architecture for differentiated services. IETF RFC 2475, December 1998.
[18] D. Ferrari and D. Verma. A scheme for real-time channel establishment in wide-area networks. IEEE Journal
on Selected Areas in Communications, 8(3):368379, April 1990.
[19] P. Goyal and H. M. Vin. Statistical delay guarantee of virtual clock. In Proc. of IEEE Real-time Systems
Symposium (RTSS), December 1998.
[20] G. Kesidis and T. Konstantopoulos. Extremal shape-controlled traffic patterns in high-speed networks. Technical
Report 9714, ECE Technical Report, University of Waterloo, December 1997.
[21] G. Kesidis and T. Konstantopoulos. Extremal traffic and worst-case performance for queues with shaped arrivals.
In Proceedings of Workshop on Analysis and Simulation of Communication Networks, Toronto, November 1998.
[22] E. Knightly. H-BIND: A new approach to providing statistical performance guarantees to VBR traffic. In
Proceedings of IEEE INFOCOM'96, pages 1091–1099, San Francisco, CA, March 1996.
[23] E. Knightly. Enforceable quality of service guarantees for bursty traffic streams. In Proceedings of IEEE INFOCOM'98, pages 635–642, San Francisco, March 1998.
[24] E. W. Knightly and Ness B. Shroff. Admission control for statistical QoS: Theory and practice. IEEE Network,
13(2):20–29, March/April 1999.
[25] J. Kurose. On computing per-session performance bounds in high-speed multi-hop computer networks. In ACM
SIGMETRICS'92, pages 128–139, 1992.
[26] J. Liebeherr, S. Patek, and E. Yilmaz. Tradeoffs in designing networks with end-to-end statistical QoS guarantees.
Technical report, University of Virginia, 2000. In preparation.
[27] F. LoPresti, Z. Zhang, D. Towsley, and J. Kurose. Source time scale and optimal buffer/bandwidth tradeoff for
regulated traffic in an ATM node. In Proceedings of IEEE INFOCOM'97, pages 676–683, Kobe, Japan, April
1997.
[28] A. Mukherjee and A. Adas. An MPEG-2 traffic library and its properties.
http://www.knoltex.com/papers/mpeg2Lib.html, February 1998.
[29] P. Oechslin. On-off sources and worst case arrival patterns of the leaky bucket. Technical report, University
College London, September 1997.
[30] A. Papoulis. Probability, Random Variables, and Stochastic Processes. 3rd edition. McGraw-Hill, 1991.
[31] A. Parekh and R. Gallager. A generalized processor sharing approach to flow control: The single node case.
IEEE/ACM Transactions on Networking, 1(3):344–357, June 1993.
[32] A. K. Parekh and R. G. Gallager. A generalized processor sharing approach to flow control in integrated services
networks: The multiple node case. IEEE/ACM Transactions on Networking, 2(2):137–150, April 1994.
[33] J. Qiu and E. Knightly. Inter-class resource sharing using statistical service envelopes. In Proceedings of IEEE
INFOCOM'99, pages 36–42, March 1999.
[34] S. Rajagopal, M. Reisslein, and K. W. Ross. Packet multiplexers with adversarial regulated traffic. In Proceedings
of IEEE INFOCOM'98, pages 347–355, San Francisco, March 1998.
[35] M. Reisslein, K. W. Ross, and S. Rajagopal. Guaranteeing statistical QoS to regulated traffic: The multiple
node case. In Proceedings of 37th IEEE Conference on Decision and Control (CDC), pages 531–531, Tampa,
December 1998.
[36] M. Reisslein, K. W. Ross, and S. Rajagopal. Guaranteeing statistical QoS to regulated traffic: The single node
case. In Proceedings of IEEE INFOCOM'99, pages 1061–1062, New York, March 1999.
[37] J. Roberts, U. Mocci, and J. Virtamo (Eds.). Broadband Network Traffic: Performance Evaluation and Design
of Broadband Multiservice Networks. Final Report of Action COST 242 (Lecture Notes in Computer Science,
Vol. 1152). Springer-Verlag, 1996.
[38] S. Shenker, C. Partridge, and R. Guerin. Specification of guaranteed quality of service. IETF RFC 2212,
September 1997.
[39] D. Starobinski and M. Sidi. Stochastically bounded burstiness for communication networks. In Proceedings of
IEEE INFOCOM'99, pages 36–42, March 1999.
[40] I. Stoica and Hui Zhang. Providing guaranteed services without per flow management. In Proceedings of ACM
SIGCOMM'99, pages 81–94, Boston, MA, September 1999.
[41] D. Verma, H. Zhang, and D. Ferrari. Guaranteeing delay jitter bounds in packet switching networks. In Proceedings of TriComm'91, pages 35–46, Chapel Hill, North Carolina, April 1991.
[42] D. Wrege, E. Knightly, H. Zhang, and J. Liebeherr. Deterministic delay bounds for VBR video in packet-switching networks: Fundamental limits and practical tradeoffs. IEEE/ACM Transactions on Networking,
4(3):352–362, June 1996.
[43] O. Yaron and M. Sidi. Performance and stability of communication networks via robust exponential bounds.
IEEE/ACM Transactions on Networking, 1(3):372–385, June 1993.
[44] H. Zhang. Providing end-to-end performance guarantees using non-work-conserving disciplines. Computer
Communications: Special Issue on System Support for Multimedia Computing, 1995.
[45] H. Zhang and E. Knightly. Providing end-to-end statistical performance guarantees with bounding interval
dependent stochastic models. In Proceedings of ACM SIGMETRICS'94, pages 211–220, Nashville, TN, May
1994.
[46] Z.-L. Zhang, Z. Liu, J. Kurose, and D. Towsley. Call admission control schemes under the generalized processor
sharing scheduling discipline. Telecommunication Systems, 7(1-3):125–152, July 1997.
[47] Z.-L. Zhang, D. Towsley, and J. Kurose. Statistical analysis of the generalized processor sharing scheduling discipline. IEEE Journal on Selected Areas in Communications, 13(6):1071–1080, August 1995.