
An efficient, high-performance traffic scheduling and shaping component for ATM systems

S. Dilis*, Gr. Doumenis#, G. Konstantoulakis+, G. Korinthios*, G. Lykakis#, D. Reisis* and G. Synnefakis#

# National Technical University of Athens (NTUA), Telecommunications Lab., Computer Science Division, Electrical Engineering Dept.
* University of Athens, Electronics Lab., Applied Physics Division, Physics Dept.
+ Inaccess Networks S.A.
Athens, Greece
dreisis@telecom.ntua.gr


Abstract: - This paper presents a generic shaper/flow controller that can efficiently schedule traffic
streams in networks using statistical multiplexing. The component can operate on a large number of
traffic streams with various profiles and Quality of Service requirements. The controller uses a calendar
for scheduling and has a low-cost VLSI implementation. Performance measurements in this work, for
70-90% communication link utilization, show the jitter (system impact) under various operating conditions.

Corresponding author: D. I. Reisis, Asst. Professor,
University of Athens
Electronics Lab., Physics Dept.
Zographou, GR-157 84, Athens, Greece
Tel.: +30 1 7276720, +30 1 7721498
Fax: +30 1 7257658
E-mail: reisis@telecom.ntua.gr


Key-Words: - Traffic Shaping, Flow Control, Bandwidth Assignment, Traffic Scheduling, Quality of
Service Provision, Network Performance.


1. Introduction
In broadband telecommunication
networks, statistical multiplexing is
the cornerstone concept leading to
the efficient utilization and sharing
of the network resources by a large
number of concurrent users.
Statistical multiplexing and
bandwidth sharing, though, can
introduce jitter in the traffic,
leading to degradation of the
user-perceived QoS. A few
techniques can be used for traffic
shaping and control at the egress of
a node to assign the available
bandwidth on a fair basis [7], [8],
[13], [14], [15], to (re)shape the
injected traffic and to restore (to
some extent) the original traffic
profile [1], [4]. These techniques
involve the use of intelligent
buffers [10], [16], [17], [18] and/or
calendar organizations, whose flow
controller is optimized for at most
64 connections [2], [3].

This paper presents a
shaper/bandwidth assignment
component, which can efficiently
schedule a large number of traffic
streams (on the order of thousands)
with alternative profiles and QoS.
The presentation of the shaper
controller first analyzes problems
such as jitter (Cell Delay Variation
in ATM), QoS and scheduling,
which arise due to the large number
of traffic streams to be served, as
well as the variety of the classes
they belong to. Previous theoretical
work on scheduling algorithms was
primarily oriented to periodically
occurring tasks in
multiprogramming systems [19],
[20], [21], [22], [23]. As this work
is part of a VLSI component
development (EU funded ESPRIT
IV-26320 R&D project), the paper
presents a single processor
implementation improving on the
characteristics and the performance
of the component.

The traffic shaper/controller can be
used in various ATM based systems
which are required to maintain the
QoS of the active streams,
minimize the system impact (jitter)
and in general preserve the
Network Performance (NP).
Network systems such as large
switches,
concentrators/multiplexers, and
access systems benefit from the
application of the traffic shaper
[11], [12], [25], [27], [28].
Furthermore, the component can be
used in large terminal systems [4],
[9], [15], [18], [24], [26].


2. Problem Statement and Analysis
The generic shaper/traffic controller
is depicted in Figure 1.

Figure 1: Shaper model (T traffic classes, forming T queues, enter the shaper S, which multiplexes them to the output according to the T traffic profiles).
We assume that the incoming
connections form in total T queues.
Each queue corresponds to a
stream and can be either a single
connection (VP/VC or VP), or a
group of connections that have the
same traffic profile. The shaper
uses as input the traffic profile of
each class (stream) and assigns
slots to it. Each stream profile
includes the traffic parameters
contracted with the network and
the stream priority. A traffic profile
can be a PCR (or effective rate)
definition, or more complex (PCR,
max burst size and idle period), or
even the moments of a higher level
distribution. The latter is taken as a
sequence of generic ON/OFF traffic
shaping events [1], [5], [6].

The controller shapes the traffic of
the T active streams according to
the predefined profiles and QoS,
and sends the multiplexed data
towards the Output port of the
system. Each ATM stream (single
connection, group or class) has its
own traffic profile.

The system under consideration
consists of the following parts: (i)
a known number of sources (T);
(ii) each source has a traffic profile
imposing a specific intercell
distance (icd ≤ N), where icd is
assumed to have a uniform pmf,
i.e. source i has intercell distance i;
(iii) the calendar memory locations
(> N), onto which consecutive time
slots are mapped (slots t_u and
t_{u+1} are mapped onto locations
k and k+1). In our case T = 4096
and 100 ≤ icd ≤ 10000 = N.

This work studies the maximum
possible competition caused either
in a single location or in u
consecutive locations. Since the
maximum icd is N, we consider the
probability of k requests on u
consecutive positions of the
calendar at all t, t_i ≤ t ≤ t_{i+N}.

The worst case scenario is to let all
sources initiate transmission, so that
source i, 1 ≤ i ≤ T, requests a single
transmission at time t_x, with
t_i ≤ t_x ≤ t_{i+N/2}.
This case produces the maximal
number of requests per slot, since
repetition of the "faster" sources
(small icds) does not occur.
We then compute the probability of
k requests in u consecutive slots: at
time t_i, the probability that a
source requests a transmission
in one of the u locations is the
probability that this source has
intercell distance icd such that
N-u ≤ icd ≤ N, which is P(s_1) = u/N.
P(s_i) is defined as the probability
of success in the i-th time slot and
P(f_i) as the probability of failure in
the i-th time slot. Similarly, at time
t_{i+1} the probability of a source
having intercell distance icd such
that N-u-1 ≤ icd ≤ N-u is computed
as follows:

P(s_2) = P(s_1)·P(s_2|s_1) + P(f_1)·P(s_2|f_1)
       = (u/N)·(u-1)/(N-1) + ((N-u)/N)·u/(N-1) = u/N.

For each i the result is P(s_i) = u/N,
for 1 ≤ i ≤ N. Therefore, the
probability of having k successes
over N repetitions has the form of
the binomial distribution B(N, p)
with p = u/N and q = (N-u)/N.
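The chain of conditional probabilities above can be checked numerically; the following is a small sketch using exact rational arithmetic (the concrete N and u values are illustrative, with N = 10000 as in the text):

```python
from fractions import Fraction

def success_prob_step2(N, u):
    """P(s2) by conditioning on the first slot, as in the text:
    P(s2) = P(s1)*P(s2|s1) + P(f1)*P(s2|f1)."""
    p1 = Fraction(u, N)  # P(s1) = u/N
    return p1 * Fraction(u - 1, N - 1) + (1 - p1) * Fraction(u, N - 1)

N, u = 10000, 6
assert success_prob_step2(N, u) == Fraction(u, N)  # P(s2) = u/N, as claimed
assert N * Fraction(u, N) == u                     # binomial mean Np = u
```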
Figure 2: Bounds of probability P_r vs. r, for u = 4, 5, 6, 7, 8.
The mean number of requests
(successes) within the
aforementioned sequence of u slots
on the calendar is μ = Np =
N·(u/N) = u.

The probability of exceeding the
mean value u by r is bounded by the
formula [29]:

P{X > u + r} ≤ e^r · (u/(u+r))^(u+r)

The bound is depicted in Figure 2:
with a uniform pmf of the requests
over the calendar and utilization
100%, the number of requests in u
consecutive slots stays within r of
the mean with high probability
(1 - 1/N).
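A sketch of the bound's behavior follows; the exact constant is reconstructed from the standard Chernoff form for a binomial tail with mean u [29], so treat the formula as an illustration rather than a verbatim quote of the paper's equation:

```python
import math

def chernoff_bound(u, r):
    """Chernoff-style bound on P{X > u + r} for a binomial X with
    mean u: e^r * (u / (u + r))**(u + r)."""
    return math.exp(r) * (u / (u + r)) ** (u + r)

# Trivial at r = 0 and decaying in r, as the curves of Figure 2 suggest;
# a larger u shifts the curve to the right (larger bound at the same r).
assert chernoff_bound(4, 0) == 1.0
vals = [chernoff_bound(4, r) for r in range(30)]
assert all(a >= b for a, b in zip(vals, vals[1:]))  # monotone decreasing in r
assert chernoff_bound(8, 24) > chernoff_bound(4, 24)
```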


3. Supported Traffic Classes and Functionality
3.1 Traffic Classes and Priorities
The Shaper can accommodate any
source, which has been activated,
and uses as input a number of
traffic parameters that are computed
according to the traffic contract of
each active connection.

The shaper/flow controller supports
the following traffic profiles
according to the ATM Forum
traffic management 4.0
specification:

- CBR
- Real time VBR (rt-VBR)
- Non-real-time VBR (nrt-VBR)
- ABR
- UBR
- GFR (currently under
investigation in the ATM
Forum)

These traffic types involve
parameters which are either static
for a long period (CBR and rt-
VBR) or vary during the lifetime of
the connection in the scale of bursts
(nrt-VBR), scale of operation when
end-to-end flow control is
concerned (nrt-VBR, minimum
ABR rate) or automatically if there
is bandwidth available (UBR,
GFR). The system controller
initializes the shaper and updates
the dynamic profiles.

We use two levels of priorities
to generate credits: Time Critical
(CBR and rt-VBR streams) and
non-Time Critical (nrt-VBR, ABR,
minimum UBR); the remaining
slots can be assigned (in addition)
to UBR or GFR streams. However,
the system controller is able to
assign any priority number to the
stream according to the user traffic
contract. According to the above
definitions, the sources are
classified into two groups:

Group1: CBR, rt-VBR, nrt-VBR,
ABR traffic classes. Each source is
described using a traffic shaping
profile (long or short time period)
and a priority number. The traffic
shaping profile may consist of a
few parameters only (peak and max
burst size shaping) or a longer
sequence of traffic boundaries. This
traffic shaping profile represents the
sustainable cell rate (effective
bandwidth) that is allocated to each
stream.

Group2: UBR and GFR. UBR and
GFR sources can use the slots
allocated to but unused by the other
sources. The bandwidth guaranteed
sources (Group 1) leave some cell
slots free since the effective
bandwidth allocated to these by the
call controller, is higher than their
average rate [1].

In the following Section we
concentrate mainly on the credit
generation, assuming that the
system is able to handle up to T
traffic profiles, K for Group1 and
M for Group2, where K + M ≤ T,
0 ≤ K ≤ T, 0 ≤ M ≤ T.

3.2 Functional Description
The shaper is a single calendar
architecture. The sources are
distinguished according to their
profile and priority. The calendar
assigns credits for the K Group1
sources and one aggregated credit
for all Group2 sources (the CBR
part of the GFR sources again
belongs to Group1 and has one tag
per stream). A generic approach is
to use more than one credit id for
the Group2 sources; however, the
latter does not change the analysis
and evaluation of the system.

Note here that using a set of
counters (one counter per
connection) in the case of a large
number of traffic streams results in
a prohibitive implementation cost
as well as possible PCR violation.

The generic functional model of the
flow controller is presented in
Figure 3. There are three memory
components, namely, the
Traffic_Descriptor_Table, the
Calendar and the Write FIFO.

The Traffic Descriptor Table is a
table of size T that holds the traffic
descriptors (Intercell Distance ID,
Number of Cells NC, Silence
Duration SD, Priority Pr) of the
connections. A fifth parameter
(Cnt) is used for counting the
remaining cells of the burst being
transmitted.
The Calendar is a circular buffer of
size N (N > T) that holds the
multiplexed profiles of the outgoing
traffic for the next N cell-slots.
Each entry in the Calendar
represents a cell-slot; entry i stores
the TAG t of the connection that is
scheduled to transmit at slot i. If
the entry is 0 then the slot is not
allocated to any Group1 source, so
it can be assigned to Group2
sources.

The role of the Write_FIFO is to
decouple the deterministic reading
of the Calendar (once every
cell-slot) from its stochastic
writing. It contains the connections
that are waiting to be scheduled.
Each entry consists of the TAG and
the position Pos of the slot that the
connection should be assigned to.
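The three memories can be sketched in software as follows. This is a minimal illustration only: the VLSI component stores these structures in dedicated RAMs, and the sizes and dictionary-based indexing here are the sketch's own choices, not the paper's:

```python
from dataclasses import dataclass

# Software sketch of the three memories of Figure 3.
@dataclass
class TrafficDescriptor:
    ID: int       # Intercell Distance
    NC: int       # Number of Cells per burst
    SD: int       # Silence Duration between bursts
    Pr: int       # Priority
    Cnt: int = 0  # remaining cells of the burst being transmitted

N = 16                          # calendar size (N > T); illustrative
calendar = [0] * N              # entry i: TAG scheduled at slot i, 0 = free
descriptors = {1: TrafficDescriptor(ID=3, NC=2, SD=5, Pr=1)}  # indexed by TAG
write_fifo = []                 # pending (TAG, Pos) pairs awaiting scheduling
read_ptr = write_ptr = 0

assert calendar.count(0) == N   # all slots initially free
```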

Figure 3: Functional Model (Traffic Descriptor Table of size T with fields ID, NC, SD, Cnt, Pr; circular Calendar of size N accessed by a Read Pointer and a Write Pointer; Write FIFO of (TAG, Pos) entries).
The Calendar is accessed by two
pointers: the Read Pointer points
to the current slot that is being
transmitted. The Write Pointer
scans the Calendar in order to
find an available slot for scheduling
a connection.

3.2.1 Single processor organization and operations
The calendar organization has been
designed targeting an efficient low
cost implementation. This section
shows the operations on the
calendar realized sequentially by a
single processing element and a
memory storing the Traffic
Descriptor Table.

The Calendar read operation is
shown in Figure 4. The operation
takes place once per cell slot. As
soon as the Read Pointer reads a
valid TAG, a SendCell command
is issued to the system's Memory
Manager in order to transmit the
next cell of the connection
identified by TAG. A TAG equal to
0 defines an empty entry in the
Calendar, so a cell slot is available
for Group2 active streams. The new
position for TAG is calculated and
scheduled in the Write_FIFO. The
entry just read is marked as empty.
Finally, the Read Pointer is
advanced by one.

Wait for Enable
TAG := Calendar(Read Pointer)
if TAG ≠ 0:
    SendCell(TAG)                -- command to the Memory Manager
    Calculate next position Pos
    WriteFIFO(TAG, Pos)
Calendar(Read Pointer) := 0
Read Pointer := (Read Pointer + 1) mod N

Figure 4: Calendar Read Operation
The calculation of the next position
for TAG is shown in Figure 5.
Initially the Cnt field is checked. If
it is 0 then the burst period has
ended, therefore the next slot of the
connection is scheduled after
SD + ID slots and Cnt is reset to
NC. If Cnt is greater than 0 then a
burst period is in progress,
therefore the next slot of the
connection is scheduled after ID
slots and Cnt is decreased by 1.

if Cnt = 0:                      -- burst period has ended
    Pos := (Read Pointer + SD + ID) mod N
    Cnt := NC
else:                            -- burst in progress
    Pos := (Read Pointer + ID) mod N
    Cnt := Cnt - 1

Figure 5: Next Position Calculation
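The next-position calculation of Figure 5 can be expressed as a short routine. This is a sketch: the descriptor object and the concrete parameter values are illustrative:

```python
from types import SimpleNamespace

def next_position(read_ptr, d, N):
    """Next-position calculation of Figure 5; d carries the ID, NC, SD
    and Cnt fields of the connection's traffic descriptor."""
    if d.Cnt == 0:
        # Burst ended: wait out the silence plus one intercell distance,
        # then rearm the burst counter.
        pos = (read_ptr + d.SD + d.ID) % N
        d.Cnt = d.NC
    else:
        # Burst in progress: next cell one intercell distance away.
        pos = (read_ptr + d.ID) % N
        d.Cnt -= 1
    return pos

d = SimpleNamespace(ID=3, NC=2, SD=5, Cnt=0)  # illustrative values
p1 = next_position(0, d, 16)   # burst over: (0 + 5 + 3) % 16 = 8, Cnt := 2
p2 = next_position(8, d, 16)   # bursting:   (8 + 3) % 16 = 11, Cnt := 1
assert (p1, p2, d.Cnt) == (8, 11, 1)
```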

The above operation does not take
into account the fact that the
position of the next slot may
already be occupied by another
TAG. Therefore, a parallel
mechanism is introduced (the write
operation) which rearranges the
tags in the Calendar, based on their
priority, in order to fit the new one.

The write operation flow chart is
shown in Figure 6. If the
Write_FIFO is not empty, a new
TAG is read along with its pre-
calculated position (Pos). Starting
from Pos, the Write Pointer scans
the Calendar forward until an
empty slot, or a slot occupied by a
connection with lower priority, is
found. In the first case the TAG is
entered in the slot and the scanning
ends. In the second case the TAG of
the new connection is placed in the
slot of the old one and the algorithm
re-executes for the connection that
owned the overwritten slot. In the
worst case, the number of algorithm
iterations can be much larger than
the number that can be
accommodated within the period of
a cell slot. This justifies the role of
the FIFO in decoupling the
Calendar read and write operations.

Wait until Write FIFO not empty
ReadFIFO(TAG, Pos)
Write Pointer := Pos
loop:
    TAG_c := Calendar(Write Pointer)
    if TAG_c = 0:                          -- empty slot: done
        Calendar(Write Pointer) := TAG
        exit loop
    if Pr_c > Pr:                          -- occupant has lower priority
        Calendar(Write Pointer) := TAG     -- evict it
        TAG := TAG_c                       -- and reschedule the evictee
    Write Pointer := (Write Pointer + 1) mod N

Figure 6: Calendar Write Operation
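The scan-and-evict write operation can be sketched as follows. The direction of the priority comparison (here, a larger number means higher priority) is a convention chosen for this sketch; the paper only requires a fixed ordering:

```python
def calendar_write(calendar, prio, tag, pos):
    """Write operation of Figure 6 for one Write_FIFO entry: scan forward
    from pos; an empty slot ends the scan, while a lower-priority occupant
    is evicted and rescheduled further on. prio maps TAG -> priority."""
    wp = pos
    while True:
        occupant = calendar[wp]
        if occupant == 0:               # empty slot: place the tag, done
            calendar[wp] = tag
            return wp
        if prio[tag] > prio[occupant]:  # new connection wins the slot
            calendar[wp] = tag
            tag = occupant              # keep scanning for the evictee
        wp = (wp + 1) % len(calendar)

calendar = [0] * 8
prio = {1: 1, 2: 2}                    # TAG 2 has the higher priority
calendar[3] = 1                        # low-priority TAG 1 holds slot 3
calendar_write(calendar, prio, 2, 3)   # TAG 2 contests the same slot
assert calendar[3] == 2                # TAG 2 took the contested slot
assert calendar[4] == 1                # TAG 1 was pushed one slot later
```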
4. Performance Analysis
To measure the performance of the
proposed traffic shaper/controller
under different operating
conditions, the calendar
multiplexing algorithm has been
subjected to extensive simulations.
The main issues that affect its
performance are the number of
traffic sources that are multiplexed
and the loading of the physical link.
4.1 Shaper's Performance for Various Numbers of Sources
A set of scenarios was simulated in
which various numbers of traffic
sources are multiplexed, while the
physical link's loading remained
constant. The number of traffic
sources affects the competition a
traffic source faces when it
requests a slot on the calendar. The
probability of finding a sequence
of occupied slots is affected by the
number of entries on the calendar.
Scenarios with 1000, 2500 and
4000 sources were simulated. The
link utilization had a constant value
of 90% in all of them.

Figure 7 illustrates the mean CDV
experienced by a cell in relation to
the number of sources.

Figure 7: Shaper's mean CDV per cell (cell slots) vs. number of sources.
The probability that a cell
experiences a specific value of
CDV during its transmission is
depicted in Figure 8.

Figure 8: CDV PDF (log scale) for 1,000, 2,500 and 4,000 sources.
The mean CDV and the CDV PDF
reflect the total performance of the
shaper, but they do not provide
information about the actual mean
CDV of an individual source. In
these scenarios, each traffic source
is characterized by its rate. No
priorities were assigned, and the
competition resolution algorithm
worked on a FCFS, per calendar
slot, basis. Hence, traffic sources
that have large intercell distance
values (slow sources) are served
first. Sources with small intercell
distance values (fast sources) may
request the same calendar slots
after the slow sources, so the
probability of finding a slot
occupied is increased. This is
depicted in Figure 9, where the fast
sources have much higher mean
CDV values than the slow sources.

Figure 9: Mean CDV per cell (cell slots) vs. source rate, from fastest to slowest source, for 1,000, 2,500 and 4,000 sources.
4.2 Shaper's Performance for Various Loading Scenarios
The shaper's performance is also
affected by the loading of the
physical link, which is defined as
the percentage of time the shaper
transmits cells. The loading maps
directly to the density of sources on
the calendar, and affects the
probability that calendar slots are
found occupied.

A set of scenarios was simulated
for different loads, while the
number of sources remained
constant. Three scenarios were
simulated, with loading 70%, 80%
and 90%. The number of sources
was 2,500 in all scenarios. No
priorities were assigned to the
traffic sources, and the competition
resolution algorithm worked on a
FCFS, per calendar slot, basis.

Figure 10 depicts the shaper's mean
CDV per cell. Figure 11 and Figure
12 present the CDV PDF and the
sources' mean CDV per cell,
respectively.

Figure 10: Shaper's mean CDV per cell (cell slots) vs. loading.

Figure 11: CDV PDF (log scale) for 70%, 80% and 90% load.
Figure 12: Sources' mean CDV per cell (cell slots) vs. source rate, from fastest to slowest source, for 70%, 80% and 90% load.
4.3 Priority Based Shaping
The FCFS competition resolution
algorithm indirectly assigns higher
priorities to the slow traffic sources
(see Figure 9 and Figure 12), and
results in unacceptable CDV
values for the fast sources, for
which even a small CDV represents
significant jitter relative to their
transmission rate. To avoid this
behavior, a priority is assigned to
each traffic source. The priority
based competition resolution
algorithm, named fixed priorities,
works as described in Section
3.2.1. The priorities are assigned
randomly to the traffic sources,
reflecting the demand for different
connection types.

To limit the maximum delay of
low-priority sources, two priority
increase algorithms are proposed
and evaluated. Both are based on
the temporary increment of a
source's priority while the Write
Pointer gradually deviates from the
correct slot, i.e. the slot which
corresponds to the intended
transmission time. The temporary
increase of the priority takes place
every time the Write Pointer
advances from the correct slot by a
specific percentage (Pid) of the
source's intercell distance (ID_i).
In the first priority increase
algorithm, named proportional
priority increase, the temporary
priority (Pr_temp_i) of source i
with priority Pr_i is increased by a
percentage (Ppr) of Pr_i every
ID_i·Pid slots. The temporary
priority of a source, when the Write
Pointer has advanced X places from
the correct calendar slot, is:

Pr_temp_i = Pr_i + floor(X / (ID_i·Pid)) · Ppr · Pr_i

After the transmission of the cell,
the temporary priority of source i
is reset to Pr_i.

The second priority increase
algorithm, named constant priority
increase, differs from the previous
algorithm in the amount of the
temporary priority increase. The
temporary priority of a source i is
now increased by a constant factor
Cp every ID_i·Pid slots. The
temporary priority of a source,
when the Write Pointer has
advanced X places from the correct
calendar slot, is:

Pr_temp_i = Pr_i + floor(X / (ID_i·Pid)) · Cp

After the transmission of the cell,
the temporary priority of source i
is reset to Pr_i.
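Both priority increase rules can be sketched as one-line functions. The floor on the step count is this sketch's reading of the formulas (the increase is applied once per full ID_i·Pid slots of deviation), and the concrete parameter values are illustrative:

```python
def temp_priority_proportional(pr, X, ID, Pid, Ppr):
    # Raise Pr by Ppr*Pr for every full ID*Pid slots of deviation X.
    return pr + (X // (ID * Pid)) * Ppr * pr

def temp_priority_constant(pr, X, ID, Pid, Cp):
    # Raise Pr by the constant Cp for every full ID*Pid slots of deviation X.
    return pr + (X // (ID * Pid)) * Cp

# No deviation leaves the base priority untouched; priority then
# rises in steps as the Write Pointer drifts further.
assert temp_priority_proportional(10, 0, 4, 2, 0.5) == 10
assert temp_priority_proportional(10, 16, 4, 2, 0.5) == 20.0  # 2 steps of 5
assert temp_priority_constant(10, 16, 4, 2, 3) == 16          # 2 steps of 3
```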

Figure 13: Shaper's mean CDV per cell (cell slots) for the different competition resolution algorithms (no priorities, fixed priorities, proportional priority increase, constant priority increase).
In Figure 13 the mean CDV values,
per cell, are presented. The mean
CDV when no priorities had been
assigned is also provided for
comparison. The CDV PDF is
given in Figure 14, where it can be
seen that the constant priority
increase policy leads to smaller
values of maximum CDV.

Figure 14: CDV PDF (log scale) for the different competition resolution algorithms.
In Figure 15 and Figure 16 the
mean and the maximum CDV
values per source are presented.
The CDV of the resolution
algorithm without priorities
presents random values, because
the priorities were assigned to the
traffic sources randomly; hence,
when the sources are sorted
according to their priority, their
respective rates are distributed
randomly on the x axis.

Figure 15: Mean CDV per cell (cell slots) per source, sorted by priority, for the different competition resolution algorithms.

Figure 16: Maximum CDV per source, sorted by priority, for the different competition resolution algorithms.
5. Conclusion
In this paper, we presented and
evaluated an efficient component
that can be used for traffic shaping
and control at the egress of ATM
switching or large server terminal
systems. The component uses a
calendar mechanism in order to
assign the allocated link bandwidth
to the active outgoing streams
according to their QoS parameters.
The component's goal is to
maintain the granted QoS for all
active streams while preserving the
Network Performance. The
innovation is that the component
performs efficiently for thousands
of streams and for link utilization
close to 100% of the capacity.

The component has been
extensively evaluated with
simulation for various scenarios.

Currently, the work continues
towards optimizing the presented
design, targeting the following:
- Reduction of the cell gaps in the
generated stream. Gaps appear
when two cells compete for the
same slot and one has to wait; in
that case the write pointer checks
not only forwards but also
backwards from the position Pos.
- Integration of an OFF Calendar
to support profiles with long (more
than N) idle periods.


References:
[1] ATM Forum, "Traffic
Management Specification,"
Version 4.0, April 1996.
[2] ABM PXB 4330, ATM Buffer
Manager, Siemens AG 1999,
http://www.siemens.com/semi
conductor/products/ics/33/ab
m.htm
[3] ATM_SHAP4, Quad 32 class
traffic shaper, ATecoM
GmbH,
http://www.atecom.de/shap4.
htm
[4] CCITT STUDY GROUP
XVIII, "Traffic control and
resource management in B-
ISDN," CCITT
Recommendation I.371,
Geneva, 1992.
[5] R. Jain, "Congestion control
and traffic management in
ATM networks: Recent
advances and a survey,"
Comp. Net. And ISDN Sys.
Vol. 28, pp. 1723-1738, 1996.
[6] N. Mitrou, K. Kontovasilis, E.
N. Protonotarios, "A closed-
form expression for the
effective rate of On/Off traffic
streams and its usage in basic
ATM traffic control
problems," Proc. Int.
Teletraffic Seminar. St.
Petersburg, pp. 423-430,
1995.
[7] P.E. Boyer, F. M. Fabrice, M.
Guillemin, M. J. Servel, J-P.
Coudreuse, "Spacing cells
protects and enhances
utilization of ATM network
links," IEEE network,
September '92, pp. 38-49.
[8] E. Wallmeier, T. Worster,
"The spacing polisher, an
algorithm for efficient peak
rate control in ATM
networks," ISS '92, vol. 2,
A5.5.
[9] H. J. Chao, N. Uzun, "An
ATM queue manager
handling multiple delay and
loss priorities," IEEE/ACM
Transactions on Networking,
Dec. 95, vol. 3, no. 6.
[10] N. Endo, T. Kozaki, T.
Ohuchi, H. Kuwahara, S.
Gohara, "Shared buffer
memory switch for an ATM
exchange," IEEE Trans.
Commun., vol. 41, no. 1, Jan.
1993.
[11] R. Y Awdeh, H. T. Mouftah,
"Survey of ATM switch
architectures," Comput.
Networks and ISDN Systems,
27 (1995), 1567-1613.
[12] D. Bertsekas and R. Gallager,
Data Networks, Prentice
Hall, Englewood Cliffs, NJ,
2nd ed., 1992
[13] M.G. Hluchyj and M.J. Karol,
Queuing in high
performance packet
switching, IEEE J. Selected
Areas Commun., Vol. 6, No.
9, pp 1578-1597, Dec. 1988
[14] J. S. Turner, Queuing analysis
of buffered switching
networks, IEEE Trans.
Commun., Vol 36, No 6, pp
734-743, June 1988
[15] G. Hebuterne and A. Gravey,
A space priority queuing
mechanism for multiplexing
ATM channels, Comput.
Networks and ISDN Systems,
20 (1990) 37-43
[16] G. Konstantoulakis, K
Pramataris, D. Reisis, Real
Time Buffer Management for
High Speed Broadband
Networks, IEEE Conf.
COMCON 95, Rethymnon
Crete, June 1995
[17] K..C. Pramataris, G. E.
Konstantoulakis, D. I. Reisis,
G. I. Stassinopoulos, "An
efficient shared-buffer for
high speed ATM networks",
ICECS'96, Rhodes, Greece,
1996.
[18] G. Kornaros, C. Kozyrakis, P.
Vatsolaki, M. Katevenis,
Pipelined Multi-Queue
Management in a VLSI ATM
Switch Chip with Credit-
Based Flow Control, Proc.
17th Conference on Advanced
Research in VLSI
(ARVLSI'97), pp. 127-144,
Univ. of Michigan, Ann
Arbor, USA, Sept. 1997.
[19] E. L. Lawler, C. U. Martel,
"Scheduling periodic
occurring tasks on multiple
processors," Inform.
Processing Lett., vol. 12, no.
1, pp. 9-12, 1981.
[20] J. Y-T. Leung, "A new
algorithm for scheduling of
periodic, real-time tasks,"
Algorithmica, vol. 4, pp. 209-
219, 1989.
[21] J. Y-T. Leung, M. L. Merrill,
"A note on preemptive
scheduling of periodic, real-
time tasks, "Inform.
Processing Lett. Vol. 11, pp.
115-118. 1980.
[22] J. Y-T. Leung, J. Whitehead,
"On the complexity of fixed-
priority scheduling of
periodic, real-time tasks,"
Perform. Eval., vol. 2, pp.
237-250, 1982.
[23] C. L. Liu, J. W. Layland,
"Scheduling algorithms for
multiprogramming systems in
a hard-real-time
environment," J. ACM, vol.
20, pp. 46-61, 1973.
[24] B. Zheng, M. Atiquzzaman,
"Traffic management of
multimedia over ATM
networks," IEEE
Communications Magazine,
January 1999.
[25] M. Graf, "VBR video over
ATM: Reducing network
resource requirement through
endsystem traffic shaping,"
Proc. IEEE INFOCOM '97,
Kobe, Japan, Apr. 7-11 1997,
pp. 48-57.
[26] M. Krunz, "Bandwidth
allocation strategies for
transporting variable-bit-rate
video traffic," IEEE
Communications Magazine,
January 1999.
[27] A. Adas, "Traffic models in
broadband networks," IEEE
Communications Magazine,
vol. 35, no. 7, July 1997, pp.
82-89.
[28] V. S. Frost, B. Melamed,
"Traffic modeling for
telecommunications
networks," IEEE
Communications Magazine,
vol. 32, no. 3, March 1994,
pp. 70-81.
[29] H. Chernoff, "A measure of
asymptotic efficiency for tests
of a hypothesis based on the
sum of observations," Annals
of Mathematical Statistics, 23,
pp. 493-507, 1952.
