Abstract
This paper proposes a scheme for taking advantage of the properties of both connectionless (CL)
and connection-oriented (CO) networks to carry traffic from existing applications. Specifically, it pro-
poses a solution for allowing data generated by endpoints on a CL (IP) network to be redirected to CO
networks if there is an advantage from the user or service provider perspective. The advantage to a
user is that the user can ask for and receive a guaranteed quality of service for a specific flow. The
advantage to the service provider is that bandwidth utilization can be better than in a CL network with
precomputed routes since bandwidth can be dynamically allocated to
flows on an as-needed basis. These advantages only exist when the CO network is operated in a
switched mode (i.e., some connection “setup” actions are performed for each arriving flow) rather
than in a provisioned mode (where all these actions are performed a priori).
This paper addresses the problem of how to internetwork a CL (IP) network with a switched CO
network. Our solution is based on interworking routing schemes of the two networks, “halting” or
“turning around” datagrams during connection setup, and performing protocol conversion rather than
encapsulation (on the user plane) to reduce overheads, which results in a further improvement of
bandwidth utilization. The CO network can be an MPLS (MultiProtocol Label Switching) or RSVP
(Resource reSerVation Protocol) based IP network, a WDM (Wavelength Division Multiplexed) net-
work, an ATM (Asynchronous Transfer Mode) network, or an STM (Synchronous Time Multiplex-
ing) network, such as the telephony network or a SONET network. The choice of a CO network
The paper also presents a detailed discussion of why we propose operating CO networks in a
switched mode for carrying data generated by IP applications. We show that operating any of the
above-listed CO networks in a provisioned mode is less efficient than a network of typical CL IP rout-
ers for the purposes of carrying “bursty” IP traffic. We also consider perceived drawbacks of the
switched mode of operation, i.e., of having to deal with short-lived flows and large call handling capacity requirements.
1 Introduction
Connectionless (CL) networks and connection-oriented (CO) networks have some fundamental dis-
tinguishing features. CO networks are those in which connection setup is performed prior to informa-
tion transfer. In CL networks, no explicit connection setup actions are executed prior to transmitting
data; instead, data packets are routed to their destinations based on information in their headers.
These two types of networks enjoy advantages and disadvantages from both the user perspective and
the service provider perspective. CL networks, for instance, do not suffer the delay and processing
overhead associated with connection setup. In contrast, information about the connections in CO net-
works helps in providing service guarantees and, furthermore, makes it possible to most efficiently use
network resources (e.g., bandwidth) by “switching” them to appropriate connections as they are estab-
lished.
The need to exploit the advantages of both CL and CO networks has long been recognized. Two
examples are the use of SS7 (Signaling System No. 7) CL networks in conjunction with the CO tele-
phony network, and the use of RSVP (Resource reSerVation Protocol [1]) in IP networks. In both these
solutions, applications explicitly choose the networking mode appropriate to their needs.
In this paper, we propose a different way to exploit the advantages of both networking modes (CO
and CL). In our approach, existing applications running on endpoints continue operating in their cur-
rently-used networking mode (CO or CL). We propose that certain network nodes then decide whether
to continue carrying the information in this mode or whether to redirect the traffic to a network of the
opposite networking mode. Not all nodes of the first network need to have the ability to make these
redirect decisions. Only some nodes that have connectivity to both the CL network and the CO network
need this capability. We call such nodes CL-CO gateways or CO-CL gateways based on whether the
traffic is redirected from the CL network to the CO network or in the opposite direction.
This redirect capability is needed if either (i) the users desire service requirements different from
those possible under the currently-used networking modes of their applications, or (ii) the service pro-
vider sees a potential advantage (such as lower cost or improved bandwidth utilization) in using an
alternate networking mode than that assumed by the applications. In the former case, our proposal is to
have users prespecify their new service requirements at subscription time, which are then met using
traffic redirection to offer users their new service requirements without altering applications at their
endpoints. In the latter case, if user service requirements can be met using both networks, traffic can be
redirected to whichever network offers the service provider the greater advantage.
As an example, consider telephone users who prefer that their connections be carried on the Internet
to take advantage of lower costs. This desire can be specified at subscription time. The users, however,
continue using their existing phone devices that have been designed to operate in the CO mode. Tele-
phony-Internet gateways then redirect those users’ traffic from the telephony network on to the Inter-
net. As another example, consider Internet users who want stricter quality-of-service guarantees for
their file transfer application than is currently offered by the Internet. Such requirements are indicated
to the service provider at subscription time while users continue using their existing file transfer appli-
cations assuming the CL mode. In this case, the CL-CO gateways perform the opposite function by
redirecting traffic generated by Internet users on to some CO network that offers the required QoS guar-
antees. In Section 5, we list some of the benefits that service providers see in each mode.
To allow traffic to be redirected between CL and CO networks (in order to exploit the advantages of
both), gateways are needed between networks operating in different net-
working modes. General internetworking configurations with a combination of CO and CL modes can
arise out of necessity (because, for example, only one mode is available in some places) or intentionally,
to exploit the advantages of both modes. Redirecting traffic generated by an endpoint on a CO network
through a CL network can be made to
work since, in a CO network, a call setup is generated, which can then be carried transparently through
the CL network to set up a connection from the far-end CO-CL gateway to the destination endpoint on
the CO network. Data arriving on the connection after it has been set up is carried in the CL mode on
the segment between the CO-CL gateways. On the other hand, accepting data generated by an endpoint
on a CL network at a CL-CO gateway and then setting up a connection in the CO network causes prob-
lems since there is a significant difference in the magnitude of user data arrival rates in current CL net-
works (packets arriving every few microseconds, or less) and current call setup times in CO networks
(order of milliseconds). Buffering packets at the CL-CO gateway, while waiting for a connection to be
set up, is therefore impractical.
In this paper, we focus primarily on the more challenging problem of redirecting traffic generated by
endpoints on a CL network on to a CO network. Our solution has two parts. First, we propose tech-
niques that cause data to flow from the CL network nodes to the CL-CO gateways. This is done by set-
ting up routing tables to take into account “shortest paths” that exist through the CO network. Second,
we address the problem of how traffic (user-plane data) arriving at the CL-CO gateways is handled
(until the desired connection is established in the CO network), given the “long” call setup delays asso-
ciated with CO networks. We propose either (i) “halting” the packets by, for example, “intercepting”
TCP open-connection (Synchronize) segments while setting up the connection, or (ii) “turning around”
a few packets back on to the CL network using, for example, source routing until the connection is set
up. In Section 3, we discuss these and other schemes that fall into these two broad categories of “halting” and “turning around.”
Before presenting our solutions in Sections 3 and 4, however, we first define the specific problem of
interest in greater detail in Section 2. In Section 5, we provide additional motivation for solving these
problems, and, in Section 6, we show that multiple networks (each with a different networking technol-
ogy) can beneficially coexist by drawing an analogy to transportation networks. Finally, in Section 7,
we make some comparisons of our approach to other existing proposals, and, in Section 8, we present
our conclusions.
2 Problem Definition
As stated in Section 1, this paper addresses the problem of internetworking a CL network and a CO
network specifically for traffic generated by endpoints on the CL network. Since IP networks are cur-
rently the most dominant CL networks in use, we rephrase our problem formulation as follows:
How do you internetwork an IP network (operated in CL mode) with a CO network (operated in the
switched mode)?
The problem statement emphasizes that the IP network is operated in CL mode because, currently, new
protocols are being designed to allow for a “CO” mode of operation of an IP network (we treat this case
in Section 4). Also, we emphasize that the CO network is operated in a “switched” mode; for clarifica-
tion, we now consider the definition of this mode of operation and its alternatives.
If connection setup actions are performed for each arriving flow prior to
data exchange, then the CO network is said to be operated in “switched mode.”1 Otherwise, the CO net-
work is said to be operated in “provisioned mode.” For example, when SONET is used to create point-
to-point links between IP routers, then it is operating in a provisioned mode; the SONET connections
are not set up for specific data transfers, but rather to just transport IP packets between routers. In Sec-
tion 5, we discuss the advantages and disadvantages of the two modes (switched and provisioned), and,
in Section 7, we list the advantages and disadvantages of internetworking IP networks with CO networks.
Fig. 1 shows a simple diagram of the two basic internetwork configurations we consider in this
[Fig. 1 Two modes of internetworking CO and CL networks: A. Parallel; B. Interconnecting]
1. This definition allows for resources to be shared among multiple flows even in a switched mode of operation. More (shared)
resources are reserved as additional flows arrive and are admitted.
paper. Fig. 1A shows a switched CO network “in parallel” with a CL network, and Fig. 1B shows a
switched CO network between (“interconnecting”) two CL networks. The parallel configuration could
occur, for example, if two service providers, one with an IP-router-based network and the other with a
CO-switch-based network, interconnect their geographically dis-
tributed networks. The interconnecting configuration occurs, for instance, when an enterprise decides
to route all their traffic through a specific service provider who happens to use a CO network. More
general internetwork configurations are also possible (e.g., combinations of Fig. 1A and Fig. 1B, and
configurations with multiple CO and CL networks).
In both “parallel” and “interconnecting” configurations, the problem of how to contend with CL data
arriving at CL-CO gateways before a connection has been established through the CO network needs to
be solved. However, the issue of how to set up routing tables at the routers in the CL network to take
advantage of “shorter paths” that may exist through the CO network (or to meet users’ subscribed-to
service requirements) arises only for the “parallel” configuration (Fig. 1A) since communication paths
between the CL-CO gateways exist in both the CL and the CO networks. In Fig. 1B, there is no choice
but to direct the traffic to the CO network, which implies that the routing issue is answered more
readily. In Fig. 1A, it is also important to note that while a connection is being set up in the CO net-
work, communication can still take place through the CL networks. Furthermore, even after the connec-
tion is established in the CO network, data can be allowed to flow simultaneously through the CL and
CO networks.
Fig. 2 shows one CL network (the IP network) and three important examples of CO networks: ATM
(Asynchronous Transfer Mode), STM (Synchronous Time Multiplexed) and WDM (Wavelength
Division Multiplexed) networks2. In this paper, we assume all three CO networks can be operated in the
2. In this paper, we assume WDM networks are circuit-switched (and, hence, CO). Newer WDM packet-switching technologies
are currently under research.
switched mode. As a fourth example of a CO network (not pictured), traffic generated by applications
assuming the CL mode can be carried in a CO mode through the IP network itself, if we apply RSVP or
LDP (Label Distribution Protocol) [2] from router to router rather than from endpoint to endpoint. We
[Fig. 2 Four types of networks. ADM: Add-Drop Multiplexer; PDH: Plesiochronous Digital Hierarchy; SDH: Synchronous Digital Hierarchy; SONET: Synchronous Optical Networks; STM: Synchronous Transfer Mode; WDM: Wavelength Division Multiplexed]
discuss this special case in Section 4. It is a special case because every CO node also has CL capability;
for the three CO networks shown in Fig. 2, the CO nodes do not have collocated CL capability.
Assuming that all three CO networks shown in Fig. 2 are physically connected to the IP network
(i.e., as in Fig. 1A), the problem is to design methods of directing some traffic (from applications, for
example, that desire service requirements different from that offered in the CL network) from IP rout-
ers to any of the three switched CO networks and then back on to the IP network prior to reaching the
end destinations. We only consider endpoints physically connected to the IP network (see Fig. 2)
because, as stated in Section 1, we are not considering the problem of how to redirect traffic originated
by endpoints on the CO networks.
In Section 5, we explain in greater detail why we are interested in solving this problem (especially
why we propose the use of “switched” CO networks), and we make some additional observations about CL and CO
networks.
As defined in Section 1, CL-CO gateways (as shown in Fig. 3) are IP routers that are equipped to
decide when to redirect traffic on to a switched CO network. In addition, these CL-CO gateways are
also nodes of the CO network and therefore execute the routing and signaling protocols of the CO net-
work. For example, a CL-CO gateway between an IP network and a WDM network is both an IP router
[Fig. 3 Example internetwork: an IP network of routers R1-R7 and CL-CO gateways GW 1, GW 2, and GW 3, connected to a CO network of Switches 1-5]
In this section, we discuss actions performed in the CL-CO gateways that enable the internetworking
of connectionless IP networks and CO networks operated in a switched mode. These actions define our
solution:
1. Creating routing tables that enable data flow from the IP network to the CO network.
2. Designing user-plane actions to handle IP packets that arrive at CL-CO gateways to be carried
on (not-yet-established) connections in the CO network, plus IP packets that arrive at CL-CO
gateways but then remain in the CL network.
These two sets of actions are described in Sections 3.1 and 3.2, respectively. Both internetworking
configurations shown in Fig. 1 need the set of “user-plane” actions, while only the “parallel” configura-
tion of Fig. 1A needs the set of “routing” actions. In Fig. 1B (the “interconnecting” configuration),
there is only one option for routing packets between the CL islands, i.e., through the CO network.
Hence, no special routing actions are needed to create data flows to the CL-CO gateways.
There are two ways (described in Sections 3.1.1 and 3.1.2) to create the routing tables that will
direct data flows to the CL-CO gateways. User-specific routing tables,
maintained at the CL-CO gateways, can be operated in conjunction with either of the above schemes.
Consider the OSPF (Open Shortest Path First) routing protocol [3] commonly used in IP networks.
In the most common configurations, routers running OSPF are connected via point-to-point links or
broadcast networks, such as ethernet. However, OSPF also allows routers to be interconnected via a non-broadcast
network, such as an X.25 network or ATM network. Thus, OSPF allows routers to be interconnected
via a CO network without provisioned connections; provisioned connections are treated merely as
point-to-point links by OSPF. To handle non-broadcast networks, two modes of operation are recom-
mended in the OSPF specification [3]: (i) NBMA (Non-Broadcast Multi-Access) and (ii) point-to-multi-
point. In the NBMA mode, the non-broadcast network emulates a broadcast network and assigns
designated routers (and backup designated routers) to generate network state advertisements, while in
the point-to-multipoint mode, the non-broadcast network is advertised as a set of point-to-point links
between some of the routers on the non-broadcast network. Using either of these modes, the CO net-
work is represented as part of the IP network topology. Routing tables constructed in the IP routers
(including the CL-CO gateways) take into account the presence of the non-broadcast network, and
enable traffic to flow from the IP network to the CO network and back to the IP network if paths
through the CO network are shorter.
For purposes of this discussion, we assume that the CO network is operated in a point-to-multipoint
mode. Use of the NBMA mode is equally applicable. Also, while the description below assumes the
OSPF routing protocol, the concept is readily applicable to other IP routing protocols.
We use the network of Fig. 3 to illustrate the concept. CL-CO gateway nodes GW 1 , GW 2 , and
GW 3 generate Link State Advertisements (LSAs) that report point-to-point links between themselves
even when there are no connections between the CL-CO gateways. For example, GW 1 sends an OSPF
LSA to router R6 indicating that it has direct links to routers GW 2 and GW 3 . The links reported in the
LSAs appear in the OSPF topology database of all the routers in the same OSPF area and have associ-
ated link weights. In the Fig. 3 example, the network view of all the IP routers (including the CL-CO
gateways) could be as shown in Fig. 4. The weight shown next to each link is an operator-assigned link weight.
[Fig. 4 Network view at the IP routers: the Fig. 3 topology with operator-assigned link weights, including the advertised gateway-to-gateway links]
In this example, the shortest path from R4 to R7 is R4 → GW 3 → GW 2 → R7 , while the short-
est path from R 4 to R5 is R 4 → R3 → R 5 . Thus, based on the link weights, part of the end-to-end path
can be taken through the CO network, or the entire end-to-end path may be taken through the IP net-
work. For paths that take gateway-to-gateway links, connections need to be set up through the CO net-
work (from CL-CO gateway to CL-CO gateway) when data arrives at a CL-CO gateway.
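As an illustration of how such gateway-inclusive shortest paths are computed, the following sketch runs plain Dijkstra over a small topology in the spirit of Figs. 3 and 4. The node names and link weights here are illustrative assumptions, not the actual operator-assigned values of Fig. 4.

```python
import heapq

# Illustrative topology: a CL path R4-R3-R5-R7 and an advertised
# gateway-to-gateway link GW3-GW2 through the CO network.
WEIGHTS = {
    ("R4", "GW3"): 1, ("GW3", "GW2"): 1, ("GW2", "R7"): 1,
    ("R4", "R3"): 1, ("R3", "R5"): 1, ("R5", "R7"): 2,
}

def neighbors(node):
    for (a, b), w in WEIGHTS.items():
        if a == node:
            yield b, w
        elif b == node:
            yield a, w

def shortest_path(src, dst):
    """Plain Dijkstra: returns (cost, path) for the minimum-weight path."""
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in neighbors(node):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

# R4 -> R7 is cheaper through the gateway-to-gateway link...
print(shortest_path("R4", "R7"))   # (3, ['R4', 'GW3', 'GW2', 'R7'])
# ...while R4 -> R5 stays entirely on the CL network.
print(shortest_path("R4", "R5"))   # (2, ['R4', 'R3', 'R5'])
```

With these weights, only the R4-to-R7 traffic appears at a CL-CO gateway, matching the behavior described above.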
In this scheme, no announcements are made about the CO network by the CL-CO gateways. Packets
only appear at a CL-CO gateway if it is part of the shortest path according to the IP routing tables.
Besides the IP and CO routing tables, CL-CO gateways also maintain a routing table that integrates
information about the IP and CO networks. Each CL-CO gateway determines shortest paths to IP desti-
nations by comparing its path on the two networks for each destination. The shorter of the two paths is
maintained at the CL-CO gateway in an integrated IP-CO routing table for each destination address.
Given that the CL-CO gateway is an IP router, it needs to maintain a database of routes to IP addresses
that do not make use of the CO network, since these databases are synchronized between adjacent rout-
ers. However, the shortest path used is that indicated in the integrated IP-CO routing table.
For example, the OSPF topology database maintained at the IP routers in Fig. 4 would not know
about the three links between the CL-CO gateways. If R6 needs to send data to R 4 , it will send it
toward GW 1 based on its ordinary IP routing table. The integrated IP-CO
routing table in GW 1 indicates that a shorter path exists to R4 via GW 3 , and, consequently, the data
is sent to GW 3 through the CO network.
Notice that this integrated routing scheme requires that a CL-CO gateway be at least double-homed
into the IP network in order for it to receive any transit packet (not destined for itself) from the IP net-
work; otherwise, it could not be part of the shortest path according to the IP routing tables. Further-
more, since the availability of the CO network is unknown to the IP routers, the amount of traffic that
will be sent to the CL-CO gateways will probably be smaller than can be handled by the CO network.
As stated in Section 1, one motivation for building these CL-CO gateways is to allow users to
exploit the advantages of CO and CL networks without having to change their existing applications.
Thus, if users specify their desired service requirements at subscription time, the network provider can
set user-specific routing tables at the CL-CO gateways. Once data is directed to CL-CO gateways from
IP routers based on generic routing tables (using either of the schemes described in Sections 3.1.1 or
3.1.2), the user-specific routing then determines which users’ flows are sent to the CO network.
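A minimal sketch of this user-specific redirect decision, assuming a hypothetical subscription table keyed by source IP address (the table layout and addresses are illustrative, not the paper's data structures):

```python
# Hypothetical subscription table: users who asked, at subscription time,
# for their flows to be carried with CO-network service guarantees.
SUBSCRIBED_TO_CO = {"198.51.100.7"}   # source IP addresses (illustrative)

def choose_network(src_ip, co_capacity_free):
    """Generic routing already brought the packet to the CL-CO gateway;
    this user-specific check decides whether the flow is redirected to
    the CO network or continues through the IP network."""
    if src_ip in SUBSCRIBED_TO_CO and co_capacity_free:
        return "CO"   # set up (or reuse) a switched connection
    return "CL"       # keep forwarding through the IP network

print(choose_network("198.51.100.7", True))   # CO
print(choose_network("203.0.113.9", True))    # CL
```

A real gateway would key this table on full flow identifiers (addresses, ports, protocol) rather than source address alone.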
Having addressed in Section 3.1 the routing related actions that enable data flow from the IP net-
work to the CO network, in this section we focus on user-plane actions to handle IP packets that arrive
at CL-CO gateways. Connections are set up through the CO network for some, but not necessarily all,
of the arriving TCP and UDP flows. Thus, CL-CO gateways need to both handle traffic from flows for
which connections are set up, as well as continue forwarding packets through the CL network from
flows for which connections are not set up. The decision to set up connections is made at the CL-CO
gateways, based on the user-specified service requirements and the traffic situation in the CL and CO
networks.
If the routing scheme used is that of Section 3.1.2, neither type of traffic poses a serious problem
since the default path expected by the IP network provides a path from the CL-CO gateways through
the IP network to the destination. For instance, for the first type of traffic, packets can be forwarded on
the IP network while a connection is being set up. Later in this section, we discuss other solutions for
handling such traffic.
However, if the routing scheme of Section 3.1.1 is used, then both types of traffic - flows for which
a connection is set up and flows for which a connection is not set up - pose a problem. First, for traffic
that requires a connection, a situation unnatural to CO networks arises in that user data is presented to a
CO network without a request for connection setup. Typically, this does not happen. For example, we
do not start speaking into a phone before dialing. Similarly, a connection is first set up from a PC to an
IP router through the telephony network (for PCs with modem access to the Internet), and then an IP
application (one that generates IP traffic), such as a web browser, is started at the endpoint PC.
Second, if the routing scheme of Section 3.1.1 is used and a connection is not set up for a particular
traffic flow, then that traffic needs to be forwarded through the CL network. However, shortest paths as
computed by the IP routers (using the routing scheme of Section 3.1.1) may require the use of links
between CL-CO gateways for which no connections have been set up.
Before listing some solutions to handle these two different types of problems, we first identify four
types of IP datagrams that could arrive at a CL-CO gateway: (i) TCP SYN segments, (ii) TCP data seg-
ments that arrive at a CL-CO gateway without a preceding TCP SYN segment (e.g., because the IP
routing tables change after the TCP SYN was sent), (iii) UDP datagrams from applications that have an
initial end-to-end exchange (such as H.245 exchanges before multimedia data is sent in UDP data-
grams), and (iv) UDP datagrams from applications without such an exchange. We propose solutions
corresponding to each of these types of IP datagrams, plus an additional flow control scheme.
First, for flows for which a connection is set up through the CO network, we propose the following
mechanisms:
1. Intercept and hold a TCP SYN (Synchronize) segment at the CL-CO gateway while a connection
is being set up in the CO network (Section 3.2.1).
2. For TCP data segments that arrive at CL-CO gateways without a preceding TCP SYN segment
(e.g., due to routing table changes), use “TCP foolers” to indicate a zero receive-window size,
thus “halting” data flow until the connection is set up (Section 3.2.2).
3. Use application protocol message indications to trigger the setup of connections for UDP traffic
from applications that have an initial end-to-end exchange. For example, an H.245 open logical
channel message [4] is sent end-to-end prior to sending RTP (Real Time Protocol) data as UDP
datagrams if H.323 [5] is used in conjunction with RTP for multimedia communication (Section
3.2.3).
4. For UDP datagrams from applications without initial end-to-end exchanges, two schemes can be
used while setting up connections:
i. Turn back IP datagrams to the CL network using IP source routing to override routing
tables at the routers (Section 3.2.4).
ii. Use a set of small-bandwidth provisioned connections for such datagrams (Section
3.2.5).
5. Use flow control mechanisms to stop user data flow while a switched connection is being set up
in the CO network (Section 3.2.6).
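The five mechanisms above amount to a dispatch on the type of arriving datagram. The sketch below is illustrative only: the argument names stand in for fields a gateway would parse from packet headers and per-flow state, and the mapping of residual TCP traffic to mechanism 5 is one plausible choice.

```python
def pick_mechanism(proto, tcp_flags=None, seen_syn=False,
                   has_setup_exchange=False):
    """Map an arriving IP datagram to one of the five proposed
    mechanisms (illustrative classification, not a full parser)."""
    if proto == "TCP":
        if tcp_flags == "SYN":
            return "1: hold SYN while setting up the CO connection"
        if not seen_syn:
            return "2: TCP fooler (zero receive window)"
        return "5: flow control until the switched connection is up"
    if proto == "UDP":
        if has_setup_exchange:
            return "3: trigger setup on application message (e.g., H.245)"
        return "4: source-route back to CL, or use a provisioned pipe"
    raise ValueError("unhandled protocol: " + proto)

print(pick_mechanism("TCP", tcp_flags="SYN"))
print(pick_mechanism("UDP", has_setup_exchange=True))
```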
Second, to handle the traffic for which connections are not set up, we can use two of the mechanisms
listed above. Specifically, either (i) default routing tables in IP routers can be overridden to carry this
traffic through a route on the CL network using source routing, or (ii) connections can be provisioned
between pairs of CL-CO gateways that are declared to be connected by point-to-point links in OSPF or
another IP routing protocol.
Finally, to reduce layer overheads and thus improve bandwidth utilization, we propose using user-
plane protocol conversion instead of protocol encapsulation for TCP and UDP flows that are handled
with switched connections through the CO network. This scheme, described in Section 3.2.7, can be
used in conjunction with any of the above-listed schemes for flows handled through the CO network.
TCP uses a three-way handshake between endpoints to open a connection before sending data (see
Fig. 5). The SYN (synchronize) segment is sent to synchronize starting sequence numbers on both sides,
needed to achieve reliable transport.
[Fig. 5 TCP three-way handshake between endpoints: SYN, SYN ACK, ACK. SYN: Synchronize; ACK: Acknowledgment]
Our proposal is to have the CL-CO gateway detect a SYN segment
and trigger a connection setup procedure through the CO network, while, at the same time, holding up
the SYN segment. No user data packets will arrive at the CL-CO gateways for this flow since data
packets will not be transmitted until the three-way handshake shown in Fig. 5 is completed. The SYN
segment is typically a zero-payload TCP segment with 20 bytes for the TCP header and 20 bytes for the
IP header. The SYN flag is set in the “Flags” field of the TCP header.
Since routes are often asymmetric in the IP network, the SYN sent by the first endpoint may trigger
the setup of a unidirectional connection, and the SYN sent by the other endpoint could similarly lead to
the setup of a unidirectional connection in the opposite direction. Different CL-CO gateways could be
involved in the two directions of the connection. Time-outs involved in the three-way handshake shown
in Fig. 5 are typically long enough to allow for connection setup through the CO network.
Consider the networks shown in Fig. 3. If an endpoint connected to R4 opens a TCP connection to
an endpoint connected to R 7 , R4 will send the IP datagram carrying the TCP SYN segment to CL-CO
gateway GW 3 to route it to GW 2 and then on to R7 since this is the shortest path assuming the routing
scheme of Section 3.1.1 (see Fig. 4). Upon receiving the IP datagram, CL-CO gateway GW 3 will first
look for an existing connection to GW 2 . If there is none, it will check to see if the datagram is a TCP
SYN segment. If it is, GW 3 will initiate a connection setup procedure to GW 2 through the CO network
using its signaling protocol. The shortest path between these CL-CO gateways may, for example, be
through Switch 3 and Switch 1. Meanwhile, GW 3 holds the TCP SYN segment and does not forward
it. When connection setup confirmation is received at GW 3 , it will send the IP datagram carrying the
TCP SYN segment on the new connection to GW 2 that goes through Switch 3 and Switch 1.
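The hold-and-forward behavior in this example can be sketched as follows. The flow identifiers and returned strings are placeholders for the gateway's actual packet handling and CO signaling stack, which are not specified here.

```python
# Per-gateway state for the SYN-interception scheme (illustrative).
pending = {}          # flow id -> datagrams held while setup is in progress
connections = set()   # flow ids with an established CO connection

def on_datagram(flow_id, dgram, is_syn):
    """Handle an arriving IP datagram at the CL-CO gateway."""
    if flow_id in connections:
        return "forward on CO connection"
    if is_syn:
        pending.setdefault(flow_id, []).append(dgram)  # hold the SYN
        return "start CO connection setup"
    return "no connection and no SYN: fall back to other mechanisms"

def on_setup_confirmed(flow_id):
    """Connection setup confirmation received from the CO network."""
    connections.add(flow_id)
    held = pending.pop(flow_id, [])
    return ["send on new connection: " + d for d in held]

print(on_datagram("R4->R7", "SYN", True))      # start CO connection setup
print(on_setup_confirmed("R4->R7"))            # forward the held SYN
```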
Releases of connections in the CO network are initiated if there is no traffic for a given time period.
FIN (Finish) segments and RST (Reset) segments, which are TCP segments used for normal and abor-
tive releases, respectively, cannot be counted on to initiate connection release in the CO network
because routes may change in the IP network during the lifetime of the TCP flow. Furthermore, there
may be advantages to keeping connections in the CO network open longer than the lifetimes of TCP flows. For
example, in typical HTTP (Hypertext Transfer Protocol) implementations, a new TCP connection is
created for each transaction and terminated as soon as the transaction completes [6]. However, often
there will be several consecutive transactions from a client to a web server; hence, keeping the connec-
tion open even after the TCP flow has been terminated could simplify the setup actions required when a
new TCP flow between the same endpoints begins.
Upon receiving a TCP data segment without having seen a corresponding SYN segment, the CL-CO
gateway immediately generates an ACK (Acknowledgment) segment with the window size set to 0 (a
TCP “fooler” from the CL-CO gateway rather than from the destination endpoint) to cause the sending
endpoint to stop sending data. Once a new connection is set up in the CO network, a new ACK can be
sent by the CL-CO gateway with a non-zero window size. While the connection is being set up, if the
persist timer [7] happens to send a query to determine if the window size has been increased, the CL-
CO gateway intercepts the query and responds to it. If the persist query happens to get routed through a
different path and the destination endpoint answers indicating a non-zero window size, then TCP seg-
ments for the flow may again start to appear at the CL-CO gateway before connection setup is com-
plete. If this happens, the CL-CO gateway will simply send another TCP fooler (i.e., an ACK with
a zero window size).
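A sketch of constructing such a “fooler” segment, showing only the bare 20-byte TCP header with the ACK flag set and the advertised receive window forced to zero. The checksum is left at zero here; a real gateway must fill in the pseudo-header checksum, and the port and sequence values below are illustrative.

```python
import struct

def tcp_fooler_header(src_port, dst_port, seq, ack_no):
    """Build a 20-byte TCP header for the 'fooler' ACK: data offset 5
    words, ACK flag set, and a zero advertised window to halt the sender."""
    offset_flags = (5 << 12) | 0x010      # offset in high 4 bits, ACK flag
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack_no,
                       offset_flags,
                       0,                 # window = 0 stops the data flow
                       0, 0)              # checksum, urgent pointer

hdr = tcp_fooler_header(8080, 46000, 1000, 424242)
print(struct.unpack("!H", hdr[14:16])[0])   # 0 (the zero window)
```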
UDP is currently primarily used for support applications, such as DNS (Domain Name Service) and
SNMP (Simple Network Management Protocol), rather than any primary applications, such as web
access, telnet, file transfer, electronic mail, etc. However, for Internet telephony and other multimedia
traffic, RTP (Real Time Protocol) has been defined to use UDP. This is a primary application in that
significant user data can be expected to be generated. To handle this latter type of UDP datagram, we
propose that connections be set up when certain H.245 messages, such as open logical channel mes-
sages, arrive at CL-CO gateways. These H.245 messages, part of the H.323 protocol suite [5] that sup-
ports multimedia applications, are typically sent end-to-end to assign UDP port numbers for audio and
video traffic. These messages can be treated in a manner similar to TCP SYN segments as triggers for
connection setup through the CO network.
In this solution, IP datagrams are turned around and sent back to the IP network if the routing
scheme of Section 3.1.1 indicates that the shortest path is through the CO network, but the connection is
not yet established. Only the first few datagrams need to be sent back to the IP network (i.e., while the
connection is being set up). Subsequent datagrams are routed on the direct link between CL-CO gate-
ways once the connection has been set up. To avoid looping of datagrams that are turned around and
sent back to the IP network, we propose using source routing to force intermediate routers to use the
source route carried in the datagram header instead of the path indicated by their precomputed routing
tables.
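As an illustration of the turn-around mechanism, the sketch below (ours, not from the paper) builds the IPv4 loose-source-route option that a gateway would prepend to a returned datagram; the router addresses are hypothetical stand-ins for R4, R3, R5, and R7.

```python
import ipaddress
import struct

def build_lsrr_option(route):
    """Build an IPv4 Loose Source Route (LSRR) option per RFC 791:
    type = 131, length = 3 + 4n, pointer = 4 (first hop), then the
    4-byte hop addresses. (Strict source routing uses type 137.)"""
    addrs = b"".join(ipaddress.IPv4Address(a).packed for a in route)
    return struct.pack("!BBB", 131, 3 + len(addrs), 4) + addrs

# Hypothetical addresses standing in for routers R4, R3, R5, R7:
opt = build_lsrr_option(["10.0.0.4", "10.0.0.3", "10.0.0.5", "10.0.0.7"])
assert opt[0] == 131 and opt[1] == 19  # 3 option-header bytes + 4 addresses
```

The gateway would place this option in the header of each turned-around datagram, so that intermediate routers forward it hop by hop along the listed route.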
For the example given in Section 3.2.1, if CL-CO gateway GW3 detects one of the cases for invoking this scheme, it will add a source route R4 → R3 → R5 → R7 to the datagram and send it back on the interface to R4. This route is chosen since it is the shortest path between R4 and R7 that does not include GW3 (see Fig. 4). Routers R4, R3, and R5 will route the datagram according to the source route rather than their precomputed routing tables. The first few datagrams follow this route while GW3 initiates the setup of a connection through the CO network. Once established, the connection will carry subsequent datagrams directly through the CO network.
It is interesting to note that this approach of using source routing to override precomputed routing tables can be used even within the IP network whenever there is congestion. In other words, if a precomputed route becomes overly congested, a router can use this option of creating a source route on the fly to direct datagrams around the congested links.
While the source-routing scheme described in Section 3.2.4 works in theory, in practice the source routing capability is not enabled in many existing routers, even when the routers are equipped with the required software. In such cases, we need an alternate solution for handling UDP datagrams generated by multimedia applications.
For such cases, we propose provisioning small-capacity pipes through the CO network between
pairs of CL-CO gateways that are announced as having links by the IP routing protocol. Capacities of
these provisioned connections can be adjusted based on usage. In addition, it is important to recognize
that only the first few UDP datagrams need to use the provisioned connection. Once the switched con-
nection is established (perhaps using the same route as the provisioned connection), then it will carry
the remaining UDP datagrams through the CO network. Consequently, the capacity needed for provisioned connections can be kept small.
Recall that the basic problem we are addressing in Section 3.2 is how to deal with packets that show
up at a CL-CO gateway while a switched connection is being set up in the CO network to the far-end
CL-CO gateway. Naturally, one way to deal with this situation is to make use of simple “backpressure”
flow control signals, if possible, in the CL network. This way, packets will be “halted” from reaching
(and overflowing) the CL-CO gateway, until the connection is established. In the meantime, the packets
will be buffered at routers in the CL network, until the flow control signals permit the packets to reach
the CL-CO gateway again. Unfortunately, in some situations, this flow control may temporarily stop all
traffic on a link, not just packets of the new connection. It is preferable to use a "selective" flow control signal that halts (on the link into the CL-CO gateway, or back to the generating endpoint) only packets of the flow for which the connection is being set up.
One simple example of a backpressure flow control signal is the PAUSE function that is used to
implement flow control on full-duplex Ethernet links (including the recent gigabit Ethernet standard).
A router (such as a CL-CO gateway) that needs to temporarily inhibit incoming packets (e.g., while it
sets up a connection in the CO network) simply sends a PAUSE frame that indicates how long the full-
duplex partner should wait before sending more packets. The router or endpoint that receives the
PAUSE frame stops sending packets for the period specified. PAUSE periods can either be extended or
canceled (by the CL-CO gateway) by transmitting another PAUSE frame with the revised PAUSE
parameter.
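The PAUSE mechanism described above can be sketched as follows; this builds the IEEE 802.3x MAC Control PAUSE frame layout (the source MAC address used in the example is made up):

```python
import struct

PAUSE_DST = bytes.fromhex("0180c2000001")  # reserved MAC Control multicast address

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an IEEE 802.3x PAUSE frame: MAC Control EtherType 0x8808,
    opcode 0x0001, followed by the pause time in units of 512 bit times.
    A value of 0 cancels an earlier PAUSE. The frame is padded to the
    60-byte minimum (excluding the FCS)."""
    frame = PAUSE_DST + src_mac + struct.pack("!HHH", 0x8808, 0x0001, pause_quanta)
    return frame.ljust(60, b"\x00")

# e.g., a CL-CO gateway pausing its full-duplex partner for the maximum time:
frame = build_pause_frame(bytes.fromhex("02000000000a"), 0xFFFF)
assert len(frame) == 60 and frame[12:14] == b"\x88\x08"
```

To extend or cancel the pause, the gateway simply sends another frame with a revised quanta value, exactly as described above.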
Noting that IP headers are essentially overhead bytes needed to route packets through the IP net-
work, we consider “protocol conversion” schemes in the CL-CO gateways as a means to improve band-
width efficiency. Protocol conversion is possible when the CO network is operated in a switched mode,
and can be used in conjunction with the previously described schemes for flows handled through the
CO network. The alternative to "protocol conversion" is "protocol encapsulation," which is used when the CO network is operated in a provisioned mode.
By encapsulation, we mean that packets of one network are tunneled through the second network by
adding headers for the second network on to the packets from the first network. For example, if the CO
network in use is a provisioned ATM network, then an IP datagram (with its IP header) is encapsulated
into an AAL5 frame, which is then segmented into ATM cells for transport through the ATM network.
Recently it has been noted that up to 45% of IP traffic in the backbone consists of 40- or 44-byte datagrams [8]. When these datagrams are encapsulated into ATM cells, due to LLC/SNAP encapsulation [9] and the ATM cell and AAL5 overheads, each such datagram occupies two ATM cells. This leads to an overall bandwidth waste of 20% when IP datagrams are carried in ATM cells.
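The two-cell result can be checked with a short calculation, assuming an 8-byte LLC/SNAP header, an 8-byte AAL5 trailer, and padding to a whole number of 48-byte cell payloads:

```python
import math

CELL_PAYLOAD, CELL_HEADER = 48, 5   # bytes per ATM cell
LLC_SNAP, AAL5_TRAILER = 8, 8       # encapsulation overheads in bytes

def cells_for_datagram(ip_bytes: int) -> int:
    # LLC/SNAP header plus AAL5 trailer, padded out to whole cells
    return math.ceil((ip_bytes + LLC_SNAP + AAL5_TRAILER) / CELL_PAYLOAD)

for size in (40, 44):
    n = cells_for_datagram(size)
    wire = n * (CELL_PAYLOAD + CELL_HEADER)
    print(size, n, wire)   # both 40- and 44-byte datagrams need 2 cells (106 bytes)
```

A 40-byte datagram thus consumes 106 bytes on the wire (an efficiency of under 40% for that packet); the 20% figure above is the average over the whole backbone traffic mix.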
By conversion, we mean that the user payload is extracted from the IP datagram and carried directly on connections in the CO network. For example, if the CO network is an ATM network, then application data is carried in an AAL5
frame through the ATM network. Since AAL5 performs transport-layer functions, the TCP header can
also be converted to an AAL5 header in a switched (rather than provisioned) ATM network. In other
words, TCP/IP headers are converted into AAL5/ATM headers. By doing this, it appears that TCP con-
nections are terminated at the CL-CO gateways from each endpoint of the communication, and a con-
nection of the type supported by the CO network is set up between the CL-CO gateways.
Conversion is typically used when one endpoint of the communication is on one network and the other is on the second network (see, for example, Path 1 in Fig. 6). For example, if a user with an Internet telephony PC connected to the Internet via an Ethernet link is communicating with a user that has a telephone, then voice signals sent by the Internet user are extracted from the IP datagrams in the CL-CO gateway node (CL-CO gateway I in Fig. 6) and carried directly on a DS0 circuit in the STM CO network.
3. See the Appendix for further examples of protocol stacks that result from encapsulation.
Encapsulation is typically used when the two communicating endpoints are on the same network,
but part of the transport is over a second network (see, for example, Path 2 in Fig. 6). In this case,
encapsulation is used as an easy means of reconstructing the packet for transport on the first network at
the far end. For the example Path 2 shown in Fig. 6, CL-CO gateway III would easily extract the IP datagram from the AAL5 frame and send it on its way through the IP network to endpoint C.
Fig. 6. Endpoints A, B, and C on the IP network and endpoint D on the CO network, interconnected by CL-CO gateways I, II, and III; Path 1 crosses between the networks, while Path 2 transits the CO network between IP endpoints.
If provisioned connections are used through the CO network, then encapsulation appears to be the
only choice for paths such as Path 2 in Fig. 6. However, if switched connections are used through the
CO network, then TCP/IP header fields can be passed during connection setup to the far-end gateway,
and protocol conversion can be used. Since we focus on the use of switched connections through the
CO network in this paper, we recommend the use of protocol conversion to reduce overhead. Protocol
conversion can be applied to both “parallel” and “interconnecting” configurations of Fig. 1, all three
CO networks shown in Fig. 2, and in conjunction with all the routing schemes discussed in Section 3.1.
Currently, two solutions are being pursued in the IETF for adding a CO networking mode to IP rout-
ers: RSVP [1] and MPLS/LDP (MultiProtocol Label Switching/Label Distribution Protocol) [2]. In
both these approaches, those routers that are upgraded to support these CO modes continue supporting
the CL mode of operation. This property makes this switched CO network different from the CO net-
works shown in Fig. 2 since, in the latter case, CO switches do not have CL IP routing capability. In
this section, we apply our solution, described in Section 3, to this special case.
Consider both configurations shown in Fig. 7. Nodes in the CO/CL cloud are IP routers equipped
with RSVP or MPLS capability and are hence called CO/CL IP switches. The CL-CO IP gateways are
IP routers equipped with RSVP or MPLS capability that are connected to “CL-only” IP routers (in the
CL cloud) and have additional capability to recognize when to initiate the setup of a connection using
RSVP or LDP messages. Thus, Fig. 7 shows three types of nodes: (i) CL IP routers, (ii) CO/CL IP switches, and (iii) CL-CO IP gateways.
Some of the schemes presented in Section 3 can be used in the CL-CO IP gateways to trigger connection setup from router to router. First, consider an RSVP-based CO/CL IP network. The applicable schemes include:
1. The routing scheme of Section 3.1.3, where user-specific information triggers the setup of
connections.
2. The TCP-based scheme of Section 3.2.1 that initiates connection setup following the reception of a TCP SYN segment.
Fig. 7. Configurations in which endpoints and CL IP routers (in CL clouds) interconnect through a cloud of CO/CL IP switches via CL-CO IP gateways.
The source-routing and provisioned-connections schemes described in Sections 3.2.4 and 3.2.5 are not required.
Also, the routing schemes of Sections 3.1.1 and 3.1.2 are not required since all the nodes shown in Fig.
7 participate in the same IP routing protocol, thus automatically creating data flows to the CO network.
This solution allows for the use of RSVP from router to router rather than from endpoint to endpoint, which means that "RSVP islands" can be created without requiring all routers in the IP network to be upgraded at once. Clearly, the configuration of Fig. 7 can be repeated multiple times in an end-to-end flow that traverses many RSVP islands. This allows for a gradual introduction of RSVP routers.
We note here that while RSVP provides the means to operate the IP network in switched CO mode, its use of the IP protocol in the user plane is wasteful since, once connections are set up, packet headers no longer need to carry destination addresses. Lower protocol overhead is possible in a CO mode that carries short connection identifiers, such as labels, instead of full IP headers.
Instead of an RSVP-based CO/CL IP network, suppose the CO nodes of Fig. 7 are MPLS-based IP
routers. Then, we propose applying the same subset of schemes from Section 3 that are applicable for
RSVP. In the current MPLS specification, although LDP hooks have been provided for “downstream-
on-demand” label allocation, no mention is made of the procedures that trigger such requests. Using our
schemes of Sections 3.1.3, 3.2.1, 3.2.2, and 3.2.3, we propose that CL-CO IP gateways can trigger the
downstream-on-demand label allocation scheme. In addition, the protocol conversion scheme of Sec-
tion 3.2.7 can be applied in MPLS-based CO/CL IP networks. The current MPLS specification requires
the shim header (or in the case of ATM, the ATM header) to be carried between the IP header and the
layer-2 header. This implies the use of protocol encapsulation. We propose using conversion to reduce
the protocol overhead and improve bandwidth utilization. This is possible because we are proposing the
use of LDP in a switched mode, i.e., some connection setup actions are performed for each arriving
flow for which a connection setup is triggered. In addition, protocol conversion is possible in the MPLS solution (unlike the RSVP solution) because the user-plane protocol for the CO-handled flows (for example, ATM, with ATM switching hardware used for the label switches) is different from the user-plane protocol (IP) of the CL network.
In summary, our solution for internetworking a CL IP network with a switched CO network can be
used even when the latter is an island of MPLS or RSVP CO/CL IP switches.
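For reference, the 4-byte shim header that encapsulation adds to every packet (and that conversion would avoid) can be packed as follows, following the MPLS label-stack encoding:

```python
import struct

def mpls_shim(label: int, exp: int, s: int, ttl: int) -> bytes:
    """Pack one 4-byte MPLS shim entry: a 20-bit label, 3 experimental
    bits, a 1-bit bottom-of-stack flag, and an 8-bit TTL."""
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return struct.pack("!I", word)

shim = mpls_shim(label=0x12345, exp=0, s=1, ttl=64)
assert len(shim) == 4
```

With encapsulation, this shim sits between the layer-2 and IP headers of every packet; conversion removes this and the IP header for flows handled through the CO network.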
Why is it important to study this internetworking problem of a switched CO network with the CL IP
network? Since the IP network has access to the endpoints that are generating bursty traffic (e.g., data
and MPEG video), it is probably easiest to just keep this traffic on the IP network. Why then should we
bother trying to route some of the traffic already on the IP network on to a CO network?
As stated in Section 1, there are potential user and service-provider benefits in redirecting traffic to
the CO network. If users desire service guarantees for given flows (without changing the applications)
and can specify their requirements, this is probably better handled with a CO network that is capable of
reserving resources on-demand. From a service provider’s perspective, there may be an opportunity to
use bandwidth more efficiently with a switched CO network than with an existing CL network. Also, in a switched CO network, bandwidth can be dynamically allocated to flows on an as-needed basis.
Many different resource-sharing and switching techniques can be employed in CL networks and CO
networks. This results in a wide spectrum of networking options and blurs the distinctions between CL
and CO networks. Various levels of performance guarantees are possible, corresponding to various degrees of isolation between traffic flows. On the other hand, greater partitioning of network resources leads to less efficient use of those resources (e.g., bandwidth) when carrying bursty IP traffic.
In the subsections that follow, we first describe some characteristics of various CL and CO network-
ing modes (Section 5.1). Then, since the bandwidth utilization (of interest to the service provider)
depends to a large extent on the network's switching technique, we discuss some differences between
packet switching and circuit switching (Section 5.2). Finally, we discuss the implications of the various
networking modes when they are combined with different switching modes (Section 5.3).
To enable fast packet forwarding in CL networks, routes are typically preconfigured by precomput-
ing the routing tables in IP routers. The routes can be precomputed based on operator-assigned link
weights only (as is done in most IP networks today), or on a scheme that takes into account available
bandwidth [10]. In either case, there is some partitioning of resources that causes the bandwidth to
sometimes be underutilized on certain links even while other links are congested. Making more efficient use of all the network bandwidth would require a more complicated routing scheme that determines the route for each packet on the fly as it arrives at a router. Offering individual users service guarantees for specific flows is also not possible in such CL networks.
Further partitioning in a CL network is possible by not only precomputing routes, but also reserving
resources on a destination basis, which allows for providing service guarantees on an aggregate basis to
users for all their incoming traffic. This is done by reserving bandwidth and buffer resources for destination-rooted trees. While service guarantees can be provided to users for all their incoming traffic, this cannot be done for specific individual flows.
As in CL networks, there are multiple ways to allocate and share resources in CO networks. First,
there is a switched mode of operation, where route determination and resource allocation is done as
calls/flows arrive. Precomputed routing tables may be used in such networks as in CL networks, but
typically there are alternate paths precomputed so that if a call setup is not successful on the first
attempt (i.e., on the first choice of precomputed routes), the alternate paths are attempted.
Fig. 8. (A) Pairwise provisioned connections and (B) provisioned trees among nodes 1-7.
A second mode of CO network operation is the (pairwise) provisioned mode. Fig. 8A shows a CO network in which connections are provisioned (set up a priori, not in response to requests from specific data exchanges) for an anticipated traffic matrix. In Fig. 8A, the lines interconnecting nodes represent
provisioned connections. In contrast, the switched mode of operation, which permits connections to be
set up and released on demand, allows for greater sharing of resources and more adaptability to variations in the traffic matrix [11]. Furthermore, a provisioned mode of operation could be quite inefficient for carrying bursty IP traffic.
A third mode of operation of a CO network is with provisioned trees as shown in Fig. 8B. Here
resources are reserved a priori (i.e., provisioned) for an expected traffic demand in a tree configuration
instead of the pairwise configuration of Fig. 8A. In other words, the bandwidth reserved on the link
from node 6 to node 7 is determined by the total expected traffic from nodes 1, 2, 3, 4, and 5 toward
node 7. Since this bandwidth is shared, this provisioned tree configuration is more robust to traffic
changes than the configuration of Fig. 8A. However, even though provisioned trees provide more
resource sharing than provisioned pairwise paths, they are nevertheless set up a priori for some
expected traffic. Deviations from this expected load could lead to call blocking. A switched CO net-
work would have better bandwidth utilization since resources can be “switched” to wherever they are
required on-the-fly. A small part of this gain in bandwidth utilization is offset by the bandwidth needed for connection setup signaling.
Finally, we note that operating a CL network with precomputed routes and resource reservation on a destination basis is comparable to operating a CO network with provisioned trees.
We have just discussed the bandwidth benefits of switched connections compared with provisioned
connections in CO networks. We also mentioned that IP networks are typically wasteful of bandwidth
because of their precomputed routing tables. However, before concluding that it is more bandwidth-efficient to carry bursty IP traffic on a switched CO network, we must consider the difference between packet switching and circuit switching. The CL IP network is packet-switched, and of the three
CO networks shown in Fig. 2, only one, the ATM network, is a packet-switched network. The remaining two (the STM and WDM networks) are circuit-switched.
In circuit-switched networks, a "circuit" is set up with peak-rate resource allocation; no advantage can be taken of silences in the data transmission. In contrast, such silences can be exploited in packet-switched networks. For example, bandwidth allocation in an ATM network can be made on a VBR (Variable Bit Rate) or ABR (Available Bit Rate) basis [13] for bursty traffic (i.e., traffic that starts and stops, with idle periods, within one connection). By "VBR basis" we mean that the bursty traffic is somehow modeled a priori and an equivalent bandwidth is computed given a source model and an acceptable packet loss ratio. In ABR, a minimum bandwidth is allocated and then reactive feedback messages are sent to throttle back or increase the maximum data rate used by the source. Packet-switched networks are thus more bandwidth-efficient than circuit-switched networks due to this possibility of allocating bandwidth to a connection on a VBR/ABR basis for bursty traffic. The bandwidth utilization advantage of packet-switched networks over circuit-switched networks is slightly offset by the header-byte overhead needed in packet-switched networks.
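A minimal sketch (ours, not from the paper) of why sub-peak allocation works for bursty sources: with N independent on-off sources, capacity need only cover the number of simultaneously active sources up to a small overflow probability, rather than N peak rates as in circuit switching.

```python
from math import comb

def equivalent_sources(n: int, p: float, eps: float) -> int:
    """Smallest k such that Pr[more than k of n sources are active] <= eps,
    where each source is independently active with probability p
    (binomial tail). Capacity k*P then suffices instead of n*P."""
    cum = 0.0
    for k in range(n + 1):
        cum += comb(n, k) * p**k * (1 - p)**(n - k)
        if 1 - cum <= eps:
            return k
    return n

n, p = 100, 0.1          # illustrative numbers: 100 sources, 10% activity
k = equivalent_sources(n, p, 1e-3)
print(k, "of", n)        # far fewer than the 100 peak-rate circuits
```

Circuit switching would reserve 100 peak rates here; a packet-switched allocation covering the binomial tail reserves only about a fifth of that, at the cost of a bounded (here 0.1%) overflow probability.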
When comparing a circuit-switched CO network operated in switched mode with a packet-switched CL network, the "switched mode" aspect gives the circuit-switched CO network the edge on a call/flow basis, while the "packet-switched" aspect gives the packet-switched CL network the edge on a packet (within a call/flow) basis. But a packet-switched CO network operated in switched mode enjoys bandwidth utilization advantages both from its switching and networking modes! In Fig. 9, we classify various networks according to their networking and switching modes. The lower-right entry in the table of Fig. 9 is by far the worst in terms of bandwidth utilization. This class of networks allocates peak-rate bandwidth on a per-connection basis and uses provisioned connections for an expected traffic load.
Fig. 9. Classification of networks by networking mode (call level) and switching mode (packet level):

                     Connection-oriented        Connectionless (with      Connection-oriented
                     (in switched mode)         precomputed routes)       (in provisioned mode)
Packet-switched      ATM (Switched Virtual      IP (current)              ATM (Permanent Virtual
                     Circuits mode);                                      Circuits mode);
                     MPLS (data-driven) IP;                               MPLS (control-driven) IP
                     RSVP IP
Circuit-switched     Telephony                  --                        SONET (provisioned paths);
                                                                          WDM (provisioned lightpaths)
As stated in Section 5.1, operating a CL network with precomputed routes and resource reservation is comparable to a provisioned CO network, and a provisioned network is less resource efficient than a CL network with only precomputed routes (i.e.,
without resource reservation). This implies that operating a CO network, such as a SONET, WDM or
ATM network, in provisioned mode is less resource efficient than a typical network of CL IP routers.
Furthermore, when comparing circuit-switched CO networks, such as SONET and WDM networks,
with CL IP networks, the packet-switching aspect of IP gives the latter a further advantage. While this
disadvantage does not exist for CO ATM networks, nevertheless, an ATM PVC-based network is also
less resource-efficient than a network of IP routers due to the 20% overhead created by carrying TCP/
IP data in ATM cells [14], a result of protocol encapsulation. Reference [14] makes a similar observa-
tion, and hence proposes that IP-based “transport networks” be used in telecommunications networks.
We also observe that the advantage offered to users by CO networks whereby a user can obtain guar-
anteed service for specific flows cannot be offered by a CO network operated in provisioned mode.
Thus, from both the user and service provider perspectives, to take advantage of the properties of CO networks, they should be operated in a switched mode.
In the above discussion, note that we did not consider the cost of the various switching and transmis-
sion technologies in concluding that CO networks should be operated in a switched mode and that CL
IP networks are better than CO provisioned networks. Considering these costs, in certain scenarios the
(less-efficient) provisioned CO network (e.g., a circuit-switched WDM network) might turn out to be
“cheaper” than a CL IP network or a switched CO network. For example, if IP traffic becomes non-
bursty deep inside the network, then for a small backbone region, a provisioned WDM network in
which traffic is not demultiplexed in the time domain may be “cheaper” than the switched alternatives.
Finally, there are a couple of perceived drawbacks of the "switched" mode of operation: (i) connections to web servers are reported to be short-lived, carrying only about 10 kbytes/flow [8], and (ii) the number of flows per second to be processed is expected to be large. We respond to these concerns in the following
manner.
First, connections are not necessarily set up for every flow, as described in Section 3. For example,
setting up connections for just 20% of the (longest) flows will redirect 50% of the packets [8] to the CO network.
Second, certain users might be willing to tolerate the connection setup delay even for short-lived
flows in order to receive the performance guarantees provided by a CO network (rather than just taking their chances with best-effort service).
Third, connections are not completely released for every flow; as noted in Section 3.2.1, connection
releases are not triggered directly by TCP flow departures. If the connection is maintained through the
CO network even after the flow terminates and a new TCP flow arrives, only the association of TCP
port numbers and IP addresses with connection identifiers is required for the new flow. Other actions,
such as route determination, resource reservation, switch fabric programming, etc., are not repeated for the new flow.
Finally, regarding the call handling throughput issue, we note that newer CO switches of 20 Gb/s fabric size can process 1M calls/hour. Consequently, data exchanges must be approximately 70 Mb/call for the call processing capacity to be consistent with the switch fabric capacity. This is nearly 1000 times larger than the reported average amount of data exchanged per web flow (10 kbytes/flow) [8]. However, for
comparison, note that 64-kb/s telephony calls (assumed to have 3-minute holding times) generate 12
Mb/call. So, as the IP network carries increasing numbers of audio and video calls, the flow holding
times and amount of data exchanged per flow will increase significantly, making it quite reasonable to
assume that CO switches will have call handling capacities that are consistent with their user-data han-
dling capacities.
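The figures quoted above follow from a short calculation:

```python
fabric_bps = 20e9          # 20 Gb/s switch fabric
calls_per_hour = 1e6       # claimed call-processing capacity
bits_per_call = fabric_bps * 3600 / calls_per_hour
print(bits_per_call / 1e6, "Mb/call")   # 72.0 Mb/call, i.e., ~70 Mb/call

web_flow_bits = 10 * 1000 * 8           # 10 kbytes/flow, in bits
print(bits_per_call / web_flow_bits)    # 900.0, i.e., on the order of 1000x

phone_bits = 64e3 * 3 * 60              # 64 kb/s for a 3-minute call
print(phone_bits / 1e6, "Mb/call")      # 11.52 Mb/call, i.e., ~12 Mb
```

That is, for the fabric and call-processing capacities to balance, each call must carry roughly 70 Mb, which today's short web flows do not, but telephony-like multimedia flows approach.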
The four types of networks shown in Fig. 2 can beneficially coexist in a manner similar to the four
types of transportation networks: roadways, railroads, shipping, and airlines. In this section, we draw
several analogies between these transportation networks and the communication networks of Fig. 2.
First, we note that not all transportation networks have direct access to “customers.” Consider, for
example, “people.” Primarily, only the roadways network has direct access to “people.” In large cities,
people might walk directly into railroad stations, but with these and a few other exceptions, people sel-
dom walk directly into an shipping dock, railroad station, or airport. Most of the time people need to
use roads first before being transported by airplanes (or trains or ships), and then typically need to use
roads again to reach their final destinations. Given the differences in services offered by these four
transportation networks, all four networks are viable, and internetworking “gateways,” such as railroad
stations, shipping docks, and airport terminals on which roads terminate, allow people to move seamlessly between networks. Analogously, of the four networks shown in Fig. 2, primarily only the IP network and the STM network enjoy direct access to data-generating endpoints. But given the different types of services offered by the four networks, internetworking gateways should be implemented to allow data to move seamlessly between them.
Second, this analogy to transportation networks can also be extended to understand the differences
between protocol encapsulation and conversion, as discussed in Section 3.2.7. In the transportation
model, an example of a protocol conversion when the source and destination are on the same network
(roadways) occurs when we drop off one car at an origination airport, take a flight, and then pick up a
new car at the destination airport to reach our destination endpoint. An example of encapsulation is
when we take our cars with us on a ferry (ship). The cost of transporting cars is quite significant in the case of the airways network, which is the reason for resorting to "protocol conversion." Similarly, transporting TCP and IP headers should be avoided in some cases, such as in the switched ATM network.
Third, we can compare the roadways network to an IP network, and the remaining three transportation networks to the three CO networks, on the basis that reservations are made in the latter but not in the former. Congestion on roads leads to the queueing of cars in the same manner that IP datagrams get buffered with long delays when there is congestion in the IP network. In the other transportation networks, where reservations are made, such congestion-induced queueing is largely avoided.
7 Existing proposals
This paper proposed solutions for the problem of internetworking IP networks with CO networks.
The solution consists of operating CO networks in the switched mode, and using CL-CO gateways that
convert user-plane protocols of the two networks and interwork the routing schemes to create data flow
to/from the CO network. Interworking of addressing schemes and other aspects of internetworking are
described in [15].
Since IP traffic is bursty, the bandwidth utilization efficiency of a provisioned circuit-switched CO network will be low when compared to any packet-switched
network. In the case of packet-switched CO networks, such as ATM, the burstiness of IP traffic is handled better than with circuit-switched networks; however, since the protocol conversion scheme of Section 3.2.7 cannot be applied to provisioned networks, the 20% overhead [17] incurred by carrying IP traffic in ATM cells makes this less efficient than a CL IP network [14]. If an MPLS-based CO network is run in provisioned mode, it will also be less efficient than a CL IP network due to the protocol encapsulation overhead incurred by carrying both IP and shim headers. Thus, as stated in Section 5, operating a CO network in provisioned mode is less resource efficient than a network of CL IP routers.
A comparison of our solution with a generalized MPOA solution applicable to any CO network indi-
cates:
1. In the user-plane, MPOA proposes encapsulating protocols, whereas in our solution, we propose
protocol conversion (see Section 3.2.7 for the advantages).
2. The premise for internetworking in MPOA is flow detection: long-lived flows are directed to the
ATM network and short-lived flows are directed to the IP network. With the implementation of
hardware-based IP forwarding, all IP packets can be forwarded fast on the IP network (i.e., there
is no advantage in setting up “cut-through” connections through the ATM network). In our
proposal, internetworking is based on constructing routing tables. If the routing tables indicate
that the CO network provides a “shorter” path, this path is taken. The issue of a mismatch
between the user data packet arrival speeds and the connection setup speeds is not addressed in
MPOA.
3. MPOA proposes an explicit address resolution scheme in which IP addresses are translated into
ATM addresses (20-byte addresses). In our solution [15], we propose address encapsulation to
remove the inefficient step of resolving addresses and incurring a front-end delay.
8 Conclusions
We conclude that there are advantages, both from the user perspective and the service provider perspective, to carrying some of the data generated by Internet applications on CO (Connection-Oriented) networks. In our solution, the IP network continues to operate in its CL (Connectionless) mode, but special nodes called CL-CO gateways determine whether or not to redirect
traffic from the CL network to the CO network. CL-CO gateways are nodes that combine both IP router
and CO switch capabilities. We further conclude that to realize the advantages of CO networks, they
should be operated in the switched mode rather than in a provisioned mode. It is less bandwidth-efficient to operate a CO network in provisioned mode than to operate a CL IP network. Furthermore, users cannot be provided with service guarantees for specific data flows in a provisioned CO network. However, if the CO network is packet-switched and operated in the switched mode, it enjoys bandwidth utilization advantages from both its switching and its networking modes.
For these reasons, we studied the problem of internetworking a CL network (specifically IP)
with a CO network operated in a switched mode. Our solutions to this problem consist of interworking
routing schemes of the CL and CO networks to create data flows from the CL network to the CL-CO
gateways, and user-plane solutions for “halting” or “turning around” datagrams that appear at the CL-
CO gateway (from the CL network) while a connection is being set up through the CO network. This
combined networking solution allows both users and service providers to exploit both the CL and CO
networking modes.
We also addressed perceived drawbacks of carrying IP traffic on a switched CO network, i.e., short holding times of web traffic and the need for large call handling capacities, by noting that connection setup is not required for each flow, and that delayed connection releases further reduce the number of setup actions needed for incoming flows. Further, we note that current measurements indicating that web traffic is short-lived can be explained by the fact that most HTTP implementations establish a new TCP connection for each transaction, and no correlation data has been obtained on how many successive TCP flows are established between the same client and server. Also, multimedia traffic from applications such as Internet telephony can be expected to have telephony-like holding times.
9 Acknowledgment
We thank Fabio Neri, Politecnico di Torino, for his valuable comments and feedback on this work.
References
[1] R. Braden, L. Zhang, S. Berson, S. Herzog, S. Jamin, “Resource ReSerVation Protocol (RSVP) Version 1 Functional
Specification,” IETF RFC 2205, Sept. 1997.
[2] R. Callon et al., "A Framework for Multiprotocol Label Switching," Internet Draft, Nov. 1997.
[3] J. Moy, “OSPF Version 2,” IETF RFC 2178, July 1997.
[4] M. Nilsson, “Recommendation H.245 Version 2,” ITU-T SG 16 Contribution, Mar. 1997.
[5] G. A. Thom, “Draft Recommendation H.323V2,” ITU-T SG 16 Contribution, Jan. 21, 1998.
[6] W. Stallings, "Data and Computer Communications," 5th ed., Prentice Hall, 1997.
[7] W. R. Stevens, “TCP/IP Illustrated Volume 1, The Protocols,” Addison-Wesley Professional Computing Series, 1994.
[8] K. Thompson, G. J. Miller, and R. Wilder, “Wide Area Traffic Patterns and Characteristics,” IEEE Network Magazine,
Dec. 1997.
[9] J. Heinanen, “Multiprotocol Encapsulation over ATM Adaptation Layer 5,” RFC 1483, 1993.
[10] R. Guerin, S. Kamat, A. Orda, T. Przygienda, D. Williams, “QoS Routing Mechanisms and OSPF Extensions,” draft-
guerin-qos-routing-ospf-03.txt, Internet Draft, Jan. 30, 1998.
[11] M. Veeraraghavan, M. Kshirsagar, and G. L. Choudhury, “Concurrent Connection Setup Reducing Need for ATM VP
Provisioning,” Proc. IEEE Infocom’96, San Francisco, CA, pp. 303-311, Mar. 1996.
[12] E. Livermore, R. P. Skillen, M. Beshai, M. Wernik, “Architecture and Control of an Adaptive High-Capacity Flat
Network,” IEEE Communications Magazine, vol. 36, no. 5, pp. 106-112, May 1998.
[13] ATM Forum, “Traffic Management, Release 4.0,” af-tm-0056.000, Apr. 1996.
[14] A. Chapman and H. T. Kung, “Enhancing Transport Networks with Internet Protocols,” IEEE Communications
Magazine, vol. 36, no. 5, pp. 136-142, May 1998.
[15] M. Veeraraghavan, “Internetworking telephony, IP and ATM networks,” Proc. of the SBT/IEEE Telecommunications
Symposium, Sao Paulo, Brazil, pp. 251-256, Aug. 1998.
[16] ATM Forum, “Multiprotocol Over ATM (MPOA) version 1.0,” STR-MPOA-01.00, Apr. 1997.
[17] J. Manchester, J. Anderson, B. Doshi, S. Dravida, “IP Over SONET,” IEEE Communications Magazine, vol. 36, no.
tion is carried through a provisioned ATM network, with a link in the provisioned ATM connection
being realized as a SONET provisioned path, and with part of the SONET path being carried within an
optical lightpath, a protocol stack resulting from the use of encapsulation in this scenario will consist of
TCP, IP, AAL, ATM, SONET path, SONET line, SONET section, Optical channel, Optical multiplex
section, Optical transmission section [18], and optical physical layer. This stack essentially has the
transport, network, and data link layers repeated several times. AAL and ATM provide OSI transport-
and network-layer functionality within an ATM network. These layers are needed to carry the packets
through the ATM network. Similarly, the three SONET sub-layers correspond to the OSI transport,
network, and data link layers of a SONET network, and the WDM multiplex section and optical section
correspond to the OSI network layer and data link layer of a WDM network, respectively. Such
repeated layers lead to gross inefficiencies but are needed if protocol encapsulation is used. Operating
CO networks in provisioned mode necessitates the use of protocol encapsulation, while operating CO
networks in switched mode allows for protocol conversion, which is one of the schemes proposed in this
paper.
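The bandwidth cost of encapsulation can be illustrated with a rough byte count over just the ATM/AAL layers of the stack. The header sizes below are the standard ones (20-byte TCP and IP headers, 8-byte AAL5 trailer, 53-byte ATM cells carrying 48-byte payloads); the assumption that user-plane protocol conversion avoids carrying the IP header across the CO network, and the 500-byte payload, are illustrative choices for this sketch.

```python
# Rough sketch: bytes on the wire for one TCP segment crossing an ATM network,
# with encapsulation (IP header carried) vs. protocol conversion (IP header's
# role taken over by the connection, so it is not carried). Illustrative only.

def atm_cells(pdu_bytes):
    """Number of 53-byte ATM cells needed for an AAL5 PDU (48-byte payloads)."""
    return -(-pdu_bytes // 48)  # ceiling division

def wire_bytes_encapsulation(app_bytes):
    # TCP (20) and IP (20) headers retained, AAL5 trailer (8) appended,
    # then the PDU is segmented into 53-byte cells.
    pdu = app_bytes + 20 + 20 + 8
    return atm_cells(pdu) * 53

def wire_bytes_conversion(app_bytes):
    # Assumed conversion scheme: the IP header is dropped at the CO-network
    # ingress; only the TCP header (20) and AAL5 trailer (8) are carried.
    pdu = app_bytes + 20 + 8
    return atm_cells(pdu) * 53

print(wire_bytes_encapsulation(500))  # → 636 (12 cells)
print(wire_bytes_conversion(500))     # → 583 (11 cells)
```

Here conversion saves one cell per 500-byte segment; this counts only the ATM/AAL layers, and each further encapsulating layer in the stack (SONET path/line/section, optical sections) adds its own framing on top.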