• Mass storage
• Programs/Packages
• Data
• Processing power
• Printers, plotters etc.
• Improved reliability: distributed systems degrade gracefully, centralised systems tend to crash
abruptly.
• Price/Performance ratio: mainframe performance is around 20x that of a PC, cost is at least 100x.
5. Worldwide Internetwork
(Halsall p10, fig 1.6)
• Until relatively recently, different manufacturers' computer systems could not exchange
information (they were "closed systems").
• International standards define interfaces, information format and control of the exchange of
information.
• Any equipment adhering to such international standards can be used interchangeably with
equipment from other manufacturers adhering to the same standards.
• Standards may be
• "de jure" ("by law") e.g. ISO-OSI
• "de facto" ("from the fact") e.g. DOS, Windows, Unix, TCP/IP.
• Error free, timely delivery of information to the correct destination (network services)
• Presentation of received information to the end user ('application process') in a suitable
format that the end user can recognise and manipulate (end user services).
• Clearly a complete communication system will be a complex mix of hardware and software to
provide these functions.
• To deal with this complexity, it is vital that such systems are designed and implemented in a highly
structured fashion
• With layered architectures, the system is conceptually broken down into layers, where each layer
is defined by:
• In this simple 3-layer architecture, the services offered by the layers to the layer above might be:
• Each layer provides a specific set of functions using the services provided by the layer below.
• Both methods transmit additional (redundant) error checking information along with the data.
• Error correction requires substantially more redundant checking information than error detection.
Therefore, feedback error control is more widely used, although forward error control is usually
preferred for
• Simplex systems
• Broadcast systems
• Transmission links with large loop delay (e.g. satellite systems) where the amount of data
in transit is large.
• Appends an extra bit to data units, set to '0' or '1' depending on the number of 1's in the
data unit.
• e.g. if 11001101 is the data unit, a parity bit set to 1 is appended (even parity) since the
number of 1's in the data is odd.
• The receiver counts the number of 1's in the received block (data+parity bit) and if this is
even, it is assumed no errors have occurred.
• Simple parity checking can only detect odd numbers of errors, and if a long sequence of
data is corrupted (a burst error), the probability of error detection is only 1/2.
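The parity mechanism above can be sketched in a few lines of Python (the function names are illustrative, not from any particular library):

```python
def add_parity(data_bits: str) -> str:
    """Append an even-parity bit so the total number of 1's becomes even."""
    parity = data_bits.count("1") % 2  # 1 if the count of 1's is odd
    return data_bits + str(parity)

def check_parity(block: str) -> bool:
    """Receiver side: accept the block if its count of 1's is even."""
    return block.count("1") % 2 == 0

# The example from the notes: 11001101 has an odd number of 1's (five),
# so a parity bit of 1 is appended.
sent = add_parity("11001101")
assert sent == "110011011" and check_parity(sent)

# A single-bit error is detected, but an even number of errors is not:
assert not check_parity("010011011")  # one bit flipped - detected
assert check_parity("000011011")      # two bits flipped - undetected
```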
Error Detecting Codes
• Data is formed into a block, N bits wide by M bits high, and both longitudinal and
transverse parity bits are added.
(Halsall p129, fig 3.15)
• This allows detection of all burst errors of length up to and including N, although some
error patterns are undetected.
• Extra check bits are calculated and appended to data so that the extended frame (data +
CRC check bits) is exactly divisible by some predetermined binary number (called the
division sequence or generator polynomial)
• The receiver divides the received frame by the division sequence and if the remainder is
0, assumes no errors.
• Mathematical operations (division etc.) are carried out using modulo 2 arithmetic (no
carries or borrows)
1. Append (n-1) 0's to the right hand side of the data (n is the number of bits in the division
sequence).
2. Divide the resulting bit sequence by the division sequence to give the remainder. This
remainder is the check sum.
3. Append the check sum to the original data and transmit the resulting data block
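The three steps can be sketched in Python using modulo-2 (XOR) division; the 4-bit data and 3-bit division sequence below are illustrative toy values, not a standard CRC:

```python
def mod2_div(bits: str, divisor: str) -> str:
    """Modulo-2 division (XOR, no carries or borrows); returns the remainder."""
    work = list(bits)
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if work[i] == "1":  # divisor "goes into" the bits starting here
            for j in range(n):
                work[i + j] = "0" if work[i + j] == divisor[j] else "1"
    return "".join(work[-(n - 1):])

def make_frame(data: str, divisor: str) -> str:
    # Steps 1-3: append (n-1) 0's, divide, then append the remainder (checksum).
    checksum = mod2_div(data + "0" * (len(divisor) - 1), divisor)
    return data + checksum

def frame_ok(frame: str, divisor: str) -> bool:
    # Receiver: divide the whole frame; a zero remainder => assume no errors.
    return mod2_div(frame, divisor) == "0" * (len(divisor) - 1)

frame = make_frame("1101", "101")     # gives "110110"
assert frame_ok(frame, "101")
assert not frame_ok("100110", "101")  # corrupted frame: nonzero remainder
```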
Error Detecting Codes
• CRC checking is very powerful (i.e. it detects almost all errors) and is easily implemented in very
fast hardware.
x^16 + x^12 + x^5 + 1
• If n is the number of bits in the division sequence, then adding (n-1) 0's to the RHS is equivalent to
multiplying by x^(n-1), giving x^(n-1)D(x).
• Let the generator polynomial be G(x). After performing the division of x^(n-1)D(x) by G(x), we get a
remainder R(x) and a quotient Q(x) satisfying
x^(n-1)D(x) = Q(x)G(x) + R(x) (equation 1)
• Appending this remainder to the original data D(x) gives us the transmitted block, T(x), which is
equal to
T(x) = x^(n-1)D(x) + R(x)
which, from equation 1 is exactly divisible by G(x), and if no errors occur, the receiver calculates a
zero remainder.
• If errors occur, the received block is T′(x) = T(x) + E(x), where E(x) is the error pattern. Clearly,
for T′(x) to give a zero remainder when divided by G(x), E(x) must be exactly divisible by G(x)
• Put another way: Error patterns which are multiples of the generating polynomial will not be
detected; all other error patterns will.
• As mentioned earlier, the calculations needed to perform CRC are easily implemented in fast
hardware.
• Generally employ error detecting codes in conjunction with some form of retransmission
mechanism.
• The most commonly used mechanisms are called automatic repeat request (ARQ) protocols
• Idle ARQ
• Continuous ARQ
• Selective repeat
• Go-back-N
General mechanism:
• If Rx receives an I-frame without errors, it accepts the frame and sends back a short
acknowledgement (ACK-frame).
• On receipt of an error-free ACK frame, Tx sends the next I-frame, restarts the timer and waits.
• If the Tx timer expires before an error-free ACK frame is received, Tx resends the I-frame, restarts
the timer and waits.
Additional comments:
• The Tx timeout interval must be greater than the I-frame transmission time + (2 x end-end
propagation delay) + processing time at Rx.
• I-frames and ACK-frames must include a sequence number, to allow Rx to discriminate between
duplicate copies of I-frames.
• Optionally, Rx may send back a negative acknowledgement frame (NACK) when an erroneous I-
frame is received.
• If the link propagation delay is large compared to the I-frame transmission time, Idle ARQ has
poor link utilisation.
Idle ARQ (Stop-and-Wait)
Continuous ARQ
• Tx sends I-frames continuously without waiting for ACK-frames to be returned (although the
number of unacknowledged frames allowed to be outstanding is limited to a certain maximum,
called the "window size").
• When Tx receives an error free ACK-frame, it removes the corresponding I-frame from its
retransmission list.
Continuous ARQ
• If an I-frame or its corresponding ACK-frame are lost or damaged, Tx detects this either via a
timeout, or because ACK's are out of order.
Continuous ARQ
• Selective Repeat
(Halsall p191, fig. 4.12)
Continuous ARQ
• Go-back-N
(Halsall p196, fig. 4.14)
• Clearly the use of ARQ protocols entails some overhead which reduces the link utilisation.
U = Tf / Tt
where Tf is the time spent actually transmitting frames and Tt is the total elapsed time for the
exchange.
• Parameters which affect link utilisation for ARQ protocols include: frame transmission time, link
propagation delay, error rate, window size (continuous ARQ).
• In the following treatment, processing time at Tx and Rx are assumed to be negligible, and ACKs
are assumed to be very short.
Case 1: No Errors
Idle ARQ
The total time to successfully exchange a frame is equal to the sum of:
• the frame transmission time, Tf
• the propagation time of the I-frame from Tx to Rx
• the propagation time of the ACK-frame from Rx back to Tx
The last two are each equal to the link propagation delay Tp.
U = Tf / (Tf + 2Tp) = 1 / (1 + 2a)
where a = Tp/Tf
Link Utilisation of ARQ Protocols
Case 1: No Errors
Continuous ARQ
With no errors, utilisation for selective repeat and go-back-N are the same (there are no retransmissions).
If K ≥ 1+2a, the transmitter can send continuously without pause since ACK frames come back before the
window size is reached.
If K < 1+2a, the transmitter sends K frames and then has to wait until a time Tf + 2Tp from the start of
transmission until ACKs start returning. The utilisation is therefore given by
U = K·Tf / (Tf + 2Tp) = K / (1 + 2a)
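As a quick numeric check of the two no-error results (pure formula evaluation, no simulation; the values of a and K are illustrative):

```python
def idle_arq_util(a: float) -> float:
    # Idle ARQ, no errors: U = 1 / (1 + 2a), where a = Tp / Tf
    return 1 / (1 + 2 * a)

def continuous_arq_util(a: float, k: int) -> float:
    # Continuous ARQ, no errors: U = 1 if K >= 1 + 2a, else K / (1 + 2a)
    return 1.0 if k >= 1 + 2 * a else k / (1 + 2 * a)

# A large-a link (e.g. satellite) shows why Idle ARQ has poor utilisation:
a = 10
assert round(idle_arq_util(a), 3) == 0.048           # 1/21
assert round(continuous_arq_util(a, 7), 3) == 0.333  # window limits the sender
assert continuous_arq_util(a, 25) == 1.0             # K >= 1 + 2a: send continuously
```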
Idle ARQ
To calculate utilisation with a frame error rate of P, we need to estimate how many times (on average) a
frame must be transmitted to be received without error.
The probability that a frame must be transmitted i times before being successfully received is equal to
P^(i-1)·(1-P)
That is, we have (i-1) unsuccessful attempts followed by one successful attempt.
The average number of transmissions per frame is therefore
Σ (i=1 to ∞) i·Pr(i) = Σ (i=1 to ∞) i·P^(i-1)·(1-P)
It can be shown that this sum is simply equal to
1/(1-P)
so the utilisation of Idle ARQ with errors becomes U = (1-P)/(1+2a).
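The claimed value of the sum can be checked numerically (P = 0.2 is just an illustrative error rate):

```python
P = 0.2  # illustrative frame error rate
# Mean number of transmissions: sum of i * P^(i-1) * (1-P) for i = 1, 2, ...
mean_tx = sum(i * P ** (i - 1) * (1 - P) for i in range(1, 500))
assert abs(mean_tx - 1 / (1 - P)) < 1e-9  # 1/(1-P) = 1.25
```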
Since only erroneous frames are retransmitted, the utilisation is simply reduced by the average number of
times a frame needs to be sent for it to be received without errors.
From the earlier case of Idle ARQ with errors, this average number is equal to 1/(1-P).
U = 1 - P when K ≥ 1 + 2a
U = K(1-P)/(1+2a) when K < 1 + 2a
The situation is more complicated here since an erroneous frame entails the transmitter "going-back-N" and
retransmitting several frames.
Let f(i) be the total number of frames which must be retransmitted if the original frame must be transmitted i
times. If, for each erroneous transmission of the original frame, the transmitter has to "Go-back-N" then
f(i) = 1 + (i - 1)N
The average total number of frames, Nr, which must be transmitted for the successful exchange of a single
frame is then given by
Nr = Σ (i=1 to ∞) f(i)·P^(i-1)·(1-P)
This can be simplified to
Nr = (1 - P + NP) / (1 - P)
To complete the estimation of utilisation, we need to determine the value of N - i.e. exactly how far must the
transmitter go back when a frame error occurs?
The value of N depends on the relative values of the window size, K, and the normalised link propagation
delay, 1 + 2a.
If K ≥ 1 + 2a, then N = 1 + 2a; if K < 1 + 2a, then N = K.
After some manipulation, this gives us the utilisation of the Go-back-N protocol as
U = (1 - P) / (1 + 2aP) when K ≥ 1 + 2a
U = K(1 - P) / ((2a + 1)(1 - P + KP)) when K < 1 + 2a
(Note: This is a different, and better, derivation than given in Halsall's text - for more details see W.
Stallings, "Data and Computer Communications", 5th edition, 1997, pages 190-196).
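The two Go-back-N results can be evaluated directly; as a sanity check, with P = 0 they reduce to the no-error Case 1 expressions (the values of a and K used below are illustrative):

```python
def gbn_util(a: float, k: int, p: float) -> float:
    """Go-back-N link utilisation with frame error rate p (formulas as derived above)."""
    if k >= 1 + 2 * a:
        return (1 - p) / (1 + 2 * a * p)
    return k * (1 - p) / ((2 * a + 1) * (1 - p + k * p))

# With p = 0 these reduce to U = 1 and U = K/(1+2a) respectively:
assert gbn_util(2.0, 7, 0.0) == 1.0
assert abs(gbn_util(10.0, 7, 0.0) - 7 / 21) < 1e-12

# Errors are more costly on long (large-a) links, since N = 1 + 2a frames
# must be re-sent for each error:
assert gbn_util(10.0, 25, 0.01) < gbn_util(0.5, 25, 0.01)
```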
(Note: the above derivations use the identity Σ (i=1 to ∞) i·X^(i-1) = 1/(1-X)^2, for -1 < X < 1.)
Lecture 4: Protocol Specification and Verification
• To provide a basis for the generation of test cases to verify protocol implementations.
Protocol Specification
• Most methods for specifying a communication protocol are based on modelling the protocol as a
finite state machine or automaton - the protocol entity can only be in one of a finite number of
defined states at any instant.
• High level structured programs, or specification languages (e.g. Estelle, SDL, LOTOS).
• Petri Nets
Protocol Specification
• Outgoing events, usually generated as a result of an incoming event (e.g. send next I-
frame)
• Idle ARQ state transition diagram and extended event-state table - primary
(Halsall p181, fig. 4.6)
• Idle ARQ state transition diagram and extended event-state table - secondary
(Halsall p181, fig. 4.7)
• Specialised specification languages have been developed for defining state-driven systems such as
protocols.
• One such language is Estelle - Extended State Transition Language - which is an extended version
of Pascal which allows explicit representation of transition rules and actions.
• Widely used in studying all types of concurrent systems, a Petri net is made up of the following
four elements:
• Places which represent the state of part of the system (e.g. primary, secondary, channel)
• Transitions which represent events that change the state of the system
• Directed arcs which connect places to transitions and transitions to places
• Tokens, deposited in places, which mark the current state of the system
• The net is initially marked - tokens are deposited in certain places to indicate the initial
state of the system.
• Any enabled transition will fire at will, removing tokens from all input places and
depositing a token in each output place.
• If two or more transitions are enabled, any one of them may fire (selected at random) -
this enables the modelling of nondeterminism in the system.
Protocol Verification
• The most commonly used method is that of state exploration (reachability analysis):
• All system states reachable from the initial state are determined by systematically
exploring all transitions.
• Incompleteness (e.g. the specification may not say what is to happen when a particular
event occurs in a particular state.)
• Deadlock (e.g. a subset of states exists for which it is impossible to exit and continue).
• Definition of a LAN
• LAN Topologies
Definition of a LAN
• A computer network used to connect machines in a single building or localised group of buildings
LAN Topologies
• A LAN topology describes the physical layout of the network cabling and the way in which
connected nodes access the network.
• Choice of topology is affected by a number of factors including: economy; type of cable used; ease
of maintenance; reliability.
LAN Topologies
• Star Topology
All nodes are joined by point-to-point links to a central node, through which all data passes.
LAN Topologies
• Ring Topology
Nodes are connected together in a closed loop or ring. Data flow round the ring is usually one-way,
and nodes contain active repeaters.
(Halsall p274, fig. 6.2b)
LAN Topologies
• Bus Topology
A single network cable is routed to all nodes. Nodes "tap" onto the shared cable.
(Halsall p274, fig. 6.2c)
LAN Topologies
• Hub/Tree Topology
A combination of star/bus or star/ring. The hub is simply the bus or ring wiring collapsed into a
central unit, and does not perform switching.
(Halsall p274, fig. 6.2d)
• With bus and ring topologies (the most common), nodes are connected by a single transmission
channel
• Nodes must obey a discipline which determines the way in which access to the shared transmission
medium is controlled. This is the medium access control method.
1. If the channel is free, transmit a packet.
2. If the channel is busy, monitor the channel until it becomes free, then transmit a
packet.
• All nodes read all packets from the channel. Packets contain destination addresses and
error check bits. When a node reads a packet containing its own address and with no
errors, the packet is accepted.
• Token Passing
• Token Ring
• The token is passed around the ring from node to node. A node wishing to transmit data
waits until it reads the token.
• When the token arrives, the node removes it and then begins transmitting one or more
data frames.
• Data frames circulate round the ring. All nodes inspect the frame destination address. The
node for which the frame is intended makes a copy of the frame.
• Finally when the frame arrives back at the sending node, it is removed by that node. The
node then regenerates the token and passes it to the next station in the ring.
• Token Ring
(Halsall p282, fig. 6.6a)
• Token Bus
• Token passing similar in principle to token ring, except that the token must now contain
an address field (since the bus is a broadcast medium)
• Every node must keep a record of which node is next in the logical ring
• An Aside: In reality, token passing is more complex than described. It commonly implements
priorities and must have some method built into the protocol to deal with loss of token and the
adding/removal of stations to/from the network.
• Token Bus
(Halsall p282, fig. 6.6b)
• Introduction
• Cabling Options
• Original Ethernet developed by Xerox (Metcalfe and Boggs, 1976), then adopted and further
developed by Xerox, DEC and Intel. Finally extended and standardised by the IEEE as standard
802.3.
• main data rate is 10 Mbps (although other data rates are included in the standard)
• Cabling options:
• 10BASE5
• Station connects to thick coaxial cable via a transceiver cable and transceiver unit:
(Halsall p286, figs. 6.8a, 6.8b)
• 10BASE5
• 10BASE2
• Broadly similar to 10BASE5 but uses cheaper cable. Sometimes called "Thin Ethernet"
or "Cheapernet"
• 10BASET
• Twisted pairs run to a network hub (hence a physical star topology). One pair is used for
transmit, the other for receive
• A collision is detected when a node senses incoming data on the receive pair while it is
transmitting.
• 10BASEF
• Similar physical star topology as 10BASET but using dual optical fibre cable for longer
transmission distances
• A general point about commercial 802.3 network cards: most provide multiple connectors to
support the different types of transmission medium
• Frame format:
(Halsall p289 fig. 6.10a)
• Operational Parameters
(Halsall p289, fig. 6.10b)
• IEEE 802.3 networks use the truncated binary exponential backoff algorithm when there are
repeated collisions
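A sketch of the truncated binary exponential backoff rule (the slot time shown is for 10 Mbps 802.3; the truncation at 10 collisions and the 16-attempt limit follow the standard):

```python
import random

SLOT_TIME = 51.2e-6  # seconds: 512 bit times at 10 Mbps

def backoff_delay(n_collisions: int) -> float:
    """After the n-th successive collision, wait k slot times, with k drawn
    uniformly from 0 .. 2**min(n, 10) - 1; stations give up after 16 attempts."""
    if n_collisions > 16:
        raise RuntimeError("excessive collisions - transmission aborted")
    k = random.randrange(2 ** min(n_collisions, 10))
    return k * SLOT_TIME

# After 3 collisions the wait is between 0 and 7 slot times:
assert 0 <= backoff_delay(3) <= 7 * SLOT_TIME
```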
• A basic 802.3 hub repeats an incoming transmission to all outgoing links and clearly only
one transmission can be in progress at any one time
• By increasing the complexity of the hub electronics, the hub can operate in non-broadcast
mode:
• hub reads source addresses from packets and builds up a table of MAC
addresses and corresponding ports
• using this table, the hub can repeat packets only to the ports to which they are
addressed, and so several 10 Mbps paths can effectively be in use
simultaneously.
• To operate in this mode, the hub must be able to repeat several frames in parallel. This
can be done with the following arrangement:
(Halsall p356, fig. 7.2a)
• Heavily used paths (e.g. connections to file servers, connections between hubs) can use
higher data rates (in multiples of 10 Mbps):
(Halsall p356 fig. 7.2b)
Lecture 7: High Speed LANs and Bridged LANs
• Aim: obtain an order of magnitude increase in speed (100 Mbps compared to 10 Mbps) while
retaining the same wiring systems, MAC method and frame formats.
• Architecture
• A convergence sublayer provides the interface between the standard IEEE 802.3 MAC
sublayer and the underlying physical medium dependent sublayer:
(Halsall p358, fig. 7.3)
• 100BASE-T4
• Each node is connected to the hub by four twisted pairs: one pair for transmit, one pair
for receive, and two bidirectional pairs, so that data is carried on three pairs at a time
• Each twisted pair carries data at 33.3 Mbps (giving the composite data rate of 3 x 33.3 =
100 Mbps)
• The limited bandwidth of the twisted pair cable means that Manchester encoding (used in
10BASET) cannot be employed. Instead an encoding method called 8B6T is used:
• 8B6T takes 8 binary symbols (bits) and converts them into 6 ternary (3-level)
symbols. This reduces the baud rate on each cable to 25 Mbaud
• 6 ternary symbols give 729 (3^6) possible codewords, of which only 256 (2^8) are
needed to encode 8 bits of data. Ternary codewords are chosen to achieve DC
balance, and to ensure all codewords contain at least two signal transitions (for
synchronisation).
• 100BASE-X
• Employs a coding technique called 4B5B (same as FDDI) to ensure guaranteed signal
transitions at least every two bits for synchronisation
FDDI Networks
• Two types of station: dual-attached stations (connected to both rings) and single-attached
stations (connected only to one ring - the primary)
(Halsall p377 fig. 7.12)
FDDI Networks
• Transmission medium: multi-mode optical fibre, giving a maximum network length of 100km and
a maximum internode spacing of 2km (a copper version, CDDI, is available for shorter distance
working.)
• Employs a modified release after transmission token passing protocol called a timed token rotation
protocol
FDDI Networks
• A preset parameter - the target token rotation time (TTRT) - is defined (4ms - 165ms)
• For each rotation of the token, each station measures the time expired since it last
acquired the token. This is the token rotation time (TRT).
• On receiving the token, a station computes TTRT - TRT, called the token hold time
(THT). If the station has data to send, this is the maximum amount of time it is allowed to
transmit for before passing on the token.
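The THT computation is a one-liner; the millisecond values below are illustrative:

```python
def token_hold_time(ttrt_ms: float, trt_ms: float) -> float:
    """THT = TTRT - TRT: a slow token rotation leaves less time to transmit.
    If the token is late (TRT > TTRT), no asynchronous data may be sent."""
    return max(0.0, ttrt_ms - trt_ms)

assert token_hold_time(8.0, 5.0) == 3.0  # token came round quickly: 3 ms to send
assert token_hold_time(8.0, 9.0) == 0.0  # token late: pass it on immediately
```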
FDDI Networks
• FDDI provides an option for synchronous data - data that is delay sensitive and must be transferred
within a guaranteed maximum time interval.
• To deal with synchronous as well as asynchronous data, the timed token rotation protocol is
modified as follows:
• If capacity is available, the network management station will allocate the requested
amount of capacity.
• Every time a station receives the token, it may send its allocated amount of synchronous
traffic.
• Any remaining time available up to the token hold time (THT) may be used for
asynchronous transmission.
DQDB Networks
• Designed for both LANs and Metropolitan Area Networks (MANs) and standardised in IEEE
802.6
• Each bus has a head-end which generates a steady stream of 53 byte cells. Each cell travels
downstream from the head-end.
(Halsall p603, fig. 10.19a)
DQDB Networks
• Each cell contains a 44-byte payload field, together with control and address bits.
• DQDB, as its name implies, implements a distributed first-come-first-served queue medium access
protocol:
• Each cell contains two control bits - a busy bit and a request bit. Inspection of these bits
allows a station to determine:
• if a cell is in use
• if an "upstream" station is requesting to transmit
DQDB Networks
• A request for transmission on one bus is made by setting the request bit in a cell on the
other bus.
• For each bus, each station maintains a request counter (RC). When a cell passes through:
• If the busy bit = 0, decrement RC
• If the request bit = 1, increment RC
• Therefore, at any time, a station "knows" how many requests there are outstanding by
other stations for each bus
• When a station has data to send, it copies RC into a countdown counter (CD) and every
time a cell goes by with the busy bit = 0, it decrements CD. When CD reaches zero, the
station may transmit in the next free cell.
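The counter mechanism can be sketched as follows (a simplification assuming a single priority level and at most one queued segment; the class and method names are illustrative):

```python
class DQDBStation:
    """Distributed-queue counters for one bus (simplified sketch)."""

    def __init__(self):
        self.rc = 0      # request counter: outstanding downstream requests
        self.cd = None   # countdown counter: set when a segment is queued

    def queue_segment(self):
        # Join the distributed FIFO queue behind all outstanding requests.
        self.cd, self.rc = self.rc, 0

    def cell_passes(self, busy: bool, request: bool) -> bool:
        """Process one cell; request bits are read from the *other* bus.
        Returns True when this station may fill the current (empty) cell."""
        if request:
            self.rc += 1                       # a downstream station wants to send
        if not busy:
            if self.cd is None:
                self.rc = max(0, self.rc - 1)  # an empty cell satisfies one request
            elif self.cd == 0:
                self.cd = None                 # our turn: use this empty cell
                return True
            else:
                self.cd -= 1                   # let earlier requests go first
        return False

s = DQDBStation()
s.cell_passes(True, True)   # two requests seen before we have data to send
s.cell_passes(True, True)
s.queue_segment()           # CD = 2: two stations are queued ahead of us
assert not s.cell_passes(False, False)  # 1st empty cell: CD -> 1
assert not s.cell_passes(False, False)  # 2nd empty cell: CD -> 0
assert s.cell_passes(False, False)      # 3rd empty cell is ours
```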
• A bridge connects network segments locally or remotely so they appear to the user as a single
network.
• A bridge reads, error checks and buffers incoming data frames on the network segments it
connects. The frame destination is inspected and the frame is forwarded to the correct network
segment.
• Transparent bridges
• primarily used on Ethernet-style networks
• route calculation performed by the bridges
• We will look only at the operation of transparent bridges (details on source routing
bridges can be found in Halsall p409-417)
• A bridge maintains a routing table which stores, for each station, the outgoing
port to be used for frames addressed to that station
• When a frame arrives at the bridge (which operates in promiscuous mode), the
destination address is inspected and used to index into the routing table
• The frame is forwarded to the correct outgoing port (unless this is the same port
on which the frame arrived, in which case the frame is discarded)
• Bridge Learning
• The bridge "learns" where stations are by inspecting the source address in each
frame. Routing table entries are constructed using this information.
• When a frame is read by a bridge with no routing table entry for its destination
address, the frame is forwarded to all other outgoing ports of the bridge
(flooding)
• Removal and moving of stations is catered for by using an inactivity timer for
each entry in the routing table.
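The learning/forwarding behaviour above can be sketched as follows (the port numbers and ageing interval are illustrative values, not from any standard):

```python
import time

class LearningBridge:
    """Transparent-bridge learning and forwarding (simplified sketch)."""

    AGEING_TIME = 300.0  # seconds of inactivity before an entry is dropped

    def __init__(self, ports):
        self.ports = ports
        self.table = {}  # MAC address -> (port, last-seen timestamp)

    def frame_arrived(self, src, dst, in_port):
        # Learning: remember which port this source address was seen on.
        self.table[src] = (in_port, time.time())
        entry = self.table.get(dst)
        if entry and time.time() - entry[1] < self.AGEING_TIME:
            port, _ = entry
            # Discard if the destination is on the segment the frame came from.
            return [] if port == in_port else [port]
        # Unknown (or aged-out) destination: flood to all other ports.
        return [p for p in self.ports if p != in_port]

b = LearningBridge([1, 2, 3])
assert b.frame_arrived("A", "B", 1) == [2, 3]  # B unknown: flood
assert b.frame_arrived("B", "A", 2) == [1]     # A was learned on port 1
assert b.frame_arrived("C", "B", 2) == []      # B is on port 2: discard
```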
Lecture 8: Wireless Local Area Networks
• Wireless Media
• Radio
• Infrared
• WLAN Standards
• Wired LANs incur costs of cabling, and of changing the wiring plan if the installation changes
• Wired LANs do not naturally support the increasing proliferation of hand-held terminals and
portable computers
• Some applications of WLANs: factories, hospitals, historic buildings, emergency LAN backup
• Transmission impairments:
• This has the effect of widening the spectrum of the information-carrying signal (hence
"spread spectrum")
• The pseudorandom sequence is also known as the spreading sequence; each bit in the
sequence is known as a chip, the resulting transmission bit rate as the chipping rate, and
the number of bits in the sequence as the spreading factor.
• The pseudorandom binary sequence is usually generated using a feedback shift register:
(Halsall p327, fig.6.28a)
Transmission Schemes (Radio)
• To allow synchronisation at the data rate (as opposed to the chipping rate), data frames
are transmitted with a preamble (e.g. a sequence of 1's) and a start of frame delimiter:
(Halsall p326, fig.6.27)
• As the signal arrives at the receiver, the demodulated binary stream is fed into an
autocorrelation detector:
(Halsall p327, fig.6.28b,c)
• The allocated frequency band is divided into a number of lower frequency sub-bands
called channels.
• Transmitter uses each channel for a short period of time before "hopping" to a different
channel
• Carrier modulation - binary data is modulated onto a suitable frequency carrier using FSK or PSK
• Code Division Multiple Access (CDMA): different pairs of stations use different frequency
hopping sequences
• CSMA/CD (modified):
• the same collision detection methods as used on wired LANs cannot be used since, with
radio and infrared, transmission and reception at the same time is not possible
• CSMA/CD on wireless LANs uses a "comb" - a pseudorandom sequence appended to the
start of each frame (different stations use different, random combs):
(Halsall p336, fig.6.33)
• CSMA/CA (Collision Avoidance) - when the medium becomes quiet, a station with data to send
waits a random amount of time before transmitting
(Halsall p337, fig.6.34)
• TDMA - the portable access unit (PAU) establishes the slot/timing structure. Stations are offered
timeslots on demand.
• FDMA - the PAU determines different frequency channels and assigns these on demand.
• IEEE 802.11
1 and 2 Mbps using frequency hopping, direct sequence spread spectrum radio, and direct-
modulated infrared.
• ETSI HiperLAN
Radio
• Introduction
• WANs include:
• Public Data Networks (PDNs), which are operated and administered by national
telecomms authorities; international standards define the interfaces to these
networks
• Enterprise networks, operated by large organisations (who can justify the cost
by the amounts of traffic required to be conveyed); network links are leased
from telecomms authorities
Circuit Switching
• A dedicated connection is established exclusively for the use of two subscribers for the duration of
the connection
• Connection data rate is usually fixed, and end-end delays are small and fixed
• Usually involves connection setup and connection cleardown phases (although leased point-point
permanent circuit switched connections are available)
• Circuit switched networks do not usually offer any kind of error or flow control
Packet Switching
• With packet switching, DTEs break down the data to be conveyed into packets, which are
individually offered to the network
• Packets contain some form of destination address, and are individually routed through a network of
packet switching exchanges (PSEs) on a store-and-forward basis
Packet Switching
Packet Switching
• Each network link carries interleaved packets from different sources to different destinations
• When packets arrive simultaneously at a PSE for routing on the same outgoing link, the packets are
placed in a first-come-first-served queue or buffer
• Network congestion results in unpredictably long delays (PSE buffers become full)
• Packet switched networks will commonly provide error and flow control
• Datagram
• Virtual Circuit
Packet Switching
• Each packet contains the full destination address, used by PSEs to route packets
individually
• Since packets are routed independently, they can arrive at the destination out of order -
sequence numbering, buffering and reordering is required
Packet Switching
• A virtual circuit is established before data packets are sent; packets contain a virtual
circuit identifier and all follow the same route
• To establish a virtual circuit to a specific destination DTE, a source DTE sends a special
call request packet to its local PSE. Each call request packet contains
• As the call request packet is routed through the network to the destination, each PSE in
the path taken creates a similar routing table entry
Packet Switching
• When the call request packet arrives at the destination DTE, the latter responds with a call
accept packet, which is returned to the source DTE
• At this point the virtual circuit has been set up and a fixed route through the network has
been established (through the PSE routing table entries)
• Subsequent data packets contain the virtual circuit identifier (as opposed to the full
destination address) which is used by the PSEs to route packets
• If there are no errors, a virtual circuit PS network delivers packets in the correct sequence
Packet Switching
• Originally approved in 1976, subsequently revised in 1980, 1984, 1988, 1992, 1993; it is
considered old for many purposes (most X.25 networks run at 64 kbps) but you should be aware of
its existence.
• X.25 defines three layers (which correspond to the bottom 3 layers of the OSI model):
• physical layer
• link layer
• packet layer
• The frame layer provides the packet layer with a reliable (error free and no duplicates) packet
transport facility between DTE and local PSE. It is based on another protocol called HDLC
• The packet layer provides a virtual circuit packet transfer facility, and deals with such issues as
virtual circuit setup/cleardown, addressing, flow control and delivery confirmation.
• DTEs which do not "speak" X.25 (e.g. simple character mode terminals which do not have the
facility to generate packets) connect to X.25 networks via a packet assembler/disassembler (PAD)
Lecture 10: Wide Area Networks II
• Frame Relay
Integrates video, audio and data in addition to telephony over the same digital network with a
common interface.
• Digitised voice right up to the subscriber's premises (no analogue local loop).
• Very fast call setup times internationally (since purely digital).
• PABX style services such as call transfer, calling party ID, conferencing, 'camp-on' etc.
but internationally.
• Closed user groups, allowing an organisation to use the public network as its own local
PABX.
• Data services:
• High-speed (multiples of 64kbps) switched data services, either circuit or packet
switched.
• Videotex (remote database access; e.g. on-line directory assistance), Teletex (E-mail),
high-speed facsimile.
• Telemetry and alarm services.
• Network terminating equipment (NTE) connects the customer's premises to the local ISDN
exchange.
• 'R' access point used to connect devices using existing interface standards (such as X21,
V24) to an ISDN terminal adaptor;
• 'S' access point connects ISDN devices locally on the customer's premises;
• 'T' access point connects the customer's premises to the local ISDN exchange.
• ISDN 'bit pipes' provide multiple channels interleaved using time division multiplexing
• basic rate access: (2B + D). B channels are 64 kbps, D channels are 16 kbps, giving
composite bit-pipe user data rate of 144 kbps (actual bit rate is 192 kbps, including
synchronisation and framing)
• primary rate access (30B+D, Europe; 23B+D, USA and Japan) giving composite bit rates
of 2.048Mbps in Europe (which fits in nicely with CCITT PCM hierarchy) and
1.544Mbps in USA/Japan which fits in nicely with AT&T's T1 system.
Frame Relay
• Existing X.25 networks perform switching and multiplexing at the packet layer, even though
information arrives in frames.
i.e. frames need to be reassembled to form packets, which are then routed, and split up into frames
again for retransmission over the correct outgoing link.
• X.25 employs flow control and error correction (using retransmission protocols) at both the frame
level and the packet level.
• Clearly this is appropriate for a low quality network (such as an analogue PSTN for which X.25
was originally designed) but is extremely inefficient for a high-speed low error rate network.
Frame Relay
• Frame relay alleviates these problems by switching and multiplexing at the frame level (hence its
name).
• In addition, frame relay does not provide any error correction within the network (although it will
discard erroneous frames). Higher layer, end-to-end protocols are responsible for error correction.
• Frame relay operates over current networks giving users end-to-end data rates of typically 2Mbps.
• A typical use of frame relay is to connect geographically dispersed LANs in an enterprise WAN.
• to provide a single new network to replace the entire telephone system and all the
specialised data networks with a single integrated network for all kinds of information
transfer
• video on demand
• live TV from many sources
• full motion multimedia electronic mail
• CD quality music
• LAN interconnection
• very high speed data transport services
• The proposed access rates for B-ISDN are 155 Mbps and 622 Mbps
• Summary of ATM:
• uses a modification of the virtual circuit packet switching model; a virtual channel is set
up between two end users through the network and a variable-rate full duplex flow of
fixed-size cells is exchanged over the connection;
• use of small, fixed-size cells (53 bytes) allows faster switching and lower queueing delay
for high priority cells;
• To deal with traffic of very different characteristics and very different requirements, ATM offers a
number of service categories:
(Tanenbaum p459)
• In addition to different service categories, ATM also supports quality of service negotiation when
connections are established
• when a DTE requires a new virtual circuit, it must describe the traffic to be offered and
the service expected
• the network then checks to see if it can offer this connection without adversely affecting
existing connections
• if it can, the request is accepted (admitted) and the connection is set up; if it cannot, the
connection is rejected
• The ATM network also carries out policing: the usage of each established connection is
monitored. If this usage is greater than that negotiated during admission, excess cells
can be discarded.
Parameter            Meaning
Cell transfer delay  How long delivery takes (mean and maximum)
• Queue Parameters
• M/M/1 Queues
• Calculation of Mean Number of Customers and Mean Waiting Time for M/M/1 Queues
• Supermarket checkouts
• Aircraft takeoffs/landings
• Printer spoolers
• Statistical Multiplexers
• Packet Switches
Queue Parameters
• Number of servers
(Diagram)
M/M/1 Queues
• Arrival process = POISSON, with mean arrival rate λ; the probability of n arrivals in
time t is (λt)ⁿ e^(-λt) / n!
• Service time pdf = EXPONENTIAL, with mean service rate μ (the same form of equations
as for λ above applies)
• Number of servers = 1
• (An aside for computer networks. The mean arrival rate, λ, is the mean rate at which packets arrive
for transmission over a particular link. The service rate, μ, is equal to the data rate of the link
divided by the mean packet size.)
• Define the state of the queue at a given time as the number of customers in the system at that
time
• It can be shown that, for a queue in equilibrium, the probability of finding the system in a
given state does not change with time
• From this follows the Principle of Detailed Balancing which states that:
λPk = μPk+1
• Hence Pk+1 = (λ/μ)Pk = ρPk where ρ = λ/μ and is called the "traffic intensity"
• Therefore:
P1 = ρP0
P2 = ρP1 = ρ²P0
..........
Pn = ρⁿP0
• Since the probabilities must sum to 1, Σ Pn = 1 gives P0 = 1-ρ, and hence
Pn = ρⁿ(1-ρ)
• The mean number of customers in the system is
N = Σ_{n=0..∞} n Pn = Σ_{n=0..∞} n ρⁿ(1-ρ)
which is equal to
ρ/(1-ρ) = λ/(μ-λ)
• The mean waiting time is calculated using Little's result which states that
N = λT
where N is the average queue occupancy, and T is the mean waiting time
• From this, we end up with the simple result that the mean waiting time is equal to
T = N/λ = 1/(μ-λ)
• e.g. if μ = 1.0 customers/sec and λ = 0.5 customers/sec, the mean waiting time is 2 seconds
• Note that if λ ≥ μ the mean waiting time is infinite (in fact, the queue never reaches equilibrium
and the analysis given above does not hold)
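These results can be checked with a short script (the function name is illustrative):

```python
def mm1_stats(lam, mu):
    """Return (mean occupancy N, mean waiting time T) for an M/M/1 queue.

    lam: mean arrival rate (customers/sec)
    mu:  mean service rate (customers/sec)
    Requires lam < mu, otherwise the queue never reaches equilibrium.
    """
    if lam >= mu:
        raise ValueError("queue is unstable: lam must be < mu")
    rho = lam / mu                 # traffic intensity
    n_mean = rho / (1 - rho)       # mean number of customers, rho/(1-rho)
    t_mean = 1 / (mu - lam)        # mean waiting time, 1/(mu-lam)
    return n_mean, t_mean

# The example above: mu = 1.0, lam = 0.5 customers/sec
n, t = mm1_stats(0.5, 1.0)
print(n, t)   # N = 1.0 customers, T = 2.0 seconds
```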
• Two computers are connected by a 64 kbps line. There are eight parallel sessions using the line.
Each session generates Poisson traffic with a mean of two packets/sec. The packet lengths are
exponentially distributed with a mean of 2000 bits. The system designers must choose between
giving each session a dedicated 8 kbps piece of bandwidth (via TDM or FDM) or having all
packets compete for a single 64 kbps channel. Which alternative gives a better response time?
• Here λ = 16 (8 sessions, 2 packets/sec per session) and μ = 32 (64 kbps data rate with
mean frame size 2000 bits)
• This conclusion is very general - splitting up a single channel into k fixed pieces makes the
response time approximately k times worse. The reason is that it frequently happens that several of
the smaller channels are idle while others are overloaded. The lost bandwidth can never be regained.
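The comparison above can be worked through numerically with the M/M/1 waiting time formula:

```python
def mm1_delay(lam, mu):
    """Mean waiting time 1/(mu - lam) for an M/M/1 queue (requires lam < mu)."""
    assert lam < mu, "queue must be stable"
    return 1 / (mu - lam)

# Shared channel: all 8 sessions compete for the 64 kbps line
# lam = 8 * 2 = 16 packets/sec, mu = 64000 / 2000 = 32 packets/sec
t_shared = mm1_delay(16, 32)

# Dedicated channels: each session gets its own 8 kbps piece
# lam = 2 packets/sec, mu = 8000 / 2000 = 4 packets/sec
t_split = mm1_delay(2, 4)

print(t_shared, t_split, t_split / t_shared)   # 0.0625  0.5  8.0
```

The shared channel gives a 62.5 ms response time; the split channels give 500 ms, k = 8 times worse.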
Lectures 12/13: Principles of Network Routing
• Introduction
• Definition
• Desirable characteristics of routing algorithms
• Static versus adaptive routing
Introduction
• The network layer is concerned with getting packets from the source all the way to the destination.
Getting packets to the destination typically requires making many hops at intermediate routers
(switches) in a complex interconnected mesh of such routers.
• The routing algorithm is that part of the network layer software responsible for deciding which
outgoing line an incoming packet should be transmitted on.
• Datagram networks apply routing on a packet-by-packet basis; virtual circuit networks apply
routing at the virtual circuit set-up time (sometimes called session routing).
Introduction
• Correctness
• Simplicity
• Robustness
• Routing algorithm should cope with host/router/line failures and changes in
traffic and topology
• Stability
• For example, a routing technique which reacts quickly to changing conditions
(e.g. traffic) may exhibit unstable swings
• Fairness
• Different users (sessions) should be treated fairly (i.e. offered similar grades of
service)
• Optimality
• e.g. optimise some criterion such as mean packet delay, network throughput..
Introduction
Routing decisions are changed to reflect changes in topology and/or traffic. Adaptive
algorithms differ in
• where they get their information (e.g. locally, adjacent routers, all routers),
• when they change the routes (e.g. every Δt, when the load changes, when the
topology changes),
• and what metric is used for optimisation (e.g. distance, number of hops,
estimated transit time)
• Considers the network as a graph, where each node represents a router and each arc
represents a communication link. To choose a path between a given pair of routers, the
algorithm just finds the shortest path between them on the graph
• delay (each link is assigned a cost equal to measured or estimated queuing and
transmission delay)
Static Routing Methods
• Several algorithms for computing the shortest path between two nodes in a graph are
known. Perhaps the most widely used is that due to Dijkstra
• Dijkstra's algorithm works in a step-by-step fashion, building up the shortest path tree
from a source node until the furthermost node has been reached
Definitions: D(v) is the distance (i.e. the sum of link weights or costs along a given path)
from the source (node 1) to node v.
N is the set of nodes for which the shortest path has been calculated in
a particular step of the algorithm
There are two parts to the algorithm: an initialisation step and a step to be repeated until the algorithm
terminates:
1. Initialisation. Set N = {1}. For each node not in N, set D(v) = L(1,v). For nodes not connected to
node 1 set D(v) = ∞
2. At each subsequent step. Find a node w not in N for which D(w) is a minimum, and add w to N.
Then update D(v) for all remaining nodes not in N by computing D(v) = min[D(v), D(w) + L(w,v)].
The algorithm terminates when all nodes are in N.
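The two steps above can be sketched directly in code (the graph and its link costs are illustrative):

```python
import math

def dijkstra(links, source):
    """Shortest-path distances from `source`, following the two-step
    description above.  `links` maps (u, v) node pairs to link costs and
    is assumed symmetric (an undirected graph)."""
    nodes = {u for edge in links for u in edge}
    L = lambda u, v: links.get((u, v), math.inf)   # link cost, inf if no link
    N = {source}                                   # nodes with final shortest path
    D = {v: L(source, v) for v in nodes - N}       # initialisation step
    while nodes - N:
        w = min(nodes - N, key=lambda v: D[v])     # closest node not yet in N
        N.add(w)
        for v in nodes - N:                        # update step
            D[v] = min(D[v], D[w] + L(w, v))
    D[source] = 0
    return D

# A small example graph
links = {('A', 'B'): 2, ('B', 'A'): 2,
         ('B', 'C'): 3, ('C', 'B'): 3,
         ('A', 'C'): 7, ('C', 'A'): 7}
d = dijkstra(links, 'A')   # A->C goes via B: cost 2 + 3 = 5, not 7
print(d)
```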
• Flooding
• Every incoming packet is sent on every outgoing line except the one on which it arrived
• This guarantees that all packets will reach all destinations along the shortest path (but
unfortunately many other paths too!)
• Clearly to avoid an infinite number of duplicate packets, some form of damping must be
applied. Some methods are:
• Include a hop counter in each packet, which is decremented at each hop. The
packet is discarded when the hop counter reaches zero
• Include a sequence number in each packet and have each node record each
sequence number the first time the packet is routed. Duplicate copies of the
packet at later times are discarded
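The sequence-number damping method can be sketched as a small simulation (the network and function names are illustrative):

```python
from collections import deque

def flood(neighbours, source):
    """Simulate flooding with duplicate damping: each node records the first
    copy of the packet it routes and discards later duplicates.
    `neighbours` maps each node to the set of its directly connected nodes.
    Returns the set of nodes the packet reached."""
    delivered = {source}
    queue = deque((source, n) for n in neighbours[source])
    while queue:
        arrived_from, node = queue.popleft()
        if node in delivered:
            continue                        # duplicate copy: discard
        delivered.add(node)
        # forward on every outgoing line except the one it arrived on
        queue.extend((node, n) for n in neighbours[node] if n != arrived_from)
    return delivered

net = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B', 'D'}, 'D': {'C'}}
print(flood(net, 'A'))   # the packet reaches all four nodes
```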
• Flooding
• Clearly, even taking measures to control duplicate packets, flooding is not practical in
most applications. However, it is extremely robust (if any path exists between source and
destination, then flooding will find it). It therefore has some specialised uses e.g.
• Flooding of link state data packets for adaptive routing (see later)
• Is a static routing method which takes into account both network topology and the
expected mean data flow between each pair of nodes
• If the data flow between all nodes is known in advance and is, to a reasonable
approximation, constant in time, flow based routing can be used to analyse the flows
mathematically to optimise the routing
• Whereas shortest path algorithms pick a particular route for all traffic between a particular
source and destination pair, flow based routing allows this traffic to be shared over
several paths (it is often called multi-path or bifurcated routing)
• Flow based routing attempts to minimise the network-wide average packet delay, E(T),
given by:
E(T) = (1/γ) Σ_{i=1..M} λiTi = (1/γ) Σ_{i=1..M} λi/(μi - λi)                (15)
where M is the number of links, λi is the offered traffic on link i, μi is the service rate of
link i and γ is the total external offered traffic to the network, given by
γ = Σ_{i=1..N} Σ_{j=1..N} γij                (16)
the γij's can be visualised as entries in an N by N traffic matrix consisting of average
traffic arrival rates flowing between the different nodes (this must be known).
In terms of the link flows and capacities, this becomes
E(T) = (1/γ) Σ_{i=1..M} fi/(Ci - fi)                (17)
where Ci is the capacity of the i'th link and fi is the flow over that link
• Flow based routing attempts to minimise the value of E(T) given in equation (17) subject
to a number of flow constraints: the flow of traffic from session (source i, destination j)
into a particular node must be equal to the traffic leaving it (unless the node is i or j).
This difference in traffic flow is
Σ_{m=1..N} f_ml^(ij) - Σ_{n=1..N} f_ln^(ij)                (18)
which must equal -rij if l = i, +rij if l = j, or 0 otherwise. This gives us a set of N²(N-1)
equations for the flows in the network which must be satisfied.
Static Routing Methods
• The objective of the routing strategy is now to find the set of flows f_ml^(ij)
satisfying the conservation equations (18) and minimising the network wide delay (17) (this
is called a constrained optimisation problem)
• Various algorithms have been devised to solve this problem, most using some form of
iteration or "gradient descent" (the function to be optimised is convex and unimodal)
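Evaluating the network-wide mean delay for a given set of flows is straightforward; a minimal sketch (the link values and function name are illustrative):

```python
def network_mean_delay(links, gamma):
    """Network-wide mean packet delay: each link i contributes
    f_i/(C_i - f_i) to the sum, which is divided by the total external
    offered traffic gamma.  `links` is a list of (flow f_i, capacity C_i)
    pairs, both in packets/sec."""
    assert all(f < c for f, c in links), "every link needs f_i < C_i"
    return sum(f / (c - f) for f, c in links) / gamma

# An illustrative 3-link network carrying gamma = 20 packets/sec in total
links = [(10, 40), (15, 25), (5, 50)]
print(network_mean_delay(links, 20))   # roughly 0.097 sec
```

An optimisation routine would vary the per-link flows (subject to the conservation equations) to drive this value down.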
• The routing methods studied so far do not adapt to changes in network topology or load (they are
static)
• We will now look at two popular adaptive routing methods which vary routing decisions over time
according to measured/estimated network state. These are:
• Each router maintains a table (vector) giving the best known distances (using some metric
e.g. delay) to each destination, and which outgoing link to use to get there
• At any time, a router knows its "distance" to each neighbour (e.g. it can locally estimate
the queuing delay or directly measure delays by time stamping packets)
• Once every T milliseconds, each router sends its current estimated delays for each
destination to each of its neighbours.
• Imagine node A sends its estimated delays for all destination nodes {B, C, D....} to node
J. Node J can then compute a new set of estimated delays to all nodes going via node A
(these are equal to A's estimated delays plus the delay from J to A which J knows).
• Once J receives tables from all of its neighbours, it can update its table of estimated
distances and route its packets over the outgoing links which it estimates provide the
lowest overall distance to each destination
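One update round at router J can be sketched as follows (the node names and delay values are illustrative):

```python
def update_routes(own_delays, neighbour_tables):
    """One distance-vector update at a router.

    own_delays:       the router's measured delay to each neighbour,
                      e.g. {'A': 8, 'I': 10}
    neighbour_tables: each neighbour's advertised delays to every
                      destination, e.g. {'A': {'B': 12, 'C': 25}, ...}
    Returns {destination: (best_estimated_delay, first_hop_neighbour)}.
    """
    table = {}
    for nbr, advertised in neighbour_tables.items():
        for dest, delay in advertised.items():
            total = own_delays[nbr] + delay     # estimated delay via nbr
            if dest not in table or total < table[dest][0]:
                table[dest] = (total, nbr)
    return table

own = {'A': 8, 'I': 10}
tables = {'A': {'B': 12, 'C': 25},
          'I': {'B': 31, 'C': 6}}
print(update_routes(own, tables))   # B best via A (8+12), C best via I (10+6)
```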
• Unfortunately, the simple distance vector routing algorithm described suffers from a
potential drawback - although it reacts well to "good news" (e.g. a new router coming on-
line), it can react slowly to "bad news" (e.g. a router or a link failing)
(Tanenbaum p 357)
• Each router must:
1. Discover its neighbours and learn their network addresses.
2. Measure the delay or cost to each of its neighbours (i.e. over each outgoing
link).
3. Construct a link state packet containing what it has just learned.
4. Send this packet to all other destinations (i.e. not just neighbours).
5. Compute the shortest path to every other node.
• In effect, the complete topology and all delays are measured and distributed to every
router, which can then apply a shortest path algorithm (e.g. Dijkstra's) to find the best
routes to every other node
• When a router is booted, it sends a special "HELLO" packet over each outgoing
link
• Neighbours then send replies giving their (unique) network name or address
• Node can send a special time-stamped "ECHO" packet to its neighbour, which
the neighbour is required to return immediately
• Clearly, distribution of link state packets must be done reliably. For this reason,
a modified flooding method is often used:
• Once a router has accumulated a complete set of link state packets, it can
construct the complete network graph
• With this information, it can run a shortest path algorithm to determine the best
routes which are then used to update the routing table
• The ARPANET (ARPA = Advanced Research Projects Agency) was the testing ground
for a number of different adaptive routing algorithms.
• Designed in 1969
• Used distance-vector adaptive routing (each node maintained a table of estimated delays
to all other nodes, and exchanged these with neighbours every 2/3 seconds)
• The cost metric was the instantaneous (i.e. not averaged) queue length and did not take
into account link bandwidth and latency
• With relatively frequent measurements of instantaneous queue length, the method
produced pronounced instabilities and looping of packets
• Took both bandwidth and latency into consideration and actually measured packet delays
using time stamps:
• This provided a great improvement over the 1st generation method, although problems
still persisted at high loads. In particular:
• The range of link values was too high (too large a dynamic range) - some routes
could appear hundreds of times more attractive than others
• The cost metric was calculated using a function which included line type and
utilisation
(Diagram)
• Introduction
Introduction
• When the traffic offered to (part of) a packet network exceeds network capacity, congestion sets in
and performance degrades
• As queuing delays become large, transmitters repeatedly time-out and retransmit duplicate packets
=> even worse performance
(Tanenbaum p374, fig. 5-22)
Taxonomy of Congestion
Control Methods
• Router Centric
• Host Centric
Addresses congestion from the hosts on the edge of the network (e.g. transport layer)
Taxonomy of Congestion
Control Methods
Effectively attempts congestion avoidance by reserving an agreed amount of network capacity for
each session, which is adhered to (e.g. admission control in ATM networks)
• Explicit: packets are sent from the point of congestion to control the source
• Implicit: source deduces the existence of congestion by making local observations (e.g.
time for ACKs to return)
Taxonomy of Congestion
Control Methods
• Window Based
The transmitter may send packets without ACKs up to some maximum window size
(e.g. TCP)
• Rate Based
The transmitter is limited in the rate (maximum and mean) at which traffic is offered to the
network
(e.g. ATM)
• Traffic Shaping
• Traffic shaping forces hosts to regulate the burstiness of traffic offered to the network, so
packets are offered at a more predictable rate
• Often used in conjunction with service negotiation and policing (e.g. ATM networks)
• Traffic Shaping
• Although the flow into the bucket may be bursty, the output from the bucket is regulated
• Host is allowed to put one packet per clock tick into the network (if the application is
generating packets faster than this, they are buffered in the host)
• A "byte counting leaky bucket" can be used for variable length frames
• Traffic Shaping
e.g. consider a computer which can generate data at 25 Mbyte/sec (200 Mbps), connected
to a network which can handle 2 Mbyte/sec on average without congestion. Data comes
in 1 Mbyte bursts, one 40ms burst every second.
(Tanenbaum p382, fig. 5-25)
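The numbers in the example above can be worked through directly (a leaky bucket regulated to the 2 Mbyte/sec network rate):

```python
# Worked numbers for the example above
burst_size = 1_000_000        # bytes per burst (1 Mbyte)
input_rate = 25_000_000       # bytes/sec the host can generate (25 Mbyte/sec)
output_rate = 2_000_000       # bytes/sec the network can absorb (2 Mbyte/sec)

burst_duration = burst_size / input_rate   # the 40 ms input burst
drain_time = burst_size / output_rate      # time for the bucket to drain the burst
print(burst_duration, drain_time)          # 0.04 sec in, 0.5 sec out
```

The bucket stretches each 40 ms burst into a smooth 500 ms transmission at the rate the network can handle.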
Open Loop Congestion Control
• Traffic Shaping
• The leaky bucket algorithm is perhaps a little too rigid in its enforcement of a
fixed output rate
• The token bucket algorithm allows the output to speed up temporarily when
large bursts arrive
• The algorithm:
The leaky bucket contains tokens (NOT packets), generated by a clock at the
rate of one token every Δt seconds
An idle transmitter can save up tokens (up to the size of the token bucket),
allowing it to send large bursts later
• Traffic Shaping
• Traffic Shaping
If the bucket size is C (bytes), the token arrival rate is ρ, the maximum output rate is M
bytes/sec, and the time allowed for transmission at rate M is S, then
C + ρS = MS
and S = C/(M-ρ)
Output from token buckets with capacities C = 250 KB, 500 KB and 750 KB (M = 25 MB/s,
ρ = 2 MB/s).
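The burst times for these three bucket capacities follow directly from S = C/(M-ρ):

```python
def max_burst_time(C, rho, M):
    """Time S for which a token bucket of capacity C bytes, with token
    arrival rate rho bytes/sec, can sustain output at the peak rate M
    bytes/sec: C + rho*S = M*S  =>  S = C / (M - rho)."""
    return C / (M - rho)

M = 25e6      # peak output rate, 25 MB/s
rho = 2e6     # token arrival rate, 2 MB/s
for C in (250e3, 500e3, 750e3):
    print(C, max_burst_time(C, rho, M))
# 250 KB -> ~10.9 ms, 500 KB -> ~21.7 ms, 750 KB -> ~32.6 ms
```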
• Choke packets
• Each router monitors the utilisation of its outgoing lines (this is usually averaged with the
last utilisation to damp oscillations)
• Whenever the utilisation rises above some threshold, a warning flag is set for that link
• When a data packet arrives for routing over that link, the router extracts
the packet's source address and sends a "choke packet" back to the source. This choke
packet contains the destination address
• The original data packet is tagged so that it will not generate any more choke packets,
then forwarded
• When the source host gets the choke packet, it is required to reduce the traffic sent to the
particular destination by X%; it ignores other choke packets for the same destination for a
fixed time interval
• At high speeds and over large distances, sending a choke packet to the source host takes
too long
• An alternative approach is to have the choke packet take effect at every node it passes
through.
• This gives quicker response at the price of requiring more buffer space upstream
• TCP/IP uses host centric (source based), window-based congestion control with implicit feedback
• The congestion window is modified by the source according to the level of congestion it perceives
in the network
• When a connection is established, the sender sets the congestion window equal to the maximum
TCP segment size.
• This amount of data is then transmitted, and timers are started (see later). If the data is ACK'd
before the timers expire, the congestion window is doubled (this is called slow start).
• This continues until either the receiver's window is reached (when end-end flow control kicks in)
or a timeout occurs. When a timeout occurs, the congestion window is set back to the maximum
TCP segment size.
• Clearly the exponential growth in window size needs to be controlled and for this a third
parameter, the threshold is used (initially set to 64K).
• When a timeout occurs, the threshold is set to half of the current value of the congestion window.
• Slow start is then applied again (with exponential growth in congestion window size) until the
congestion window size reaches the threshold. From then on, the congestion window is grown
linearly (i.e. one maximum segment size for each burst)
• Example
(Tanenbaum p539, fig. 6-32)
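A simplified sketch of this window evolution (the function and its round-by-round model are illustrative; real TCP grows the window per ACK rather than per round):

```python
def congestion_window_trace(mss, receiver_window, threshold, rounds,
                            timeout_at=None):
    """Trace congestion window growth as described above: exponential
    (doubling) below the threshold, linear (+1 MSS) above it; a timeout
    halves the threshold and restarts slow start from one MSS."""
    cwnd, trace = mss, []
    for r in range(rounds):
        trace.append(cwnd)
        if timeout_at is not None and r == timeout_at:
            threshold = cwnd // 2                    # halve the threshold
            cwnd = mss                               # back to one segment
        elif cwnd < threshold:
            cwnd = min(cwnd * 2, receiver_window)    # slow start: double
        else:
            cwnd = min(cwnd + mss, receiver_window)  # linear growth
    return trace

# 1 KB segments, 64 KB threshold and receiver window, timeout in round 7
trace = congestion_window_trace(1024, 65536, 65536, 12, timeout_at=7)
print(trace)   # doubles to 64K, times out, restarts from 1K
```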
• We have assumed the existence of a timeout interval. How long should this interval be?
• The solution is to use a highly dynamic algorithm which constantly adjusts the timeout
interval based on continuous measurements of network performance.
• For each connection, TCP keeps a variable, RTT, that is the current best estimate of the
round-trip time to the destination in question. It computes this estimate using measured
times for ACKs to arrive (using smoothing with previous estimates)
• In addition to computing RTT, TCP also measures the deviation (D) of RTT (RTT is a
statistical quantity). Most TCP implementations then calculate the timeout interval as
equal to RTT + 4D.
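A sketch of this estimation, in the style of Jacobson's algorithm (the smoothing constant and update order are illustrative; implementations vary):

```python
ALPHA = 7 / 8   # weight given to the previous estimate (a typical value)

def update_rto(rtt, dev, sample):
    """One update of the smoothed RTT estimate, its mean deviation D, and
    the resulting timeout interval RTT + 4D, from one measured ACK time."""
    rtt = ALPHA * rtt + (1 - ALPHA) * sample          # smooth the estimate
    dev = ALPHA * dev + (1 - ALPHA) * abs(sample - rtt)
    return rtt, dev, rtt + 4 * dev

rtt, dev = 100.0, 10.0          # current estimates, in ms
for sample in (110, 95, 180):   # measured round-trip times of ACKs
    rtt, dev, rto = update_rto(rtt, dev, sample)
print(round(rtt, 1), round(rto, 1))
```

The late 180 ms sample inflates the deviation term, so the timeout interval grows much faster than the smoothed RTT itself.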
Lecture 15: Internetworking with IP
(the Internet Protocol)
• Introduction
• IP Address Structure
• IP Datagram Structure
• IP Routing
• the Address Resolution Protocol
• Interior Gateway Protocols
• Exterior Gateway Protocol
• IPv6
Introduction
• Internetworking: the interconnection of several separate networks, which typically run different
protocols
Introduction
• The sheer scale of the global Internet (doubling every year in size). How are routing and
addressing carried out efficiently in networks containing billions of hosts?
Introduction
IP Address Structure
• Each host and router on the Internet has its own unique 32 bit IP address (IPv4)
• A node's IP address is distinct from its physical address (e.g. Ethernet address)
• a network ID
• a host ID
• Given 32 bits of address, how many bits should be allocated for the network ID and how many for
the host ID? There is a tradeoff in the number of networks that can be encoded and the maximum
number of hosts that can be connected to a network.
IP Address Structure
• IP Address Classes
(Tanenbaum p416, fig. 5-47)
• This allows 126 class A nets each with 16 million hosts, 16,382 class B nets with 64K hosts, 2
million class C nets with up to 254 hosts
IP Address Structure
• IP addresses are usually written in dotted decimal notation : each of the 4 bytes is written in
decimal 0 to 255 e.g.
(Halsall p497)
• IP addresses are assigned by Internet Service Providers, coordinating with the central Internet
Assigned Numbers Authority (IANA)
• An IP address with a host ID of 0 indicates a network rather than a host; an IP address with a host
ID of all 1's indicates broadcast to all hosts on the particular network
IP Address Structure
• For large sites (e.g. campus networks) with several subnetworks, a further level of addressing is
often used called subnetting
• the IP net ID relates to the complete site rather than a single network
• the host ID is locally (i.e. within the site) viewed as two subfields: a subnet ID and a host
ID (the outside Internet knows nothing of this)
• local site routers mask off the host ID field to find the subnet id, which is then used for
routing packets to the correct subnet
• a router on subnet k knows how to get to all other local subnets, and also how to get to all
hosts on its own subnet - it does not need to know all the details of hosts on other subnets
(resulting in simpler routing tables).
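The masking step can be sketched with Python's standard ipaddress module, assuming a hypothetical class B site that uses 8 bits of the host field as a subnet ID:

```python
import ipaddress

# Hypothetical class B site 130.233.0.0, with the 16-bit host field split
# locally into an 8-bit subnet ID and an 8-bit host ID, so the local
# subnet mask is 255.255.255.0
addr = int(ipaddress.ip_address('130.233.42.7'))
mask = int(ipaddress.ip_address('255.255.255.0'))

subnet_id = (addr & mask) >> 8 & 0xFF   # router masks off the host ID: 42
host_id = addr & ~mask & 0xFF           # the 8 host bits: 7
print(subnet_id, host_id)               # 42 7
```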
IP Datagram Structure
• IP Packet Header
Tanenbaum p413, fig. 5-45
IP Datagram Structure
• Type of service: a quality of service parameter (e.g. fast delivery or accurate delivery). Many
routers ignore this field.
• Total length: total datagram length (header plus data). Maximum 64 KBytes
• Fragment offset: where in the current message the fragment belongs (used for message reassembly)
• Time to live: a counter used to limit packet lifetimes and prevent looping
• Protocol: which transport process the datagram belongs to (e.g. TCP, UDP)
• Header checksum: verifies the header only. Useful for detecting errors caused by bad memory in
routers.
IP Datagram Structure
IP Routing
• To deal with the vast number of hosts on the Internet, routing is hierarchical
• To reflect the fact that the Internet is made up of a number of separately managed and run internets,
each internet is treated as an autonomous system with its own routing algorithms and management
authority
• the combined Internet is considered as a core backbone network to which a number of autonomous
systems are attached
(Halsall p506, fig. 9.12)
IP Routing
• Interior gateways are used within an autonomous system (running an interior gateway
protocol)
• Exterior gateways are used to connect autonomous systems to the core network (running
an exterior gateway protocol)
IP Routing
IP Routing
• Hosts and routers in individual (sub) networks must know the physical addresses (e.g.
Ethernet addresses) corresponding to all local IP host ID addresses
• If the physical address for a destination is not in the local ARP table, the ARP
software generates an ARP request packet containing its own IP and physical addresses
together with the required (target) IP address. This is either broadcast or sent to
a router for forwarding
IP Routing
• the ARP software in the required destination recognises its own IP address and
sends an ARP reply message back to the requesting host. It will also update its
own routing table
• IP/Physical addresses are often held on a host's local permanent storage and read by the
operating system on startup
• With diskless hosts, this information is stored on the host's server and is acquired by the
hosts using the Reverse Address Resolution Protocol (RARP):
• On startup, the diskless host sends an RARP request to the server, containing its
own physical address
IP Routing
• the original Internet Interior Gateway Protocol distributed with Berkeley (BSD)
Unix
• Works well for small autonomous systems, less well for large. Suffers from
count to infinity problem and slow convergence
• Replaced in the Internet in 1979 with a link state routing protocol, but still
widely used.
IP Routing
• Allows load balancing - splitting the load to a destination over multiple routes
• Exterior gateway routing has different requirements to interior gateway routing. The latter
moves packets as efficiently as possible within a given autonomous system. Exterior
gateway routing involves political, security and economic considerations
• Example routing constraints for an exterior gateway protocol might include such things as
(Tanenbaum p429):
IP Routing
• BGP uses a modified version of distance-vector routing - not only are distances to
destination ASes advertised, but also the actual routes
(Tanenbaum p430, fig. 5-55)
• In this example, F examines the routes (and distances) to determine the route with the
shortest distance which does not violate any policy constraints
• Note that since BGP routers exchange routes as well as distances, the count to infinity
problem does not occur
• Error reporting
• Reachability testing
• Congestion control
• Route-change notifications
• Performance measuring
• Has been developed to provide the next generation Internet Protocol. Improvements over IPv4
include:
• Removal of the checksum and simplified header for faster packet processing
• Quality of Service
• Congestion Control
To provide a single new network to replace the entire telephone system and all the specialised data
networks with a single integrated network for all kinds of information transfer
(Halsall p559, fig. 10.1)
• ATM uses fixed size packets called cells, containing a 5-octet header and a 48 byte payload (53
bytes in all)
• Use of small cells, while somewhat inefficient in terms of header overhead has the advantage of
reducing queuing delay of high priority cells
• Use of fixed size cells means that cells can be switched more easily in fast hardware
• ATM is intended to convey all kinds of traffic, including telephony. A small cell size reduces the
experienced speech delay
• the UNI (User-Network Interface) which defines the boundary between a host and an
ATM network
• the NNI (Network-Network Interface) which applies to the line between ATM switches
• In both cases, cells consist of a 5-byte header and a 48-byte payload, although the headers are
slightly different:
(Halsall p577, fig. 10.7)
• Generic Flow Control: Originally conceived as being used for flow control or
prioritisation between hosts and networks, but not used ("Think of it as a bug in the
standard" - Tanenbaum)
• VCI: Virtual Channel Identifier selects a virtual channel (circuit) within the chosen virtual
path
• PTI: Payload Type Identifier indicates the kind of information carried in the cell, in
accordance with:
(Tanenbaum p452, fig. 5-63)
• CLP: Cell Loss Priority, can be set by host to differentiate between high priority traffic
and low priority traffic. If congestion occurs and cells must be discarded, switches first
attempt to discard cells with CLP = 1
• HEC: Header Error Checksum, an 8-bit CRC on the first 4 octets of the header
• Logical connections in ATM are referred to as virtual channel connections (VCCs) (analogous to
X.25 virtual circuits)
• As well as virtual channels, ATM supports virtual paths. A virtual path connection (VPC) is a
bundle of VCCs that have the same end points. All VCCs in a VPC are switched together
(Tanenbaum p451, fig.5-61)
ATM Logical Connections
• The intention of the ATM designers was that routing between interior switches is done on the VPI
field of cells - the VCI field is used at the last hop between a switch and a host. This has a number
of advantages:
• Once a virtual path has been established between a source and a destination, a new virtual
channel can be set up by the end users - no network routing decisions have to be made
• Routing is done on a 12-bit number (VPI) rather than a 12-bit number and a 16-bit
number (VPI + VCI)
• Routing on virtual paths makes it easy to re-route a whole group of virtual circuits (e.g. in
case of link or switch failure)
• Virtual paths make it easy for carriers to offer closed user groups (private networks)
• Whether real switches will actually use the VPI for routing as planned remains to be seen
• ATM Layer: deals with cells and cell transport; defines layout of cells and deals with establishment
and release of virtual circuits. Common to all AAL services.
• ATM Adaptation Layer (AAL): Provides a range of service types/classes for the transport of the
byte streams/message units generated by higher layers
• User Plane: provides for user information transfer along with associated controls (e.g.
flow control, error control)
• The AAL provides an adaptation (convergence) functions between the class of service provided to
the user layer (e.g. message transfer) and the cell based service provided by the ATM layer
• A number of different classes of services (A, B, C and D) are defined within the AAL layer
together with a corresponding set of protocols (AAL type 1, AAL type 2, AAL type 3/4, AAL type
5)
Class A CBR Constant bit rate, connection oriented, synchronous traffic, (e.g. uncompressed
voice or video)
Class B VBR-RT Variable bit rate, real time, connection oriented (e.g. real time
videoconferencing)
Class B VBR-NRT As above but not real time. (e.g. video playback, multi-media)
Class C ABR Available bit rate, connection oriented (e.g. asynchronous traffic such as X.25 or
Frame Relay over ATM, browsing the web)
Class D UBR Unspecified bit rate, connectionless packet data (e.g. background file transfer)
Quality of Service
• When an ATM virtual channel is established, the user transport layer ("the customer") and the
ATM network layer ("the carrier") must agree on a contract defining the service
• To enable a concrete definition of traffic contracts, ATM defines a number of Quality of Service
(QoS) parameters.
Quality of Service
Parameter Meaning
Peak cell rate Maximum rate at which cells can be sent
Cell transfer delay How long delivery takes (mean and maximum)
Quality of Service
Parameter Meaning
• The mechanism for enforcing quality of service parameters is based on a specific algorithm - the
Generic Cell Rate Algorithm (GCRA)
• GCRA has two parameters: the peak cell rate (PCR) and the cell delay variation tolerance
(CDVT); the reciprocal of PCR, T = 1/PCR, is the minimum cell interarrival time
• A sender is always permitted to space consecutive cells by T or greater - any cell arriving
more than T after the previous cell is said to be conforming
• Cells arriving more than L seconds early are said to be non-conforming; depending on the
carrier, non-conforming cells are either discarded or have their priorities set to low (L is
usually set equal to the CDVT)
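The conformance test above can be sketched in the standard "virtual scheduling" formulation of the GCRA (parameter names are illustrative):

```python
def gcra_conforming(arrivals, T, L):
    """Classify a sequence of cell arrival times with the GCRA (virtual
    scheduling form).  T = 1/PCR is the minimum cell spacing; L is the
    tolerance (usually the CDVT).  Returns True/False per cell."""
    tat = 0.0            # theoretical arrival time of the next cell
    result = []
    for t in arrivals:
        if t < tat - L:
            result.append(False)      # more than L early: non-conforming
        else:
            result.append(True)       # conforming cells advance the TAT
            tat = max(t, tat) + T
    return result

# T = 10 time units between cells, tolerance L = 2
print(gcra_conforming([0, 10, 15, 19, 30], T=10, L=2))
# the third cell (t=15) is more than 2 early and is non-conforming
```

Note that a non-conforming cell does not advance the theoretical arrival time, so it does not penalise later cells.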
Congestion Control
• After every k data cells, a host transmits a special RM (Resource Management) cell
(indicated by 110 in the cell payload type field) - the cell travels the same path as data
cells, but is treated specially by the switches along the way (see below)
(Tanenbaum p470, fig. 5-75)
• The RM cell sent contains the rate at which the sender would currently like to transmit
(possibly the Peak Cell Rate, possibly less) - this is called the ER (Explicit Rate)
• As the RM cell passes through the various switches on its way to the receiver, those that
are congested reduce ER (no switch may increase it!)
• When the sender gets the RM cell back, it can adjust its actual cell rate to bring it into line
with what the slowest switch can handle
http://www.webproforum.com/nortel2
Lecture 17: The Transport Layer
"The transport layer is not just another layer. It is the heart of the whole protocol hierarchy. Its
task is to provide reliable, cost-effective data transport from the source machine to the destination
machine, independent of the physical network or networks currently in use."
Andrew Tanenbaum
__________________________________________________
• Introduction
• Implementation Issues
Introduction
• The purpose of the Transport Layer is to provide to applications a message transfer facility which
is independent of the underlying network.
• Like other functional layers in a layered architecture model, the Transport Layer is defined in terms
of (i) the services it offers to the layer above, (ii) peer-peer protocols and the (iii) services it uses
from the layer below: (Halsall p657, Fig. 11.10)
• Addressing
• Establishing Connections
• Transfer of Data
• Releasing Connections
• Congestion Control
• Multiplexing
• Crash Recovery
• In the TCP/IP model, the transport layer provides TWO services: User Datagram Protocol (UDP) and Transmission Control Protocol (TCP)
• Both build on the layer 3 Internet Protocol (IP) which implements datagram packet switching.
• UDP is 'connectionless' and does not provide sequencing or flow control. It is used for fast 'one-
shot' message exchanges.
• TCP is 'connection-oriented', provides reliable data transfer, and implements flow control,
congestion control etc.
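The difference in service shows up directly in Python's socket API: UDP needs no connection and delivers self-contained datagrams, while TCP requires an explicit connect/accept handshake before a reliable byte stream flows. A loopback sketch:

```python
import socket

# UDP: connectionless - one datagram, no sequencing or flow control.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))                  # OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"one-shot message", rx.getsockname())
data, addr = rx.recvfrom(1024)
print(data)                                # b'one-shot message'
tx.close(); rx.close()

# TCP: connection-oriented - explicit connection set-up, reliable byte stream.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0)); srv.listen(1)
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())             # three-way handshake happens here
conn, _ = srv.accept()
cli.sendall(b"reliable stream")
print(conn.recv(1024))                     # b'reliable stream'
for s in (cli, conn, srv): s.close()
```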
Implementation Issues
(or "What does the software look like?")
• Each layer is usually implemented as one or more 'tasks' or 'processes', which implement the layer
protocol.
• Layers (Tasks) communicate with each other via first-in-first-out (FIFO) queues of 'Event Control
Blocks': (Halsall p686, Fig. 11.31)
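A minimal sketch of this structure, using Python threads for the layer tasks and the thread-safe queue.Queue as the FIFO between them (the dictionary standing in for an Event Control Block, and all names, are illustrative):

```python
import queue
import threading

def transport_layer(down_q):
    """Upper-layer task: builds an ECB and queues it for the layer below."""
    ecb = {"event": "DATA_REQ", "data": b"user data"}
    down_q.put(ecb)

def network_layer(down_q, results):
    """Lower-layer task: blocks on its input queue, then runs its protocol."""
    ecb = down_q.get()                      # wait for an ECB to arrive
    if ecb["event"] == "DATA_REQ":
        results.append(b"HDR|" + ecb["data"])   # add this layer's header

q, out = queue.Queue(), []
t_net = threading.Thread(target=network_layer, args=(q, out))
t_tra = threading.Thread(target=transport_layer, args=(q,))
t_net.start(); t_tra.start()
t_tra.join(); t_net.join()
print(out[0])     # b'HDR|user data'
```

The queue decouples the two tasks: each layer runs at its own pace and the FIFO preserves the order of events, which is the essential property of the ECB scheme described above.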
Application-oriented Layers in the OSI and TCP/IP Models
Session Layer
• Main functions:
- establish synchronisation points during a dialog and, in the event of errors, resume the
dialog from an agreed synchronisation point.
• Different applications, operating systems, programming languages etc. typically use different
representations for data.
• The aim of the presentation layer is to ensure that messages exchanged between two application
processes have a common meaning or shared semantics.
• ISO has defined the Abstract Syntax Notation 1 (ASN.1) - a data description language and a set of
encoding rules - which defines a transfer syntax for unambiguously converting data structures to a
sequence of bytes for transmission (and for unambiguously decoding at the receiver).
• ASN.1 compilers are available for a range of programming languages - these compilers generate
encoding and decoding functions to convert between specific language data types and the ASN.1
representation.
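As a flavour of what such encoding rules specify, the sketch below hand-encodes a non-negative INTEGER following ASN.1's Basic Encoding Rules (tag octet, length octet, big-endian content octets). A real system would use an ASN.1 compiler rather than code like this:

```python
def ber_integer(n):
    """Encode a small non-negative integer as ASN.1 BER/DER:
    tag 0x02 (INTEGER), a one-octet length, then the value itself."""
    body = n.to_bytes(max(1, (n.bit_length() + 8) // 8), "big")
    # (bit_length + 8) // 8 leaves a leading 0x00 octet when the top bit
    # is set, so the value cannot be mistaken for a negative number
    return bytes([0x02, len(body)]) + body

print(ber_integer(5).hex())     # 020105
print(ber_integer(300).hex())   # 0202012c
print(ber_integer(128).hex())   # 02020080 - note the padding octet
```

Because both ends agree on these rules, the byte sequence has exactly one decoding regardless of the machines' native integer sizes or byte orders - which is the whole point of a transfer syntax.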
• The Data Encryption Standard (DES), defined by the US National Bureau of Standards, is one
commonly used encryption technique.
• DES is a block cipher - it operates on fixed size blocks of data (64 bits) using a 56 bit encryption
key. The same key is used for decryption.
• DES uses a combination of substitution (replace a group of bits by another) and transposition
(change the order of the bits) using "S" and "P" boxes (implemented in fast hardware):
(Halsall Figs.12.17, 12.18, p721, 722)
• DES uses 19 stages of transposition/substitution, with different "sub-keys" derived from the 56 bit
encryption key used at each stage:
(Halsall Fig.12.19, p723)
• Using a 56 bit key gives around 10^17 possible keys. Many believe that this is insufficient. (See
Tanenbaum's book for a good discussion of this).
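The substitution/transposition idea can be illustrated with a toy 8-bit round. This is NOT real DES (DES works on 64-bit blocks with eight 6-to-4-bit S-boxes over 16 Feistel rounds); the S-box, permutation and sub-key below are invented purely for illustration:

```python
# Toy one-round substitution/transposition cipher on a single byte.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]   # "S" box: 4-bit substitution
PERM = [1, 5, 2, 0, 3, 7, 4, 6]                    # "P" box: bit transposition

def round_fn(block, subkey):
    x = block ^ subkey                             # mix in this stage's sub-key
    x = (SBOX[x >> 4] << 4) | SBOX[x & 0xF]        # substitute each 4-bit half
    return sum(((x >> src) & 1) << dst             # reorder the bits
               for dst, src in enumerate(PERM))

print(f"{round_fn(0b10110010, 0x3A):08b}")         # 01001000
print(2 ** 56)   # 72057594037927936 - the ~10^17 keys mentioned above
```

Real DES chains many such stages, so that every output bit ends up depending on every input bit and every key bit.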
• Using identical keys for encryption and decryption suffers from the key distribution problem - the
selected key needs to be sent to the receiver and may be intercepted.
• Public Key Cryptography provides an elegant method for overcoming the key distribution
problem.
- Each user has a pair of keys, one for encryption and one for decryption, chosen so
that the decryption key cannot feasibly be deduced from the encryption key.
- Under these conditions, the encryption key can be made public - a recipient, A, generates
encryption and decryption keys EA and DA. The encryption algorithm and EA are made
public. Anyone can send encrypted data to A using EA but only A can decrypt it.
- One commonly used algorithm for calculating E and D is the RSA algorithm (Rivest,
Shamir and Adleman) which uses number theory to generate the key pairs.
(Halsall Fig.12.21, p727)
• Public Key Cryptography can also be used as a method of message authentication. The idea here is
that DA is made public and EA is kept secret. If someone receives a message which can be
decrypted using DA, it must have come from A since only A knows EA.
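Both uses - secrecy with a published key, and authentication by swapping which key of the pair is kept secret - can be demonstrated with textbook-sized RSA numbers. A toy sketch (tiny primes only; real keys use primes hundreds of digits long, and pow(e, -1, phi) needs Python 3.8+):

```python
# Toy RSA key generation and use.
p, q = 61, 53
n = p * q                        # 3233 - the modulus, part of both keys
phi = (p - 1) * (q - 1)          # 3120
e = 17                           # public exponent, coprime with phi
d = pow(e, -1, phi)              # 2753 - private exponent: d*e = 1 (mod phi)

m = 42                           # message, must be smaller than n
c = pow(m, e, n)                 # anyone encrypts with the public key
print(pow(c, d, n))              # only the key owner decrypts -> 42

# Authentication: swap the roles - "sign" with the secret exponent,
# and anyone can verify using the published one.
sig = pow(m, d, n)
print(pow(sig, e, n))            # recovers 42, so the message came from A
```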
• Part of the OSI Application layer, CASEs (Common Application Service Elements) provide general
support functions to specific application services such as File Transfer, E-mail etc.
• Application protocols exist in both ISO-OSI and TCP/IP stacks which provide services such as:
remote terminal access; file transfer; electronic mail; network management; WWW; directory
services...
• The way in which ISO-OSI and TCP/IP application protocols operate is somewhat different:
• the ISO-OSI model provides extensive application support protocols and Presentation and
Session layers
• in the TCP/IP stack, the application protocol communicates directly with the transport
layer
• Both ISO-OSI and TCP/IP use the idea of virtual devices (although at different layers)
• telnet
• Enables a user at a terminal to log in to a remote machine and use it as if the terminal
were directly connected to it.
• Enables a user to send and receive files to/from a remote file system
• Provides a network-wide mail transfer service between the mail systems associated with
different machines
• Enables a user (e.g. the network manager) to gather performance data or to control the
operation of network elements (e.g. bridges) via the network itself
• VT (Virtual Terminal)
• Provides a facility for a user application process (AP) to submit a job to a remote AP for
processing
• Provides a standard protocol for manufacturing related messages (e.g. for robot or
numerical machine control)