
UNIT I – DATA NETWORK FUNDAMENTALS

Network hierarchy and switching – Open system interconnection


model of ISO – Data link control protocol – BISYNC – SDLC –
HDLC – Media access protocol – Command – Token passing –
CSMA/CD, TCP/IP.

Network switch
A network switch or switching hub is a computer networking device that connects network
segments. The term commonly refers to a network bridge that processes and routes data at the
data link layer (layer 2) of the OSI model. Switches that additionally process data at the network
layer (layer 3 and above) are often referred to as layer-3 switches or multilayer switches. The
term network switch does not generally encompass unintelligent or passive network devices
such as hubs and repeaters.

Function
The network switch, or packet switch (or just switch), plays an integral part in most Ethernet local
area networks (LANs). Mid-to-large sized LANs contain a number of linked managed switches.
Small office/home office (SOHO) applications typically use a single switch, or an all-purpose
converged device such as a gateway providing access to small-office/home broadband services
such as DSL or cable Internet. In most of these cases, the end-user device contains a router
and components that interface to the particular physical broadband technology, as in Linksys 8-
port and 48-port devices. User devices may also include a telephone interface for VoIP.

A standard 10/100 Ethernet switch operates at the data-link layer of the OSI model to create a separate
collision domain for each switch port. If there are four computers (e.g., A, B, C, and D) on four switch
ports, then A and B can transfer data back and forth while C and D do the same simultaneously,
and the two "conversations" will not interfere with one another. In the case of a hub, all ports
would share the bandwidth and run in half duplex, resulting in collisions, which would then
necessitate retransmissions. Using a switch in this way is called micro-segmentation.

Role of switches in networks


Switches may operate at one or more OSI layers, including physical, data link, network, or
transport (i.e., end-to-end). A device that operates simultaneously at more than one of these
layers is known as a multilayer switch.
In switches intended for commercial use, built-in or modular interfaces make it possible to
connect different types of networks, including Ethernet, Fibre Channel, ATM, ITU-T G.hn and
802.11. This connectivity can be at any of the layers mentioned. While Layer 2 functionality is
adequate for speed-shifting within one technology, interconnecting technologies such as Ethernet
and token ring are easier at Layer 3.

Interconnection of different Layer 3 networks is done by routers. If there are any features that
characterize "Layer-3 switches" as opposed to general-purpose routers, it tends to be that they are
optimized, in larger switches, for high-density Ethernet connectivity.

Layer-specific functionality
(Figure: a modular network switch with three network modules, providing a total of 24 Ethernet and
14 Fast Ethernet ports, and one power supply.)

While switches may learn about topologies at many layers, and forward at one or more layers,
they do tend to have common features. Other than for high-performance applications, modern
commercial switches use primarily Ethernet interfaces, which can have different input and output
speeds of 10, 100, 1000 or 10,000 megabits per second. Switch ports almost always default to
Full duplex operation, unless there is a requirement for interoperability with devices that are
strictly Half duplex. Half duplex means that the device can only send or receive at any given
time, whereas Full duplex can send and receive at the same time.

At any layer, a modern switch may implement power over Ethernet (PoE), which avoids the need
for attached devices, such as an IP telephone or wireless access point, to have a separate power
supply. Since switches can have redundant power circuits connected to uninterruptible power
supplies, the connected device can continue operating even when regular office power fails.

Layer-1 hubs versus higher-layer switches

A network hub, or repeater, is a fairly unsophisticated network device. Hubs do not manage any
of the traffic that comes through them. Any packet entering a port is broadcast out or "repeated"
on every other port, except for the port of entry. Since every packet is repeated on every other
port, packet collisions result, which slows down the network.

There are specialized applications where a hub can be useful, such as copying traffic to multiple
network sensors. High-end switches have a feature called port mirroring that does the same
thing. There is no longer any significant price difference between a hub and a low-end
switch.[6]

Layer 2

A network bridge, operating at the Media Access Control (MAC) sublayer of the data link layer,
may interconnect a small number of devices in a home or office. This is a trivial case of bridging,
in which the bridge learns the MAC address of each connected device. Single bridges also can
provide extremely high performance in specialized applications such as storage area networks.
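The backward-learning behaviour described above can be sketched in a few lines of Python. This is purely illustrative; the class and method names are hypothetical and not taken from any real switch API:

```python
# Minimal sketch of transparent-bridge MAC learning. A frame's source address
# teaches the bridge which port that station lives on; the destination address
# is then looked up to decide whether to forward, filter, or flood.

class LearningBridge:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Learn the source, then decide where the frame should go."""
        self.mac_table[src_mac] = in_port        # backward learning
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            return "flood"                       # unknown destination: all ports but in_port
        if out_port == in_port:
            return "filter"                      # destination is on the same segment
        return out_port                          # forward out the learned port

bridge = LearningBridge()
print(bridge.receive(1, "AA", "BB"))  # BB not learned yet -> flood
print(bridge.receive(2, "BB", "AA"))  # AA was learned on port 1 -> 1
print(bridge.receive(1, "AA", "BB"))  # BB was learned on port 2 -> 2
```

Note that flooding on an unknown destination is what makes the scheme self-configuring: the first reply from the destination teaches the bridge its port.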

Classic bridges may also interconnect using a spanning tree protocol that disables links so that
the resulting local area network is a tree without loops. In contrast to routers, spanning tree
bridges must have topologies with only one active path between two points. The older IEEE
802.1D spanning tree protocol could be quite slow, with forwarding stopping for 30 seconds
while the spanning tree reconverged. A Rapid Spanning Tree Protocol was introduced as
IEEE 802.1w, and the newest edition, IEEE 802.1D-2004, adopts the 802.1w extensions as the
base standard. The IETF is specifying the TRILL protocol, which is the application of link-state
routing technology to the layer-2 bridging problem. Devices that implement TRILL, called
RBridges, combine the best features of both routers and bridges.

While "layer 2 switch" remains more of a marketing term than a technical term, the
products that were introduced as "switches" tended to use microsegmentation and full duplex to
prevent collisions among devices connected to Ethernets. By using an internal forwarding plane
much faster than any interface, they give the impression of simultaneous paths among multiple
devices.

Once a bridge learns the topology through a spanning tree protocol, it forwards data link layer
frames using a layer 2 forwarding method. There are four forwarding methods a bridge can use,
of which the second through fourth were performance-increasing methods when used on
"switch" products with the same input and output port speeds:

1. Store and forward: The switch buffers and, typically, performs a checksum on each frame before
forwarding it.
2. Cut through: The switch reads only up to the frame's hardware address before starting to
forward it. There is no error checking with this method.

3. Fragment free: A method that attempts to retain the benefits of both "store and forward" and
"cut through". Fragment free checks the first 64 bytes of the frame, where addressing
information is stored. According to Ethernet specifications, collisions should be detected during
the first 64 bytes of the frame, so frames that are in error because of a collision will not be
forwarded. This way the frame will always reach its intended destination. Error checking of the
actual data in the packet is left for the end device in Layer 3 or Layer 4 (OSI), typically a router.

4. Adaptive switching: A method of automatically switching between the other three modes.

Cut-through switches have to fall back to store and forward if the outgoing port is busy at the
time the packet arrives. While there are specialized applications, such as storage area networks,
where the input and output interfaces are the same speed, this is rarely the case in general LAN
applications. In LANs, a switch used for end user access typically concentrates lower speed (e.g.,
10/100 Mbit/s) into a higher speed (at least 1 Gbit/s). Alternatively, a switch that provides access
to server ports usually connects to them at a much higher speed than is used by end user
devices. Cypress Semiconductor, a design and manufacturing company, together with TPACK,
offers a reference design that provides the flexibility to cope with various system architectures
for Ethernet switches; the reference design involves the TPX4004 and CY7C15632KV18 72-Mbit SRAMs.
Layer 3

Within the confines of the Ethernet physical layer, a layer 3 switch can perform some or all of
the functions normally performed by a router. A true router is able to forward traffic from one
type of network connection (e.g., T1, DSL) to another (e.g., Ethernet, WiFi).

The most common layer-3 capability is awareness of IP multicast. With this awareness, a layer-3
switch can increase efficiency by delivering the traffic of a multicast group only to ports where
the attached device has signaled that it wants to listen to that group. If a switch is not aware of
multicast, multicast frames, like broadcasts, are flooded on all ports of each broadcast domain,
which in the case of IP multicast causes inefficient use of bandwidth. To work around this
problem some switches implement IGMP snooping.
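The state an IGMP-snooping switch keeps can be sketched as a map from multicast group to member ports. The class and addresses below are hypothetical, for illustration only:

```python
# Sketch of IGMP snooping: the switch watches IGMP membership reports and
# leave messages, and forwards a group's traffic only to ports that joined.

class IgmpSnooper:
    def __init__(self, all_ports):
        self.all_ports = set(all_ports)
        self.groups = {}  # multicast group -> set of member ports

    def report(self, group, port):
        """An IGMP Membership Report was snooped on this port."""
        self.groups.setdefault(group, set()).add(port)

    def leave(self, group, port):
        """An IGMP Leave Group was snooped on this port."""
        self.groups.get(group, set()).discard(port)

    def forward_ports(self, group, in_port):
        members = self.groups.get(group)
        if members is None:                 # no snooped state: flood like a broadcast
            return self.all_ports - {in_port}
        return members - {in_port}

sw = IgmpSnooper([1, 2, 3, 4])
sw.report("239.1.1.1", 2)
sw.report("239.1.1.1", 4)
print(sorted(sw.forward_ports("239.1.1.1", 1)))  # [2, 4]
```

A group with no snooped state still floods, which is the safe fallback the text describes for switches without multicast awareness.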

Layer 4

While the exact meaning of the term Layer-4 switch is vendor-dependent, it almost always starts
with a capability for network address translation, but then adds some type of load distribution
based on TCP sessions.

The device may include a stateful firewall, a VPN concentrator, or be an IPSec security gateway.

Layer 7

Layer 7 switches may distribute loads based on URL or by some installation-specific technique
to recognize application-level transactions. A Layer-7 switch may include a web cache and
participate in a content delivery network.[9]

Configuration options

 Unmanaged switches — These switches have no configuration interface or options. They are
plug and play. They are typically the least expensive switches, found in home, SOHO, or small
businesses. They can be desktop or rack mounted.
 Managed switches — These switches have one or more methods to modify the operation of the
switch. Common management methods include: a serial console or command line interface (CLI
for short) accessed via telnet or Secure Shell, an embedded Simple Network Management
Protocol (SNMP) agent allowing management from a remote console or management station, or
a web interface for management from a web browser. Examples of configuration changes that
one can do from a managed switch include: enable features such as Spanning Tree Protocol, set
port speed, create or modify Virtual LANs (VLANs), etc. Two sub-classes of managed switches
are marketed today:

o Smart (or intelligent) switches — These are managed switches with a limited set of
management features. Likewise "web-managed" switches are switches which fall in a
market niche between unmanaged and managed. For a price much lower than a fully
managed switch they provide a web interface (and usually no CLI access) and allow
configuration of basic settings, such as VLANs, port-speed and duplex. [10]
o Enterprise Managed (or fully managed) switches — These have a full set of management
features, including Command Line Interface, SNMP agent, and web interface. They may
have additional features to manipulate configurations, such as the ability to display,
modify, backup and restore configurations. Compared with smart switches, enterprise
switches have more features that can be customized or optimized, and are generally
more expensive than "smart" switches. Enterprise switches are typically found in
networks with larger number of switches and connections, where centralized
management is a significant savings in administrative time and effort. A stackable switch
is a version of an enterprise-managed switch.

Traffic monitoring on a switched network

Unless port mirroring or other methods such as RMON or SMON are implemented in a switch,[11]
it is difficult to monitor traffic that is bridged using a switch because all ports are isolated
until one transmits data, and even then only the sending and receiving ports can see the traffic.
These monitoring features are rarely present on consumer-grade switches.

Two popular methods that are specifically designed to allow a network analyst to monitor traffic
are:

 Port mirroring — the switch sends a copy of network packets to a monitoring network
connection.
 SMON — "Switch Monitoring" is described by RFC 2613 and is a protocol for controlling facilities
such as port mirroring.

Another method to monitor may be to connect a Layer-1 hub between the monitored device and
its switch port. This will induce minor delay, but will provide multiple interfaces that can be used
to monitor the individual switch port.

The Open System Interconnection Reference Model

The OSI Reference Model (or OSI Model) is an abstract description for layered communications
and computer network protocol design. It was developed as part of the Open Systems
Interconnection (OSI) initiative.[1] In its most basic form, it divides network architecture into
seven layers which, from top to bottom, are the Application, Presentation, Session, Transport,
Network, Data Link, and Physical Layers. It is therefore often referred to as the OSI Seven
Layer Model.

A layer is a collection of conceptually similar functions that provides services to the layer above it
and receives services from the layer below it. On each layer, an instance provides services to the
instances at the layer above and requests services from the layer below. For example, a layer that
provides error-free communications across a network provides the path needed by applications
above it, while it calls the next lower layer to send and receive the packets that make up the contents
of the path.
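This service relationship can be illustrated with a toy encapsulation sketch. It is purely illustrative: real protocols add binary headers, not text tags, and not every layer adds one:

```python
# Each layer wraps the payload handed down from the layer above with its own
# header; the receiving stack unwraps them in reverse order.

LAYERS = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

def encapsulate(payload):
    for layer in LAYERS:                  # walk down the sending stack
        payload = f"[{layer}]{payload}"   # this layer's "header" goes outermost so far
    return payload

def decapsulate(pdu):
    for layer in reversed(LAYERS):        # walk up the receiving stack
        prefix = f"[{layer}]"
        assert pdu.startswith(prefix), f"expected {layer} header"
        pdu = pdu[len(prefix):]
    return pdu

wire = encapsulate("user data")
print(wire)         # [Physical][Data Link][Network]...[Application]user data
print(decapsulate(wire))  # user data
```

The outermost wrapper belongs to the lowest layer, which is why the Physical Layer's framing is the first thing a receiver sees.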

Description of OSI layers


OSI Model

              Data unit   Layer             Function
Host layers   Data        7. Application    Network process to application
              Data        6. Presentation   Data representation, encryption and decryption
              Data        5. Session        Interhost communication
              Segments    4. Transport      End-to-end connections and reliability, flow control
Media layers  Packets     3. Network        Path determination and logical addressing
              Frames      2. Data Link      Physical addressing
              Bits        1. Physical       Media, signal and binary transmission

Layer 1: Physical Layer

The Physical Layer defines the electrical and physical specifications for devices. In particular, it
defines the relationship between a device and a physical medium. This includes the layout of
pins, voltages, cable specifications, hubs, repeaters, network adapters, host bus adapters (HBAs,
used in storage area networks) and more. To understand the function of the Physical Layer,
contrast it with the functions of the Data Link Layer. Think of the Physical Layer as concerned
primarily with the interaction of a single device with a medium, whereas the Data Link Layer is
concerned more with the interactions of multiple devices (i.e., at least two) with a shared
medium. Standards such as RS-232 do use physical wires to control access to the medium.

The major functions and services performed by the Physical Layer are:

 Establishment and termination of a connection to a communications medium.


 Participation in the process whereby the communication resources are effectively shared
among multiple users. For example, contention resolution and flow control.
 Modulation, or conversion between the representation of digital data in user equipment
and the corresponding signals transmitted over a communications channel. These are
signals operating over the physical cabling (such as copper and optical fiber) or over a
radio link.

Parallel SCSI buses operate in this layer, although it must be remembered that the logical SCSI
protocol is a Transport Layer protocol that runs over this bus. Various Physical Layer Ethernet
standards are also in this layer; Ethernet incorporates both this layer and the Data Link Layer.
The same applies to other local-area networks, such as token ring, FDDI, ITU-T G.hn and IEEE
802.11, as well as personal area networks such as Bluetooth and IEEE 802.15.4.

Layer 2: Data Link Layer

The Data Link Layer provides the functional and procedural means to transfer data between
network entities and to detect and possibly correct errors that may occur in the Physical Layer.
Originally, this layer was intended for point-to-point and point-to-multipoint media,
characteristic of wide area media in the telephone system. Local area network architecture,
which included broadcast-capable multi-access media, was developed independently of the ISO
work in IEEE Project 802. The IEEE work assumed sublayering and management functions not
required for WAN use. In modern practice, only error detection, not flow control using a sliding
window, is present in data link protocols such as the Point-to-Point Protocol (PPP); on local
area networks, the IEEE 802.2 LLC layer is not used for most protocols on Ethernet, and on
other local area networks its flow control and acknowledgment mechanisms are rarely used.
Sliding-window flow control and acknowledgment are instead used at the Transport Layer by
protocols such as TCP, though they remain at the data link layer in niches where X.25 offers
performance advantages.

The ITU-T G.hn standard, which provides high-speed local area networking over existing wires
(power lines, phone lines and coaxial cables), includes a complete Data Link Layer which
provides both error correction and flow control by means of a selective-repeat sliding window
protocol. Both WAN and LAN services arrange bits from the Physical Layer into logical
sequences called frames. Not all Physical Layer bits necessarily go into frames, as some of these
bits are purely intended for Physical Layer functions. For example, every fifth bit of the FDDI bit
stream is not used by the Layer.

WAN Protocol architecture

Connection-oriented WAN data link protocols, in addition to framing, detect and may correct
errors. They are also capable of controlling the rate of transmission. A WAN Data Link Layer
might implement a sliding window flow control and acknowledgment mechanism to provide
reliable delivery of frames; that is the case for SDLC and HDLC, and derivatives of HDLC such
as LAPB and LAPD.

IEEE 802 LAN architecture

Practical, connectionless LANs began with the pre-IEEE Ethernet specification, which is the
ancestor of IEEE 802.3. This layer manages the interaction of devices with a shared medium,
which is the function of a Media Access Control sublayer. Above this MAC sublayer is the
media-independent IEEE 802.2 Logical Link Control (LLC) sublayer, which deals with
addressing and multiplexing on multiaccess media.

While IEEE 802.3 is the dominant wired LAN protocol and IEEE 802.11 the wireless LAN
protocol, obsolescent MAC layers include Token Ring and FDDI. The MAC sublayer detects but
does not correct errors.

Layer 3: Network Layer

The Network Layer provides the functional and procedural means of transferring variable length
data sequences from a source to a destination via one or more networks, while maintaining the
quality of service requested by the Transport Layer. The Network Layer performs network
routing functions, and might also perform fragmentation and reassembly, and report delivery
errors. Routers operate at this layer—sending data throughout the extended network and making
the Internet possible. This is a logical addressing scheme – values are chosen by the network
engineer. The addressing scheme is hierarchical.

Careful analysis of the Network Layer indicated that the Network Layer could have at least three
sublayers: Subnetwork Access, which considers protocols that deal with the interface to networks,
such as X.25; Subnetwork Dependent Convergence, when it is necessary to bring the level of a
transit network up to the level of networks on either side; and Subnetwork Independent
Convergence, which handles transfer across multiple networks. The best example of this latter
case is CLNP, or IPv7 ISO 8473. It manages the connectionless transfer of data one hop at a
time, from end system to ingress router, router to router, and from egress router to destination
end system. It is not responsible for reliable delivery to a next hop, but only for the detection of
errored packets so they may be discarded. In this scheme IPv4 and IPv6 would have to be classed
with X.25 as Subnet Access protocols because they carry interface addresses rather than node
addresses.

A number of layer management protocols, a function defined in the Management Annex, ISO
7498/4, belong to the Network Layer. These include routing protocols, multicast group
management, Network Layer information and error, and Network Layer address assignment. It is
the function of the payload that makes these belong to the Network Layer, not the protocol that
carries them.

Layer 4: Transport Layer

The Transport Layer provides transparent transfer of data between end users, providing reliable
data transfer services to the upper layers. The Transport Layer controls the reliability of a given
link through flow control, segmentation/desegmentation, and error control. Some protocols are
state and connection oriented. This means that the Transport Layer can keep track of the
segments and retransmit those that fail.

Examples of Layer 4 protocols are the Transmission Control Protocol (TCP) and the User Datagram
Protocol (UDP). Of the actual OSI protocols, there are five classes of connection-mode transport
protocols, ranging from class 0 (also known as TP0, providing the fewest features) to class 4
(TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error
recovery and was designed for use on network layers that provide error-free connections. Class
4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI
assigns to the Session Layer. Also, all OSI TP connection-mode protocol classes provide
expedited data and preservation of record boundaries, both of which TCP is incapable of.[4]

Higher layers may have the equivalent of double envelopes, such as cryptographic presentation
services that can be read by the addressee only. Roughly speaking, tunneling protocols operate at
the Transport Layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over
an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation
(GRE) might seem to be a Network Layer protocol, if the encapsulation of the payload takes
place only at the endpoints, GRE becomes closer to a transport protocol that uses IP headers but
contains complete frames or packets to deliver to an endpoint. L2TP carries PPP frames inside
transport packets.

Layer 5: Session Layer

The Session Layer controls the dialogues (connections) between computers. It establishes,
manages and terminates the connections between the local and remote application. It provides for
full-duplex, half-duplex, or simplex operation, and establishes checkpointing, adjournment,
termination, and restart procedures. The OSI model made this layer responsible for graceful
close of sessions, which is a property of the Transmission Control Protocol, and also for session
checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The Session
Layer is commonly implemented explicitly in application environments that use remote
procedure calls.

Layer 6: Presentation Layer

The Presentation Layer establishes a context between Application Layer entities, in which the
higher-layer entities can use different syntax and semantics, as long as the presentation service
understands both and the mapping between them. The presentation service data units are then
encapsulated into Session Protocol data units, and moved down the stack.

This layer provides independence from differences in data representation (e.g., encryption) by
translating from application to network format, and vice versa. The presentation layer works to
transform data into the form that the application layer can accept. This layer formats and
encrypts data to be sent across a network, providing freedom from compatibility problems. It is
sometimes called the syntax layer. The original presentation structure used the basic encoding
rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an
EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data
structures from and to XML.
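The EBCDIC-to-ASCII conversion mentioned above can be demonstrated with Python's standard codecs, which include EBCDIC code page 500 (cp500 is just one EBCDIC variant, chosen here for illustration):

```python
# Presentation-layer syntax conversion in miniature: the same abstract text
# has different concrete byte representations in EBCDIC and ASCII.

ebcdic_bytes = "HELLO, WORLD".encode("cp500")   # EBCDIC (code page 500) bytes
text = ebcdic_bytes.decode("cp500")             # back to abstract characters
ascii_bytes = text.encode("ascii")              # ASCII bytes

print(ebcdic_bytes.hex())   # EBCDIC byte values differ from ASCII ones
print(ascii_bytes)          # b'HELLO, WORLD'
```

The point is that the "same" text is a different byte sequence under each syntax, which is exactly the mismatch the presentation service resolves.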

Layer 7: Application Layer


The application layer is the OSI layer closest to the end user, which means that both the OSI
application layer and the user interact directly with the software application. This layer interacts
with software applications that implement a communicating component. Such application
programs fall outside the scope of the OSI model. Application layer functions typically include
identifying communication partners, determining resource availability, and synchronizing
communication. When identifying communication partners, the application layer determines the
identity and availability of communication partners for an application with data to transmit.
When determining resource availability, the application layer must decide whether sufficient
network resources for the requested communication exist. In synchronizing communication, all
communication between applications requires cooperation that is managed by the application
layer. Some examples of application layer implementations include the Hypertext Transfer Protocol
(HTTP), File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP) and X.400 Mail.

Comparison with TCP/IP


In the TCP/IP model of the Internet, protocols are deliberately not as rigidly designed into strict
layers as the OSI model.[6] RFC 3439 contains a section entitled "Layering considered harmful."
However, TCP/IP does recognize four broad layers of functionality which are derived from the
operating scope of their contained protocols, namely the scope of the software application, the
end-to-end transport connection, the internetworking range, and lastly the scope of the direct
links to other nodes on the local network.

Even though the concept is different from the OSI model, these layers are nevertheless often
compared with the OSI layering scheme in the following way: The Internet Application Layer
includes the OSI Application Layer, Presentation Layer, and most of the Session Layer. Its end-
to-end Transport Layer includes the graceful close function of the OSI Session Layer as well as
the OSI Transport Layer. The internetworking layer (Internet Layer) is a subset of the OSI
Network Layer (see above), while the Link Layer includes the OSI Data Link and Physical
Layers, as well as parts of OSI's Network Layer. These comparisons are based on the original
seven-layer protocol model as defined in ISO 7498, rather than refinements in such things as the
internal organization of the Network Layer document.

The presumably strict peer layering of the OSI model as it is usually described does not present
contradictions in TCP/IP, as it is permissible that protocol usage does not follow the hierarchy
implied in a layered model. Such examples exist in some routing protocols (e.g., OSPF), or in the
description of tunneling protocols, which provide a Link Layer for an application, although the
tunnel host protocol may well be a Transport or even an Application Layer protocol in its own right.

Data link control protocol


A communication protocol that converts noisy (error-prone) data links into communication
channels free of transmission errors. Data is broken into frames, each of which is protected by a
checksum. Frames are retransmitted as many times as needed to accomplish correct transmission.
A data link control protocol must prevent data loss caused by mismatched sending/receiving
capacities. A flow control procedure, usually a simple sliding window mechanism, provides this
function. Data link control protocols must provide transparent data transfer. Bit stuffing or byte
stuffing strategies are used to mask control patterns that occur in the text being transmitted.
Control frames are used to start/stop logical connections over links. Addressing may be provided
to support several virtual connections on the same physical link.
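A minimal sender-side sketch of the sliding-window flow control mentioned above follows. It assumes cumulative acknowledgements; the window size and class names are illustrative, not from any standard:

```python
# Sender side of a simple sliding window: at most WINDOW frames may be
# outstanding (sent but unacknowledged) at any time.

WINDOW = 4

class SlidingWindowSender:
    def __init__(self, frames):
        self.frames = frames
        self.base = 0            # oldest unacknowledged frame
        self.next_seq = 0        # next frame to transmit

    def send_all_allowed(self):
        """Transmit every frame the window currently permits."""
        limit = min(self.base + WINDOW, len(self.frames))
        sent = self.frames[self.next_seq:limit]
        self.next_seq = limit
        return sent

    def ack(self, seq):
        """Cumulative ACK: everything up to and including seq was delivered."""
        self.base = max(self.base, seq + 1)

s = SlidingWindowSender(["F0", "F1", "F2", "F3", "F4", "F5"])
print(s.send_all_allowed())   # ['F0', 'F1', 'F2', 'F3']  window is now full
s.ack(1)                      # F0 and F1 acknowledged
print(s.send_all_allowed())   # ['F4', 'F5']  window slid forward by two
```

Acknowledgements advance the window's base, which is what throttles a fast sender to the receiver's pace.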

High-Level Data Link Control (HDLC)


It is a bit-oriented synchronous data link layer protocol developed by the International
Organization for Standardization (ISO). The original ISO standards for HDLC are:

 ISO 3309 — Frame Structure


 ISO 4335 — Elements of Procedure
 ISO 6159 — Unbalanced Classes of Procedure
 ISO 6256 — Balanced Classes of Procedure

The current standard for HDLC is ISO 13239, which replaces all of those standards.

HDLC provides both connection-oriented and connectionless service.

HDLC can be used for point to multipoint connections, but is now used almost exclusively to
connect one device to another, using what is known as Asynchronous Balanced Mode (ABM).
The original master-slave modes Normal Response Mode (NRM) and Asynchronous Response
Mode (ARM) are rarely used.

HDLC is based on IBM's SDLC protocol, which is the layer 2 protocol for IBM's Systems
Network Architecture (SNA). It was extended and standardized by the ITU as LAP, while ANSI
named their essentially identical version ADCCP.

Framing

HDLC frames can be transmitted over synchronous or asynchronous links. Those links have no
mechanism to mark the beginning or end of a frame, so the beginning and end of each frame has
to be identified. This is done by using a frame delimiter, or flag, which is a unique sequence of
bits that is guaranteed not to be seen inside a frame. This sequence is '01111110', or, in
hexadecimal notation, 0x7E. Each frame begins and ends with a frame delimiter. A frame
delimiter at the end of a frame may also mark the start of the next frame. A sequence of 7 or
more consecutive 1-bits within a frame will cause the frame to be aborted.

When no frames are being transmitted on a simplex or full-duplex synchronous link, a frame
delimiter is continuously transmitted on the link. Using the standard NRZI encoding from bits to
line levels (0 bit = transition, 1 bit = no transition), this generates one of two continuous
waveforms, depending on the initial state.

This is used by modems to train and synchronize their clocks via phase-locked loops. Some
protocols allow the 0-bit at the end of a frame delimiter to be shared with the start of the next
frame delimiter, i.e. '011111101111110'.
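The NRZI mapping just described (0 = transition, 1 = no transition) can be sketched as a small encoder; the list-of-bits representation is for illustration only:

```python
# NRZI encoder: a 0-bit toggles the line level, a 1-bit holds it.
# Encoding a stream of flags from either initial level yields one of two
# complementary repeating waveforms, as the text describes.

def nrzi_encode(bits, level=0):
    out = []
    for b in bits:
        if b == 0:
            level ^= 1       # 0-bit: transition (toggle the line level)
        out.append(level)    # 1-bit: no transition (hold the level)
    return out

FLAG = [0, 1, 1, 1, 1, 1, 1, 0]          # 01111110, i.e. 0x7E
print(nrzi_encode(FLAG + FLAG))          # repeating pattern of line levels
print(nrzi_encode(FLAG + FLAG, level=1)) # the complementary waveform
```

Each flag contains two 0-bits, so continuous flags guarantee regular transitions for the receiver's phase-locked loop to lock onto.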

For half-duplex or multi-drop communication, where several transmitters share a line, a receiver
on the line will see continuous idling 1-bits in the inter-frame period when no transmitter is
active. Actual binary data could easily contain a sequence of bits that is the same as the flag
sequence, so the data's bit sequence must be modified so that it does not appear to be a frame
delimiter.

Synchronous Framing

On synchronous links, this is done with bit stuffing. Any time that 5 consecutive 1-bits appear in
the transmitted data, the data is paused and a 0-bit is transmitted. This ensures that no more than
5 consecutive 1-bits will be sent. The receiving device knows this is being done, and after seeing
5 1-bits in a row, a following 0-bit is stripped out of the received data. If the following bit is a 1-
bit, the receiver has found a flag.
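The stuffing and destuffing rules above can be sketched directly (the list-of-bits representation is just for illustration; a real transmitter works on a serial bit stream):

```python
# Bit stuffing: after five consecutive 1-bits in the payload the sender
# inserts a 0-bit; the receiver strips that 0-bit back out.

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)        # the stuffed 0-bit
            run = 0
    return out

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1               # skip the stuffed 0-bit that must follow
            run = 0
        i += 1
    return out

data = [0, 1, 1, 1, 1, 1, 1, 0]      # would look like a flag if sent raw
stuffed = bit_stuff(data)
print(stuffed)                        # [0, 1, 1, 1, 1, 1, 0, 1, 0]
assert bit_unstuff(stuffed) == data
```

A real receiver additionally checks the bit after five 1-bits: a 0 is a stuffed bit, a sixth 1 means it is looking at a flag (or, with seven or more 1-bits, an abort).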

This also (assuming NRZI with transition for 0 encoding of the output) provides a minimum of
one transition per 6 bit times during transmission of data, and one transition per 7 bit times
during transmission of flag, so the receiver can stay in sync with the transmitter. Note however,
that for this purpose encodings such as 8b/10b encoding are better suited.

HDLC transmits bytes of data with the least significant bit first (little-endian order).

Asynchronous Framing

When using asynchronous serial communication such as standard RS-232 serial ports, bits are
sent in groups of 8, and bit stuffing is inconvenient. Instead, such links use "control-octet
transparency", also called "byte stuffing" or "octet stuffing". The frame boundary octet is
01111110 (7E in hexadecimal notation). A "control escape octet" has the bit sequence
01111101 (7D hexadecimal). If either of these two octets appears in the transmitted data, an
escape octet is sent, followed by the original data octet with bit 5 inverted. For example, the data
octet 01111110 (7E hex) would be transmitted as 01111101 01011110 (7D 5E hex).
Other reserved octet values (such as XON or XOFF) can be escaped in the same way if
necessary.
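The escape rule can be sketched as a minimal Python illustration of byte stuffing (not tied to any particular HDLC library):

```python
FLAG, ESC = 0x7E, 0x7D   # frame boundary and control escape octets

def byte_stuff(payload: bytes) -> bytes:
    """Escape every flag or escape octet: send ESC, then the octet
    with bit 5 inverted (XOR 0x20)."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])
        else:
            out.append(b)
    return bytes(out)

def byte_unstuff(data: bytes) -> bytes:
    """Reverse the escaping: an ESC means 'invert bit 5 of the next octet'."""
    out, escaped = bytearray(), False
    for b in data:
        if escaped:
            out.append(b ^ 0x20)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```

Stuffing 7E yields 7D 5E and stuffing 7D yields 7D 5D, matching the example in the text.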

Structure

Flag     Address         Control       Information                    FCS            Flag
8 bits   8 or more bits  8 or 16 bits  Variable length, 0 or more bits  16 or 32 bits  8 bits
Note that the end flag of one frame may be (but does not have to be) the beginning (start) flag of
the next frame.

Data is usually sent in multiples of 8 bits, but only some variants require this; others theoretically
permit data alignments on other than 8-bit boundaries. The frame check sequence (FCS) is a 16-
bit CRC-CCITT or a 32-bit CRC-32 computed over the Address, Control, and Information fields.
It provides a means by which the receiver can detect errors that may have been induced during
the transmission of the frame, such as lost bits, flipped bits, and extraneous bits. However, given
that the algorithms used to calculate the FCS are such that the probability of certain types of
transmission errors going undetected increases with the length of the data being checked for
errors, the FCS can implicitly limit the practical size of the frame.

If the receiver's calculation of the FCS does not match that of the sender's, indicating that the
frame contains errors, the receiver can either send a negative acknowledge packet to the sender,
or send nothing. After either receiving a negative acknowledge packet or timing out waiting for a
positive acknowledge packet, the sender can retransmit the failed frame.

The FCS was implemented because many early communication links had a relatively high bit
error rate, and the FCS could readily be computed by simple, fast circuitry or software. More
effective forward error correction schemes are now widely used by other protocols.
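For illustration, the 16-bit FCS can be sketched as follows: the CRC-CCITT polynomial 0x1021 is processed bit-reversed (0x8408) because HDLC sends the least significant bit first, with initial value 0xFFFF and a final one's complement. The function name is ours; the algorithm is the standard CRC-16/X.25.

```python
def fcs16(data: bytes) -> int:
    """16-bit HDLC FCS: CRC-CCITT (x^16 + x^12 + x^5 + 1, i.e. 0x1021),
    computed LSB-first (reflected polynomial 0x8408), initial value
    0xFFFF, final one's complement."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
```

The resulting 16-bit value is appended to the frame, least significant byte first.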

Types of Stations (Computers), and Data Transfer Modes

Synchronous Data Link Control (SDLC) was originally designed to connect one computer with
multiple peripherals. The original "normal response mode" is a master-slave mode where the
computer (or primary terminal) gives each peripheral (secondary terminal) permission to
speak in turn. Because all communication is either to or from the primary terminal, frames
include only one address, that of the secondary terminal; the primary terminal is not assigned an
address. There is also a logical distinction between commands sent by the primary to a
secondary, and responses sent by a secondary to the primary. On the wire, however, commands
and responses are indistinguishable; the only difference is the direction in which they are transmitted.

Normal response mode allows operation over half-duplex communication links, as long as the
primary is aware that it may not transmit when it has given permission to a secondary.

Asynchronous response mode is an HDLC addition [1] for use over full-duplex links. While
retaining the primary/secondary distinction, it allows the secondary to transmit at any time.

Asynchronous balanced mode added the concept of a combined terminal which can act as both
a primary and a secondary. There are some subtleties about this mode of operation; while many
features of the protocol do not care whether they are in a command or response frame, some do,
and the address field of a received frame must be examined to determine whether it contains a
command (the address received is ours) or a response (the address received is that of the other
terminal).

HDLC Operations, and Frame Types


There are three fundamental types of HDLC frames.

 Information frames, or I-frames, transport user data from the network layer. In addition
they can also include flow and error control information piggybacked on data.
 Supervisory Frames, or S-frames, are used for flow and error control whenever
piggybacking is impossible or inappropriate, such as when a station does not have data to
send. S-frames do not have information fields.
 Unnumbered frames, or U-frames, are used for various miscellaneous purposes,
including link management. Some U-frames contain an information field, depending on
the type.

The general format of the control field is:

HDLC control fields (bit 7 is the most significant; bit 0 is transmitted first)


Bits 7-5   Bit 4   Bits 3-1   Bit 0
N(R)       P/F     N(S)       0          I-frame  (receive seq. no., send seq. no.)

Bits 7-5   Bit 4   Bits 3-2   Bits 1-0
N(R)       P/F     type       0 1        S-frame  (receive seq. no.)
type       P/F     type       1 1        U-frame

There are also extended (2-byte) forms of I and S frames. Again, the least significant bit
(rightmost in this table) is sent first.

Extended HDLC control fields


Bits 15-9   Bit 8   Bits 7-1   Bit 0
N(R)        P/F     N(S)       0          Extended I-frame  (receive seq. no., send seq. no.)

Bits 15-9   Bit 8   Bits 7-4   Bits 3-2   Bits 1-0
N(R)        P/F     0 0 0 0    type       0 1      Extended S-frame  (receive seq. no.)
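The basic control-field layout can be decoded with a short sketch (an illustrative helper; the field names are ours, not from any standard API):

```python
def parse_control(octet: int) -> dict:
    """Classify a basic (8-bit) HDLC control field."""
    if octet & 0x01 == 0:                        # bit 0 = 0: I-frame
        return {"type": "I", "ns": (octet >> 1) & 0x07,
                "pf": (octet >> 4) & 0x01, "nr": (octet >> 5) & 0x07}
    if octet & 0x03 == 0x01:                     # bits 1..0 = 01: S-frame
        s_types = {0: "RR", 1: "RNR", 2: "REJ", 3: "SREJ"}
        return {"type": "S", "s": s_types[(octet >> 2) & 0x03],
                "pf": (octet >> 4) & 0x01, "nr": (octet >> 5) & 0x07}
    return {"type": "U", "pf": (octet >> 4) & 0x01}  # bits 1..0 = 11: U-frame
```

For example, 0x71 decodes as an RR supervisory frame with P/F set and N(R) = 3, and 0xA4 as an I-frame with N(S) = 2 and N(R) = 5.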

The P/F bit

Poll/Final is a single bit with two names. It is called Poll when set by the primary station to
obtain a response from a secondary station, and Final when set by the secondary station to
indicate a response or the end of transmission. In all other cases, the bit is clear.

The bit is used as a token that is passed back and forth between the stations. Only one token
should exist at a time. The secondary only sends a Final when it has received a Poll from the
primary. The primary only sends a Poll when it has received a Final back from the secondary, or
after a timeout indicating that the bit has been lost.

 In NRM, possession of the poll token also grants the addressed secondary permission to
transmit. The secondary sets the F-bit in its last response frame to give up permission to
transmit. (It is equivalent to the word "Over" in radio voice procedure.)
 In ARM and ABM, the P bit forces a response. In these modes, the secondary need not
wait for a poll to transmit, so need not wait to respond with a final bit.
 If no response is received to a P bit in a reasonable period of time, the primary station
times out and sends P again.
 The P/F bit is at the heart of the basic checkpoint retransmission scheme that is required
to implement HDLC; all other variants (such as the REJ S-frame) are optional and only
serve to increase efficiency. Whenever a station receives a P/F bit, it may assume that any
frames that it sent before it last transmitted the P/F bit and not yet acknowledged will
never arrive, and so should be retransmitted.

When operating as a combined station, it is important to maintain the distinction between P and
F bits, because there may be two checkpoint cycles operating simultaneously. A P bit arriving in
a command from the remote station is not in response to our P bit; only an F bit arriving in a
response is.

N(R), the receive sequence number

Both I and S frames contain a receive sequence number N(R). N(R) provides a positive
acknowledgement for the receipt of I-frames from the other side of the link. Its value is always
the first frame not received; it acknowledges that all frames with N(S) values up to N(R)-1
(modulo 8 or modulo 128) have been received and indicates the N(S) of the next frame it expects
to receive.

N(R) operates the same way whether it is part of a command or response. A combined station
only has one sequence number space.
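The "acknowledges up to N(R)-1" rule can be illustrated with a tiny helper (a hypothetical function for illustration, not part of any HDLC stack):

```python
def newly_acked(last_nr: int, nr: int, modulus: int = 8):
    """Sequence numbers N(S) newly acknowledged when the received N(R)
    advances from last_nr to nr (modulo 8, or modulo 128 in extended mode)."""
    acked = []
    while last_nr != nr:
        acked.append(last_nr)
        last_nr = (last_nr + 1) % modulus
    return acked
```

So if the last N(R) seen was 6 and a frame arrives carrying N(R) = 1, frames 6, 7 and 0 are now acknowledged; the sequence number wraps around the modulus.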

I-Frames (user data)

Information frames, or I-frames, transport user data from the network layer. In addition they
also include flow and error control information piggybacked on data. The sub-fields in the
control field define these functions.

The least significant bit (first transmitted) defines the frame type. 0 means an I-frame.

N(S) defines the sequence number of the frame being sent. It is incremented for successive I-frames,
modulo 8 or modulo 128. Depending on the number of bits in the sequence number, up to 7 or
127 I-frames may be awaiting acknowledgment at any time. The P/F and N(R) fields operate as
described above. Except for the interpretation of the P/F field, there is no difference between a
command I frame and a response I frame; when P/F is 0, the two forms are exactly equivalent.

S-Frames (control)

Supervisory Frames, or S-frames, are used for flow and error control whenever piggybacking is
impossible or inappropriate, such as when a station does not have data to send. S-frames do not
have information fields. The S-frame control field includes a leading "10" indicating that it is an
S-frame. This is followed by a 2-bit type, a poll/final bit, and a sequence number. If 7-bit
sequence numbers are used, there is also a 4-bit padding field.

The first 2 bits mean it is an S-frame. All S frames include a P/F bit and a receive sequence
number as described above. Except for the interpretation of the P/F field, there is no difference
between a command S frame and a response S frame; when P/F is 0, the two forms are exactly
equivalent.

The 2-bit type field encodes the type of S frame.

Receive Ready (RR)

 Indicate that the sender is ready to receive more data (cancels the effect of a previous
RNR).
 Sent when a station needs to acknowledge received frames but has no I-frame of its own to
carry the acknowledgement.
 A primary station can send this with the P-bit set to solicit data from a secondary station.
 A secondary terminal can use this with the F-bit set to respond to a poll if it has no data to
send.

Receive Not Ready (RNR)

 Acknowledge some packets and request no more be sent until further notice.
 Can be used like RR with P bit set to solicit the status of a secondary station.
 Can be used like RR with F bit set to respond to a poll if the station is busy.

Reject (REJ)

 Requests immediate retransmission starting with N(R).


 Sent in response to an observed sequence number gap. After seeing I1/I2/I3/I5, send
REJ4.
 Optional to generate; a working implementation can use only RR.

Selective Reject (SREJ)

 Requests retransmission of only the single frame N(R).


 Not supported by all HDLC variants.
 Optional to generate; a working implementation can use only RR, or only RR and REJ.

U-Frames

Unnumbered frames, or U-frames, are used for link management, and can also be used to
transfer user data. They exchange session management and control information between
connected devices, and some U-frames contain an information field, used for system
management information or user data. The first 2 bits (11) mean it is a U-frame. The five type bits
(two before the P/F bit and three after it, in transmission order) allow 32 different types of U-frame:

 Mode settings (SNRM, SNRME, SARM, SARME, SABM, SABME, UA, DM, RIM,
SIM, RD, DISC)
 Information Transfer (UP, UI)
 Recovery (FRMR, RSET)
o Invalid Control Field

o Data Field Too Long

o Data field not allowed with received Frame Type

o Invalid Receive Count

 Miscellaneous (XID, TEST)

Link Configurations

Link configurations can be categorized as being either:

 Unbalanced, which consists of one primary terminal, and one or more secondary
terminals.
 Balanced, which consists of two peer terminals.

The three link configurations are:

 Normal Response Mode (NRM) is an unbalanced configuration in which only the primary
terminal may initiate data transfer. The secondary terminal transmits data only in
response to commands from the primary terminal. The primary terminal polls the
secondary terminal(s) to determine whether they have data to transmit, and then selects
one to transmit.
 Asynchronous Response Mode (ARM) is an unbalanced configuration in which secondary
terminals may transmit without permission from the primary terminal. However, the
primary terminal still retains responsibility for line initialization, error recovery, and
logical disconnect.
 Asynchronous Balanced Mode (ABM) is a balanced configuration in which either station
may initiate the transmission.

An additional link configuration is Disconnected mode. This is the mode that a secondary station
is in before it is initialized by the primary, or when it is explicitly disconnected. In this mode, the
secondary responds to almost every frame other than a mode set command with a "Disconnected
mode" response. The purpose of this mode is to allow the primary to reliably detect a secondary
being powered off or otherwise reset.
HDLC Command and Response Repertoire

 Commands: I, RR, RNR, (SNRM or SARM or SABM), DISC


 Responses: I, RR, RNR, UA, DM, FRMR

Basic Operations

 Initialization can be requested by either side by issuing one of the six mode-setting
commands. This command:
o Signals the other side that initialization is requested

o Specifies the mode: NRM, ABM, or ARM

o Specifies whether 3-bit or 7-bit sequence numbers are in use.

The HDLC module on the other end transmits a UA (unnumbered acknowledgement) frame when
the request is accepted. If the request is rejected, it sends a DM (disconnected mode) frame.

Functional Extensions (Options)

 For Switched Circuits


o Commands: ADD - XID

o Responses: ADD - XID, RD

 For 2-way Simultaneous commands & responses are ADD - REJ


 For Single Frame Retransmission commands & responses: ADD - SREJ
 For Information Commands & Responses: ADD - UI
 For Initialization
o Commands: ADD - SIM

o Responses: ADD - RIM

 For Group Polling


o Commands: ADD - UP

 Extended Addressing
 Delete Response I Frames
 Delete Command I Frames
 Extended Numbering
 For Mode Reset (ABM only) Commands are: ADD - RSET
 Data Link Test Commands & Responses are: ADD - TEST
 Request Disconnect. Responses are ADD - RD
 32-bit FCS

HDLC Command/Response Repertoire

Type             Name                     Cmd/Resp  Description                                 C-field format (7654 3210)

Information      I                        C/R       User data exchange                          N(R) P/F N(S) 0
Supervisory (S)  Receive Ready (RR)       C/R       Positive acknowledgement; ready to          N(R) P/F 0 0 0 1
                                                    receive I-frames
Supervisory (S)  Receive Not Ready (RNR)  C/R       Positive acknowledgement; not ready         N(R) P/F 0 1 0 1
                                                    to receive
Supervisory (S)  Reject (REJ)             C/R       Negative acknowledgement; retransmit        N(R) P/F 1 0 0 1
                                                    starting with N(R)
Supervisory (S)  Selective Reject (SREJ)  C/R       Negative acknowledgement; retransmit        N(R) P/F 1 1 0 1
                                                    only N(R)

The FRMR (frame reject) response carries an information field describing the rejected frame. The
first 1 or 2 bytes are a copy of the rejected control field, the next 1 or 2 contain the
current send and receive sequence numbers, and the following 4 or 5 bits indicate the reason for
the rejection.

The IBM Bisync Protocol


Bisync is an abbreviation for Binary Synchronous Communication (BSC), a data communication
protocol developed by IBM in 1967. Its primary purpose was to link System 360/370 processors
with the IBM 2780 and 3780 Remote Job Entry (RJE) terminals. Bisync is a Character-Oriented
Protocol (COP) designed for use over synchronous transmission facilities. Efficiency is gained
since no Start and Stop bits are used, as is the case with asynchronous facilities.

Bisync Control Characters


IBM Bisync uses certain control codes as part of its line protocol. These are summarized below:
 SYN

Synchronization character. Hexadecimal "16" in ASCII mode, hexadecimal "32" in
EBCDIC. May be used to "sync fill" in the middle of a block (or DLE SYN for transparent
blocks).
 SOH

Start Of Header. Defines the beginning of a block containing application control
information, such as addresses, message numbers, etc.

 STX

Start of Text. Identifies the end of a Header block and the beginning of a block of text.

 ETB

End of Transmission Block. Terminates SOH and/or STX blocks. A BCC character
always follows the ETB. ETB also requires a response from the remote end.

 ETX

End of Text. Terminates SOH and/or STX blocks. A BCC character
always follows an ETX. ETX requires a response from the remote end.

 EOT

End of Transmission. Causes receiving station to reset to Control mode. It is also the
response to polls when the transmitter has nothing to send. The transmitter may also send
an EOT as an abort signal if it can no longer transmit.

 ENQ

Enquiry. Used to request retransmission of the last response when it was garbled or lost.
ENQ also indicates the end of a polling or select sequence on multipoint circuits, and is
used to bid for the line on point-to-point circuits.
 ACK0

This is a two character sequence used to acknowledge line bids on point-to-point circuits,
or the response to station selection on multipoint circuits. ACK0 and ACK1 will alternate
in positive acknowledgements to blocks of received data. Receipt of the wrong
(unanticipated) ACK sequence indicates a protocol error.

 ACK1

This is a two-character sequence used in positive acknowledgements of received blocks
of data. ACK0 and ACK1 alternate in consecutive, positive acknowledgements of data.

 WACK

Wait-before-transmit Acknowledgement. This two-byte sequence indicates to the
transmitter a temporary problem with receiving data. The normal response to a WACK is
for the transmitter to send an ENQ, EOT, or DLE EOT.
 NAK

Negative Acknowledgement. Indicates that the last received block was in error. Also used
to indicate not ready conditions in point-to-point line bids, or multipoint station selection.

 DLE

Data Link Escape. Used as part of control sequences or to escape control characters (to
take control characters literally) when in transparent text mode.

 RVI

Reverse Interrupt. The receiver transmits this sequence to alert the transmitter that it has a
high priority message to send.

 TTD

Temporary Text Delay. The transmitter sends this two-byte code when it wishes to keep
the session active, but is not ready to send immediately. It is sent every two seconds to
avoid the 3 second receiver abort. The receiver response to a TTD is a NAK.

 DLE EOT

Switched Line Disconnect. This two-byte sequence indicates to the receiver that the
transmitter will be disconnecting the line (hangup).
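To illustrate transparent-text mode, here is a minimal sketch of DLE transparency framing. It assumes ASCII control values (EBCDIC uses different code points) and omits the leading SYN characters and the trailing BCC, so it is an illustration of the escaping rule only:

```python
# ASCII control values assumed; EBCDIC uses different code points.
DLE, STX, ETX = 0x10, 0x02, 0x03

def transparent_block(payload: bytes) -> bytes:
    """Frame one transparent-mode text block as DLE STX ... DLE ETX.
    Any DLE inside the payload is doubled (DLE DLE) so the receiver
    takes it literally rather than as the start of a control sequence."""
    body = payload.replace(bytes([DLE]), bytes([DLE, DLE]))
    return bytes([DLE, STX]) + body + bytes([DLE, ETX])
```

This is the character-oriented analogue of HDLC byte stuffing: arbitrary binary data can cross the link because every control character inside the text is escaped.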

The Media Access Control (MAC)


This data communication protocol sub-layer, also known as the Medium Access Control, is a
sublayer of the Data Link Layer specified in the seven-layer OSI model (layer 2). It provides
addressing and channel access control mechanisms that make it possible for several terminals or
network nodes to communicate within a multi-point network, typically a local area network
(LAN) or metropolitan area network (MAN). The hardware that implements the MAC is referred
to as a Medium Access Controller.The MAC sub-layer acts as an interface between the Logical
Link Control (LLC) sublayer and the network's physical layer. The MAC layer emulates a full-
duplex logical communication channel in a multi-point network. This channel may provide
unicast, multicast or broadcast communication service.

Addressing mechanism
The MAC layer addressing mechanism is called physical address or MAC address. A MAC
address is a unique serial number. Once a MAC address has been assigned to a particular piece
of network hardware (at time of manufacture), that device should be uniquely identifiable
amongst all other network devices in the world. This guarantees that each device in a network
will have a different MAC address (analogous to a street address). This makes it possible for data
packets to be delivered to a destination within a subnetwork, i.e. a physical network consisting of
several network segments interconnected by repeaters, hubs, bridges and switches, but not by IP
routers. An IP router may interconnect several subnets.

A MAC layer is not required in full-duplex point-to-point communication, but address fields are
included in some point-to-point protocols for compatibility reasons.
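The structure of a MAC address, a 3-byte organizationally unique identifier (OUI) assigned to the vendor followed by a 3-byte part assigned by the vendor, can be shown with a small illustrative helper (the function and field names are ours):

```python
def mac_info(mac: str) -> dict:
    """Split a colon-separated MAC address into its OUI (vendor) half and
    NIC-specific half, and decode the two flag bits of the first octet."""
    octets = [int(part, 16) for part in mac.split(":")]
    if len(octets) != 6:
        raise ValueError("a MAC address has exactly six octets")
    return {
        "oui": ":".join(f"{o:02x}" for o in octets[:3]),   # assigned to the vendor
        "nic": ":".join(f"{o:02x}" for o in octets[3:]),   # assigned by the vendor
        "multicast": bool(octets[0] & 0x01),               # I/G (group) bit
        "locally_administered": bool(octets[0] & 0x02),    # U/L bit
    }
```

A universally administered unicast address has both flag bits clear; this division of the address space is what lets manufacturers assign globally unique serial numbers.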

Channel access control mechanism


The channel access control mechanisms provided by the MAC layer are also known as a multiple
access protocol. This makes it possible for several stations connected to the same physical
medium to share it. Examples of shared physical media are bus networks, ring networks, hub
networks, wireless networks and half-duplex point-to-point links. The multiple access protocol
may detect or avoid data packet collisions if a packet mode contention based channel access
method is used, or reserve resources to establish a logical channel if a circuit switched or
channelization based channel access method is used. The channel access control mechanism
relies on a physical layer multiplex scheme. The most widespread multiple access protocol is the
contention based CSMA/CD protocol used in Ethernet networks. This mechanism is only
utilized within a network collision domain, for example an Ethernet bus network or a hub
network. An Ethernet network may be divided into several collision domains, interconnected by
bridges and switches.

A multiple access protocol is not required in a switched full-duplex network, such as today's
switched Ethernet networks, but is often available in the equipment for compatibility reasons.

Common multiple access protocols


Examples of common packet mode multiple access protocols for wired multi-drop networks are:

 CSMA/CD (used in Ethernet and IEEE 802.3)


 Token bus (IEEE 802.4)
 Token ring (IEEE 802.5)
 Token passing (used in FDDI)

Examples of common multiple access protocols that may be used in packet radio wireless
networks are:

 CSMA/CA (used in IEEE 802.11/WiFi WLANs)


 Slotted ALOHA
 Dynamic TDMA
 Reservation ALOHA (R-ALOHA)
 CDMA
 OFDMA

Token ring
This local area network (LAN) technology is a local area network protocol which resides at the
data link layer (DLL) of the OSI model. It uses a special three-byte frame called a token that
travels around the ring. Token ring frames travel completely around the loop. Token ring LAN
speeds of 4 Mbit/s and 16 Mbit/s were standardized by the IEEE 802.5 working group. An
increase to 100 Mbit/s was standardized and marketed during the wane of token ring's existence
while a 1000 Mbit/s speed was actually approved in 2001, but no products were ever brought to
market.[1]

Token frame
When no station is transmitting a data frame, a special token frame circles the loop. This special
token frame is repeated from station to station until arriving at a station that needs to transmit
data. When a station needs to transmit data, it converts the token frame into a data frame for
transmission. Once the sending station receives its own data frame, it converts the frame back
into a token. If a transmission error occurs and no token frame, or more than one, is present, a
special station referred to as the Active Monitor detects the problem and removes and/or reinserts
tokens as necessary (see Active and standby monitors). On 4 Mbit/s Token Ring, only one token
may circulate; on 16 Mbit/s Token Ring, there may be multiple tokens.

The special token frame consists of three bytes as described below (J and K are special non-data
characters, referred to as code violations).

Token priority
Token ring specifies an optional medium access scheme allowing a station with a high-priority
transmission to request priority access to the token. Eight priority levels, 0 through 7, are used.
When the station wishing to transmit receives a token or data frame with a priority less than or
equal to the station's requested priority, it sets the reservation bits to its desired priority. The
station does not immediately transmit; the token circulates around the medium until it returns to
the station. Upon sending and receiving its own data frame, the station downgrades the token
priority back to the original priority.

Token ring frame format


A data token ring frame is an expanded version of the token frame that is used by stations to
transmit media access control (MAC) management frames or data frames from upper layer
protocols and applications.

Token Ring and IEEE 802.5 support two basic frame types: tokens and data/command frames.
Tokens are 3 bytes in length and consist of a start delimiter, an access control byte, and an end
delimiter. Data/command frames vary in size, depending on the size of the Information field.
Data frames carry information for upper-layer protocols, while command frames contain control
information and have no data for upper-layer protocols. Token ring stations can be attached to the
physical ring over structured cabling such as CAT5e UTP, using equipment comparable to 100Base-TX hardware.

Data/Command Frame

SD AC FC DA SA PDU from LLC (IEEE 802.2) CRC ED FS

8 bits 8 bits 8 bits 48 bits 48 bits up to 18200x8 bits 32 bits 8 bits 8 bits

Token Frame

SD AC ED

8 bits 8 bits 8 bits

Abort Frame

SD ED

8 bits 8 bits

Starting Delimiter  consists of a special bit pattern denoting the beginning of the frame. The bits
from most significant to least significant are J,K,0,J,K,0,0,0. J and K are code violations. Since
Manchester encoding is self clocking, and has a transition for every encoded bit 0 or 1, the J and K
codings violate this, and will be detected by the hardware.

J K 0 J K 0 0 0

1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit

Access Control 
this byte field consists of the following bits, from most significant to least significant:
P,P,P,T,M,R,R,R. The P bits are priority bits, T is the token bit (0 in a token, 1 in a data frame),
M is the monitor bit, set by the Active Monitor (AM) station when it sees the frame pass, and the
R bits are reservation bits.

Bits 0-2    Bit 3    Bit 4     Bits 5-7
Priority    Token    Monitor   Reservation
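Decoding the access control byte can be sketched as follows (assuming the IEEE 802.5 convention that T = 0 marks a token and T = 1 a data frame; an illustrative helper whose field names are ours):

```python
def parse_access_control(ac: int) -> dict:
    """Decode the token ring Access Control byte, laid out
    P P P T M R R R from most to least significant bit."""
    return {
        "priority":    (ac >> 5) & 0x07,   # P bits
        "token_frame": not (ac & 0x10),    # T bit: 0 = token, 1 = data frame
        "monitor":     bool(ac & 0x08),    # M bit, set by the Active Monitor
        "reservation": ac & 0x07,          # R bits
    }
```

For example, 0b01101001 decodes as priority 3, token, monitor bit set, reservation 1.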

Frame Control 
is a one byte field that contains bits describing the data portion of the frame contents.Indicates
whether the frame contains data or control information. In control frames, this byte specifies
the type of control information.

Bits 0-2      Bits 3-7
Frame type    Control bits

Frame type: 01 indicates an LLC frame, IEEE 802.2 (data), and the control bits are ignored;
00 indicates a MAC frame, and the control bits indicate the type of MAC control frame.

Destination address  - a six-byte field used to specify the physical address(es) of the destination(s).

Source address  - contains the physical address of the sending station. It is a six-byte field holding
either the locally assigned address (LAA) or universally assigned address (UAA) of the sending
station's adapter.

Data  - a variable-length field of 0 or more bytes containing MAC management data or upper-layer
information. The maximum allowable size depends on ring speed (about 4500 bytes on a
4 Mbit/s ring, more on faster rings).

Frame Check Sequence  -a four byte field used to store the calculation of a CRC for frame
integrity verification by the receiver.

Ending Delimiter  -The counterpart to the starting delimiter, this field marks the end of the frame
and consists of the following bits from most significant to least significant: J,K,1,J,K,1,I,E. I is
the intermediate frame bit and E is the error bit.

J K 1 J K 1 I E
1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit

Frame Status   -a one byte field used as a primitive acknowledgement scheme on whether the frame
was recognized and copied by its intended receiver.

A C 0 0 A C 0 0

1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit 1 bit

A = 1 , Address recognized C = 1 , Frame copied

Abort Frame  -Used to abort transmission by the sending station

Active and standby monitors


Every station in a token ring network is either an active monitor (AM) or standby monitor (SM)
station. However, there can be only one active monitor on a ring at a time. The active monitor is
chosen through an election or monitor contention process.

The monitor contention process is initiated when

 a loss of signal on the ring is detected.


 an active monitor station is not detected by other stations on the ring.
 a particular timer on an end station expires, such as when a station hasn't seen a token
frame in the past 7 seconds.

When any of the above conditions take place and a station decides that a new monitor is needed,
it will transmit a "claim token" frame, announcing that it wants to become the new monitor. If
that token returns back to the sender, it is OK for it to become the monitor. If some other station
tries to become the monitor at the same time then the station with the highest MAC address will
win the election process. Every other station becomes a standby monitor. All stations must be
capable of becoming an active monitor station if necessary.

The active monitor performs a number of ring administration functions. The first function is to
operate as the master clock for the ring in order to provide synchronization of the signal for
stations on the wire. Another function of the AM is to insert a 24-bit delay into the ring, to
ensure that there is always sufficient buffering in the ring for the token to circulate. A third
function for the AM is to ensure that exactly one token circulates whenever there is no frame
being transmitted, and to detect a broken ring. Lastly, the AM is responsible for removing
circulating frames from the ring.
Token ring insertion process
Token ring stations must go through a 5-phase ring insertion process before being allowed to
participate in the ring network. If any of these phases fail, the token ring station will not insert
into the ring and the token ring driver may report an error.

 Phase 0 (Lobe Check) — A station first performs a lobe media check. A station is
wrapped at the MSAU and is able to send 2000 test frames down its transmit pair which
will loop back to its receive pair. The station checks to ensure it can receive these frames
without error.
 Phase 1 (Physical Insertion) — A station then sends a 5 volt signal to the MSAU to open
the relay.
 Phase 2 (Address Verification) — A station then transmits MAC frames with its own
MAC address in the destination address field of a token ring frame. When the frame
returns, and if the address was copied, the station participates in the periodic (every 7
seconds) ring poll process. This is where stations identify themselves on the network as
part of the MAC management functions.
 Phase 3 (Participation in ring poll) — A station learns the address of its Nearest Active
Upstream Neighbour (NAUN) and makes its address known to its nearest downstream
neighbour, leading to the creation of the ring map. Station waits until it receives an AMP
or SMP frame with the ARI and FCI bits set to 0. When it does, the station flips both bits
(ARI and FCI) to 1, if enough resources are available, and queues an SMP frame for
transmission. If no such frames are received within 18 seconds, then the station reports a
failure to open and de-inserts from the ring. If the station successfully participates in a
ring poll, it proceeds into the final phase of insertion, request initialization.
 Phase 4 (Request Initialization) — Finally a station sends out a special request to a
parameter server to obtain configuration information. This frame is sent to a special
functional address, typically a token ring bridge, which may hold timer and ring number
information with which to configure the new station.

Carrier Sense Multiple Access with Collision Detection (CSMA/CD)


In computer networking, CSMA/CD is a network access method in which:

 a carrier sensing scheme is used.


 a transmitting data station that detects another signal while transmitting a frame, stops
transmitting that frame, transmits a jam signal, and then waits for a random time interval
(known as "backoff delay" and determined using the truncated binary exponential backoff
algorithm) before trying to send that frame again.
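The truncated binary exponential backoff mentioned above can be sketched in a few lines (delay expressed in slot-time units; IEEE 802.3 caps the exponent at 10 and abandons the frame after 16 attempts, and the attempt limit is not modelled here):

```python
import random

def backoff_slots(attempt: int, max_exponent: int = 10) -> int:
    """Truncated binary exponential backoff: after the n-th consecutive
    collision, wait a uniformly random number of slot times in the
    range [0, 2**min(n, max_exponent) - 1]."""
    k = min(attempt, max_exponent)
    return random.randrange(2 ** k)
```

After the first collision a station waits 0 or 1 slot times; after the second, 0 to 3; the range doubles with each further collision until it is frozen at 0 to 1023 slots.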

CSMA/CD is a modification of pure Carrier Sense Multiple Access (CSMA).


Collision detection is used to improve CSMA performance by terminating transmission as soon
as a collision is detected, reducing the probability of a second collision on retry. Methods for
collision detection are media dependent, but on an electrical bus such as Ethernet, collisions can
be detected by comparing transmitted data with received data. If they differ, another transmitter
is overlaying the first transmitter's signal (a collision), and transmission terminates immediately.
A jam signal is sent which will cause all transmitters to back off by random intervals, reducing
the probability of a collision when the first retry is attempted. CSMA/CD is a layer 2 access
method in the OSI model, not a complete protocol.

The Jam signal is used in CSMA/CD and not in CSMA/CA

Even when it has nothing to transmit, the CSMA/CD MAC sublayer monitors the physical
medium for traffic by watching the carrierSense signal provided by the PLS (Physical Layer
Signaling to the MAC layer). Whenever the medium is busy, the CSMA/CD MAC defers to the
passing frame by delaying any pending transmission of its own. After the last bit of the passing
frame (that is, when carrierSense changes from true to false), the CSMA/CD MAC may proceed
with its own pending transmission.

Collisions are detected by monitoring the collisionDetect signal provided by the Physical Layer.
When a collision is detected during a frame transmission, the transmission is not terminated
immediately. Instead, the transmission continues until additional bits specified by jamSize have
been transmitted (counting from the time collisionDetect went on). This collision enforcement,
or jam, guarantees that the duration of the collision is sufficient to ensure its detection by all
transmitting stations on the network.

It should be emphasized that the description of the MAC layer in a computer language is in no
way intended to imply that the procedures must be implemented as a program executed by a
computer. The implementation may consist of any appropriate technology, including hardware,
firmware, software, or any combination. For example, a NIC (Network Interface Card) may
contain hardware for a complete implementation of the Physical and MAC layers; it then takes
layer three packets from the operating system and performs the rest of the activity described
above on its own, using its own hardware. In another scenario, the NIC can be a dumb device
that leaves the MAC layer intelligence to the operating system; here the NIC just delivers the
proper signals to the operating system, which performs all the intelligent functions of the MAC
layer. Reference: [IEEE Std 802.3-2002 (Revision of IEEE Std 802.3, 2000 Edition), Part 3].

Ethernet is the classic CSMA/CD access method. However, CSMA/CD is no longer used in the
10 Gigabit Ethernet specifications, because switches have replaced all hubs and repeaters.
Similarly, while CSMA/CD operation (half duplex) is defined in the Gigabit Ethernet
specifications, few implementations support it and in practice it is nonexistent. Also, in full
duplex Ethernet, collisions are impossible since data is transmitted and received on different
wires and each segment is connected directly to a switch. Therefore, CSMA/CD is not used on
full duplex Ethernet networks.

TCP/IP
TCP provides considerably more facilities for applications than UDP; specifically, it includes
error recovery, flow control, and reliability. TCP is a connection-oriented protocol, unlike UDP,
which is connectionless. Most of the user application protocols, such as Telnet and FTP, use TCP.
InterProcess Communication or IPC
UNIT II INTERNET WORKING
Bridges –Routers – Gateways – Open system with bridge configuration –
Open system with gateway configuration – Standard ETHERNET and
ARCNET configuration – Special requirement for networks used for
control.

Bridges:
Bridging is a forwarding technique used in packet-switched computer networks.
Unlike routing, bridging makes no assumptions about where in a network a
particular address is located. Instead, it depends on flooding and examination of
source addresses in received packet headers to locate unknown devices. Once a
device has been located, its location is recorded in a table where the MAC address
is stored so as to preclude the need for further broadcasting. The utility of bridging
is limited by its dependence on flooding, and is thus only used in local area
networks.

Bridging generally refers to Transparent bridging or Learning bridge operation,
which predominates in Ethernet. Another form of bridging, Source route bridging,
was developed for token ring networks.

A Network Bridge connects multiple network segments at the data link layer
(Layer 2) of the OSI model. In Ethernet networks, the term Bridge formally means
a device that behaves according to the IEEE 802.1D standard. A bridge and switch
are very much alike; a switch being a bridge with numerous ports. Switch or Layer
2 switch is often used interchangeably with Bridge.

Transparent bridging operation

A bridge uses a forwarding database to send frames across network segments. The
forwarding database is initially empty, and entries in the database are built as the
bridge receives frames. If an address entry is not found in the forwarding database,
the frame is flooded to all other ports of the bridge, i.e., forwarded to all
segments except the one on which it arrived. By means of these flooded frames,
the destination host will respond, and a forwarding database entry will be created.

Consider three hosts, A, B and C, and a bridge. The bridge has three ports. A is
connected to bridge port 1, B is connected to bridge port 2, and C is connected to
bridge port 3. A sends a frame addressed to B to the bridge. The bridge examines the
source address of the frame and creates an address and port number entry for A in
its forwarding table. The bridge examines the destination address of the frame and
does not find it in its forwarding table so it floods it to all other ports: 2 and 3. The
frame is received by hosts B and C. Host C examines the destination address and
ignores the frame. Host B recognizes a destination address match and generates a
response to A. On the return path, the bridge adds an address and port number
entry for B to its forwarding table. The bridge already has A's address in its
forwarding table so it forwards the response only to port 1. Host C or any other
hosts on port 3 are not burdened with the response. Two-way communication is
now possible between A and B without any further flooding.
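The walkthrough above amounts to a small learning algorithm: learn the source address on every frame, then forward if the destination is known, otherwise flood. A minimal sketch in Python (the `Bridge` class and its method names are illustrative, not any real 802.1D API):

```python
class Bridge:
    """Toy learning bridge with a MAC-address -> port forwarding table."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.table = {}            # MAC address -> port number

    def receive(self, src, dst, in_port):
        """Learn the source, then forward, filter, or flood.

        Returns the list of output ports the frame is sent to.
        """
        self.table[src] = in_port                  # learning step
        if dst in self.table:
            out = self.table[dst]
            return [] if out == in_port else [out]  # filter or forward
        # unknown destination: flood to every port except the arrival port
        return [p for p in range(1, self.num_ports + 1) if p != in_port]

bridge = Bridge(3)
print(bridge.receive("A", "B", 1))   # B unknown -> flood to ports [2, 3]
print(bridge.receive("B", "A", 2))   # A already learned -> forward to [1]
print(bridge.receive("A", "B", 1))   # B now learned -> forward to [2]
```

After the first exchange, hosts on port 3 are never burdened with A-to-B traffic, which is exactly the behavior described in the example.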

Filtering database

To translate between two segments, a bridge reads a frame's destination MAC
address and decides to either forward or filter. If the bridge determines that the
destination node is on another segment of the network, it forwards (retransmits)
the frame to that segment. If the destination address belongs to the same segment
as the source address, the bridge filters (discards) the frame. As nodes transmit data
through the bridge, the bridge establishes a filtering database (also known as a
forwarding table) of known MAC addresses and their locations on the network.
The bridge uses its filtering database to determine whether a packet should be
forwarded or filtered.

Advantages of network bridges

 Self-configuring
 Simple bridges are inexpensive
 Isolate collision domain
 Reduce the size of collision domain by microsegmentation in non-switched
networks
 Transparent to protocols above the MAC layer
 Allows the introduction of management/performance information and access
control
 LANs interconnected are separate, and physical constraints such as number
of stations, repeaters and segment length don't apply
 Helps minimize bandwidth usage
Disadvantages of network bridges

 Does not limit the scope of broadcasts [broadcast domain cannot be controlled]
 Does not scale to extremely large networks
 Buffering and processing introduces delays
 Bridges are more expensive than repeaters or hubs
 A complex network topology can pose a problem for transparent bridges.
For example, multiple paths between transparent bridges and LANs can
result in bridge loops. The spanning tree protocol helps to reduce problems
with complex topologies.

Bridging versus routing

Bridging and routing are both ways of performing data control, but work through
different methods. Bridging takes place at OSI Model Layer 2 (data-link layer)
while routing takes place at the OSI Model Layer 3 (network layer). This
difference means that a bridge directs frames according to hardware assigned MAC
addresses while a router makes its decisions according to arbitrarily assigned IP
Addresses. As a result of this, bridges are not concerned with and are unable to
distinguish networks while routers can.

When designing a network, one can choose to put multiple segments into one
bridged network or to divide it into different networks interconnected by routers. If
a host is physically moved from one network area to another in a routed network, it
has to get a new IP address; if this system is moved within a bridged network, it
doesn't have to reconfigure anything.

Router
A router is used to forward data among computer networks beyond directly connected
devices. (The directly connected devices are said to be in a LAN, where data are
forwarded using network switches.) A router is a networking device whose software
and hardware, in combination, are customized to the tasks of routing and
forwarding information. A router differs from an ordinary computer in that it needs
special hardware, called interface cards, to connect to remote devices through
either copper cables or Optical fiber cable. These interface cards are in fact small
computers that are specialized to convert electric signals from one form to another,
with embedded CPU or ASIC, or both. In the case of optical fiber, the interface
cards (also called ports) convert between optical signals and electrical signals.

Routers connect two or more logical subnets, which do not share a common
network address. The subnets in the router do not necessarily map one-to-one to
the physical interfaces of the router. The term "layer 3 switching" is used often
interchangeably with the term "routing". The term switching is generally used to
refer to data forwarding between two network devices that share a common
network address. This is also called layer 2 switching or LAN switching.

Conceptually, a router operates in two operational planes (or sub-systems):

 Control plane: where a router builds a table (called the routing table) that
determines how a packet should be forwarded, and through which interface, using
either statically configured statements (called static routes) or information
exchanged with other routers in the network through a dynamic routing
protocol;
 Forwarding plane: where the router actually forwards traffic (called packets
in IP) from ingress (incoming) interfaces to an egress (outgoing) interface
that is appropriate for the destination address that the packet carries with it,
by following rules derived from the routing table that has been built in the
control plane.
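The forwarding plane's per-packet work is, at its core, a longest-prefix match against the routes the control plane installed. A hedged Python sketch using the standard `ipaddress` module; the table format and interface names here are invented for illustration, and a real router would use a specialized data structure (e.g. a trie) rather than a linear scan:

```python
import ipaddress

def lookup(routing_table, destination):
    """Return the egress interface for the most specific matching route.

    routing_table: list of (prefix, interface) pairs, as built by the
    control plane; the forwarding plane just performs the match.
    """
    dest = ipaddress.ip_address(destination)
    best = None
    for prefix, interface in routing_table:
        net = ipaddress.ip_network(prefix)
        if dest in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, interface)
    return best[1] if best else None

table = [
    ("0.0.0.0/0", "eth0"),        # default route
    ("10.0.0.0/8", "eth1"),
    ("10.1.0.0/16", "eth2"),
]
print(lookup(table, "10.1.2.3"))   # -> eth2 (most specific match wins)
print(lookup(table, "8.8.8.8"))    # -> eth0 (falls through to the default)
```

The split in the text maps directly onto this sketch: building `table` is control-plane work, while `lookup` is the forwarding-plane work done for every packet.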

Forwarding plane

For the pure Internet Protocol (IP) forwarding function, a router is designed to
minimize the state information on individual packets. A router does not look into
the actual data contents that the packet carries, but only at the layer 3 addresses to
make a forwarding decision, plus optionally other information in the header, for
example as a hint for QoS. Once a packet is forwarded, the router does not retain
any historical information about the packet, but the forwarding action can be
collected into the statistical data, if so configured.

Forwarding decisions can involve decisions at layers other than the IP internetwork
layer or OSI layer 3. A function that forwards based on data link layer, or OSI
layer 2, information, is properly called a bridge or switch. This function is referred
to as layer 2 switching, as the addresses it uses to forward the traffic are layer 2
addresses in the OSI layer model. Besides deciding to which interface a
packet is forwarded, which is handled primarily via the routing table, a router
also has to manage congestion when packets arrive at a rate higher than the router
can process. Three policies commonly used in the Internet are Tail drop, Random
early detection, and Weighted random early detection. Tail drop is the simplest and
most easily implemented; the router simply drops packets once the length of the
queue exceeds the size of the buffers in the router. Random early detection (RED)
probabilistically drops datagrams early when the queue is about to exceed a pre-
configured size of the queue. Weighted random early detection requires a weight
on the average queue size to act upon when the traffic is about to exceed the pre-
configured size, so that short bursts will not trigger random drops.
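The RED policy described above can be sketched as a simple drop decision. This is a simplified illustration: real RED maintains a weighted moving average of the queue size and a count of packets since the last drop, both omitted here; the parameter names are conventional but chosen for clarity.

```python
import random

def red_should_drop(avg_queue, min_th, max_th, max_p):
    """Random Early Detection drop decision (simplified sketch).

    Below min_th, never drop; at or above max_th, always drop; in
    between, drop with probability rising linearly from 0 to max_p.
    """
    if avg_queue < min_th:
        return False
    if avg_queue >= max_th:
        return True
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

# Tail drop, by contrast, is just: drop if queue_length >= buffer_size.
```

The linear ramp is what lets RED signal congestion early and probabilistically, instead of dropping a burst of consecutive packets the way tail drop does.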

Another function a router performs is to decide which packet should be processed
first when multiple queues exist. This is managed through QoS (Quality of
Service), which is critical when VoIP (Voice over IP) is deployed, so that delays
between packets do not exceed 150 ms, to maintain the quality of voice
conversations.

Yet another function a router may perform is "policy-based routing", where special
rules are constructed to override the rules derived from the routing table when a
packet forwarding decision is made.

These functions may or may not be performed through the same internal paths that
the packets travel inside the router. Some of the functions may be performed
through an ASIC to avoid overhead caused by multiple CPU cycles, and others
may have to be performed through the CPU as these packets need special attention
that cannot be handled by an ASIC.

Types of routers

Routers may provide connectivity inside enterprises, between enterprises and the
Internet, and inside Internet Service Providers (ISPs). The largest routers (for
example the Cisco CRS-1 or Juniper T1600) interconnect ISPs, are used inside
ISPs, or may be used in very large enterprise networks. The smallest routers
provide connectivity for small and home offices.

Routers for Internet connectivity and internal use

Routers intended for ISP and major enterprise connectivity almost invariably
exchange routing information using the Border Gateway Protocol (BGP).
 Edge Router: An ER is placed at the edge of an ISP network. The router speaks
external BGP (EBGP) to a BGP speaker in another provider or large enterprise
Autonomous System (AS). This type of router is also called a PE (Provider Edge)
router.

 Subscriber Edge Router: An SER is located at the edge of the subscriber's
network; it speaks EBGP to its provider's AS(s). It belongs to an end user
(enterprise) organization. This type of router is also called a CE (Customer
Edge) router.
 Inter-provider Border Router: Interconnecting ISPs, this is a BGP
speaking router that maintains BGP sessions with other BGP speaking
routers in other providers' ASes.
 Core router: A Core router is one that resides within an AS as back bone to
carry traffic between edge routers.

Within an ISP: Internal to the provider's AS, such a router speaks internal
BGP (IBGP) to that provider's edge routers, other intra-provider core
routers, or the provider's inter-provider border routers.
"Internet backbone:" The Internet does not have a clearly identifiable
backbone, as did its predecessors. See default-free zone (DFZ).
Nevertheless, it is the major ISPs' routers that make up what many would
consider the core. These ISPs operate all four types of the BGP-speaking
routers described here. In ISP usage, a "core" router is internal to an ISP, and
used to interconnect its edge and border routers. Core routers may also have
specialized functions in virtual private networks based on a combination of
BGP and Multi-Protocol Label Switching (MPLS).[4]

Access routers, including SOHO, are located at customer sites such as branch
offices that do not need hierarchical routing of their own. Typically, they are
optimized for low cost.

Distribution routers aggregate traffic from multiple access routers, either at the
same site, or to collect the data streams from multiple sites to a major enterprise
location. Distribution routers often are responsible for enforcing quality of service
across a WAN, so they may have considerable memory, multiple WAN interfaces,
and substantial processing intelligence.

They may also provide connectivity to groups of servers or to external networks. In
the latter application, the router's functionality must be carefully considered as part
of the overall security architecture. Separate from the router may be a firewall or
VPN concentrator, or the router may include these and other security functions.

When an enterprise is primarily on one campus, there may not be a distinct
distribution tier, other than perhaps off-campus access. In such cases, the access
routers, connected to LANs, interconnect via core routers.

In enterprises, a core router may provide a "collapsed backbone" interconnecting
the distribution tier routers from multiple buildings of a campus, or large enterprise
locations. They tend to be optimized for high bandwidth.

When an enterprise is widely distributed with no central location(s), the function of
core routing may be subsumed by the WAN service to which the enterprise
subscribes, and the distribution routers become the highest tier.

A network gateway or protocol converter

A gateway is an internetworking system capable of joining together two networks
that use different base protocols. A network gateway can be implemented
completely in software, completely in hardware, or as a combination of both.
Depending on the types of protocols they support, network gateways can operate at
any level of the OSI model.

Default Gateway

In computer networking, a default gateway is the device that passes traffic from
the local subnet to devices on other subnets. The default gateway often connects a
local network to the Internet, although internal Gateways for local networks also
exist.

Internet default Gateways are typically one of two types:

 On home or small business networks with a broadband router to share the
Internet connection, the home router serves as the default gateway.

 On home or small business networks without a router, such as for residences
with dialup Internet access, a router at the Internet Service Provider location
serves as the default gateway.
Default network Gateways can also be configured using an ordinary computer
instead of a router. These Gateways use two network adapters, one connected to
the local subnet and one to the outside network. Either routers or gateway
computers can be used to network local subnets such as those in larger businesses.
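A host's use of its default gateway comes down to one test per packet: is the destination on the local subnet? A small Python sketch of that decision, using the standard `ipaddress` module (the helper function and the addresses are illustrative only, not a real OS routing API):

```python
import ipaddress

def next_hop(local_network, default_gateway, destination):
    """Decide where a host sends a packet: directly on the local
    subnet, or to the default gateway for delivery to other subnets."""
    dest = ipaddress.ip_address(destination)
    if dest in ipaddress.ip_network(local_network):
        return destination            # deliver directly on the local subnet
    return default_gateway            # hand off to the default gateway

print(next_hop("192.168.1.0/24", "192.168.1.1", "192.168.1.42"))
# -> 192.168.1.42 (same subnet: direct delivery)
print(next_hop("192.168.1.0/24", "192.168.1.1", "93.184.216.34"))
# -> 192.168.1.1 (other subnet: via the default gateway)
```

This is why a host needs both its own subnet mask and a default gateway address configured: the mask decides which case applies, and the gateway handles everything that falls outside it.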

In telecommunications, the term gateway has the following meaning:

 In a communications network, a network node equipped for interfacing
with another network that uses different protocols.
o A gateway may contain devices such as protocol translators,
impedance matching devices, rate converters, fault isolators, or signal
translators as necessary to provide system interoperability. It also
requires the establishment of mutually acceptable administrative
procedures between both networks.
o A protocol translation/mapping gateway interconnects networks with
different network protocol technologies by performing the required
protocol conversions.
 Loosely, a computer configured to perform the tasks of a gateway. For a
specific case, see default gateway.

Gateways, also called protocol converters, can operate at any layer of the OSI model.

Gateways can work at all seven layers of the OSI architecture. The main job of a gateway
is to convert protocols among communications networks. A router by itself
transfers, accepts and relays packets only across networks using similar protocols.
A gateway on the other hand can accept a packet formatted for one protocol (e.g.
AppleTalk) and convert it to a packet formatted for another protocol (e.g. TCP/IP)
before forwarding it. A gateway can be implemented in hardware, software or
both, but they are usually implemented by software installed within a router. A
gateway must understand the protocols used by each network linked into the
router. Gateways are slower than bridges, switches and (non-gateway) routers.

A gateway is a network point that acts as an entrance to another network. On the
Internet, a node or stopping point can be either a gateway node or a host (end-
point) node. Both the computers of Internet users and the computers that serve
pages to users are host nodes, while the nodes that connect the networks in
between are gateways. For example, the computers that control traffic between
company networks or the computers used by internet service providers (ISPs) to
connect users to the internet are gateway nodes.

A gateway is an essential feature of most routers, although other devices (such as
any PC or server) can function as a gateway.

A proxy server has many potential purposes, including:

 To keep machines behind it anonymous (mainly for security).[1]
 To speed up access to resources (using caching). Web proxies are commonly
used to cache web pages from a web server.[2]
 To apply access policy to network services or content, e.g. to block
undesired sites.
 To log / audit usage, i.e. to provide company employee Internet usage
reporting.
 To bypass security/ parental controls.
 To scan transmitted content for malware before delivery.
 To scan outbound content, e.g., for data leak protection.
 To circumvent regional restrictions.

A proxy server that passes requests and replies unmodified is usually called a
gateway or sometimes tunneling proxy.A proxy server can be placed in the user's
local computer or at various points between the user and the destination servers on
the Internet.

A reverse proxy is (usually) an Internet-facing proxy used as a front-end to control and protect
access to a server on a private network, commonly also performing tasks such as load-balancing,
authentication, decryption or caching.

Caching proxy server

A caching proxy server accelerates service requests by retrieving content saved
from a previous request made by the same client or even other clients. Caching
proxies keep local copies of frequently requested resources, allowing large
organizations to significantly reduce their upstream bandwidth usage and cost,
while significantly increasing performance. Most ISPs and large businesses have a
caching proxy. These machines are built to deliver superb file system performance
(often with RAID and journaling) and also contain hot-rodded versions of TCP.
Caching proxies were the first kind of proxy server. An important use of the proxy
server is to reduce hardware cost. An organization may have many systems on
the same network or under the control of a single server, which precludes an
individual Internet connection for each system. In such a case, the
individual systems can be connected to one proxy server, and the proxy server
connected to the main server.
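The caching behavior described above can be sketched in a few lines. This is a toy model: `fetch` stands in for a real upstream HTTP request, and cache expiry and validation, which any real caching proxy needs, are not modeled.

```python
class CachingProxy:
    """Toy caching proxy: serve repeated requests from a local cache
    instead of re-fetching the resource from the upstream server."""

    def __init__(self, fetch):
        self.fetch = fetch            # function: url -> content
        self.cache = {}               # url -> cached content
        self.upstream_requests = 0    # counts trips to the origin server

    def get(self, url):
        if url not in self.cache:     # cache miss: go upstream once
            self.upstream_requests += 1
            self.cache[url] = self.fetch(url)
        return self.cache[url]        # cache hit: no upstream traffic

proxy = CachingProxy(lambda url: "content of " + url)
proxy.get("http://example.com/a")
proxy.get("http://example.com/a")     # second request served from cache
print(proxy.upstream_requests)        # -> 1
```

The saving described in the text falls out directly: however many clients request the same resource, the upstream link carries it only once until the cached copy expires.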

Web proxy

A proxy that focuses on World Wide Web traffic is called a "web proxy". The most
common use of a web proxy is to serve as a web cache. Most proxy programs
provide a means to deny access to URLs specified in a blacklist, thus providing
content filtering. This is often used in a corporate, educational or library
environment, and anywhere else where content filtering is desired. Some web
proxies reformat web pages for a specific purpose or audience, such as for cell
phones and PDAs.

AOL dialup customers used to have their requests routed through an extensible
proxy that 'thinned' or reduced the detail in JPEG pictures. This sped up
performance but caused problems, either when more resolution was needed or
when the thinning program produced incorrect results. This is why in the early
days of the web many web pages would contain a link saying "AOL Users Click "
to bypass the web proxy and to avoid the bugs in the thinning software.

Content-filtering web proxy

A content-filtering web proxy server provides administrative control over the
content that may be relayed through the proxy. It is commonly used in both
commercial and non-commercial organizations (especially schools) to ensure that
Internet usage conforms to acceptable use policy. In some cases users can
circumvent the proxy, since there are services designed to proxy information from
a filtered website through a non-filtered site to allow it through the user's proxy.

Some common methods used for content filtering include: URL or DNS blacklists,
URL regex filtering, MIME filtering, or content keyword filtering. A content
filtering proxy will often support user authentication, to control web access. It also
usually produces logs, either to give detailed information about the URLs accessed
by specific users, or to monitor bandwidth usage statistics. It may also
communicate to daemon-based and/or ICAP-based antivirus software to provide
security against virus and other malware by scanning incoming content in real time
before it enters the network.
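The filtering methods listed above could be combined roughly as follows. This is a sketch only: the blacklist, URL patterns, and keywords are invented examples, and a production filter would parse URLs robustly and inspect MIME types as well.

```python
import re

BLACKLIST = {"badsite.example"}                    # hypothetical domain blacklist
URL_PATTERNS = [re.compile(r"/ads/"),              # URL regex filtering
                re.compile(r"\.exe$")]
KEYWORDS = {"gambling"}                            # content keyword filtering

def allowed(url, content=""):
    """Return True if the request passes all three filtering methods."""
    host = url.split("//", 1)[-1].split("/", 1)[0]
    if host in BLACKLIST:                          # DNS/URL blacklist check
        return False
    if any(p.search(url) for p in URL_PATTERNS):   # regex check on the URL
        return False
    if any(k in content.lower() for k in KEYWORDS):  # keyword check on the body
        return False
    return True

print(allowed("http://good.example/page"))         # -> True
print(allowed("http://badsite.example/page"))      # -> False (blacklisted host)
print(allowed("http://good.example/ads/banner"))   # -> False (URL pattern)
```

A real content-filtering proxy would layer user authentication and logging on top of a check like this, as the paragraph above notes.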

Anonymizing proxy server

An anonymous proxy server (sometimes called a web proxy) generally attempts to
anonymize web surfing. There are different varieties of anonymizers. One of the
more common variations is the open proxy. Because they are typically difficult to
track, open proxies are especially useful to those seeking online anonymity, from
political dissidents to computer criminals. Some users are merely interested in
anonymity for added security, hiding their identities from potentially malicious
websites for instance, or on principle, to facilitate constitutional human rights of
freedom of speech, for instance. The server receives requests from the
anonymizing proxy server, and thus does not receive information about the end
user's address. However, the requests are not anonymous to the anonymizing proxy
server, and so a degree of trust is present between that server and the user. Many of
them are funded through a continued advertising link to the user.

Access control: Some proxy servers implement a logon requirement. In large
organizations, authorized users must log on to gain access to the web. The
organization can thereby track usage to individuals.

Some anonymizing proxy servers may forward data packets with header lines such
as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which
may reveal the IP address of the client. Other anonymizing proxy servers, known
as elite or high anonymity proxies, only include the REMOTE_ADDR header with
the IP address of the proxy server, making it appear that the proxy server is the
client. A website could still suspect a proxy is being used if the client sends
packets which include a cookie from a previous visit that did not use the high
anonymity proxy server. Clearing cookies, and possibly the cache, would solve this
problem.
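The header-based distinction described above can be sketched as a small classifier. This is illustrative only: the header names follow the CGI-style spellings used in the text, and real-world classification is less clean-cut than these three buckets.

```python
def anonymity_level(headers):
    """Classify a proxied request by the headers it reveals (sketch).

    An elite (high-anonymity) proxy forwards none of the revealing
    headers, so the request looks like it came from the proxy itself.
    """
    revealing = {"HTTP_VIA", "HTTP_X_FORWARDED_FOR", "HTTP_FORWARDED"}
    if not (revealing & set(headers)):
        return "elite"
    if headers.get("HTTP_X_FORWARDED_FOR") == headers.get("REMOTE_ADDR"):
        return "anonymous"      # proxy admits itself but hides the client
    return "transparent"        # the client's address is exposed

print(anonymity_level({"REMOTE_ADDR": "203.0.113.7"}))  # -> elite
```

As the paragraph notes, even an "elite" classification can be undermined by side channels such as cookies left over from a non-proxied visit.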

Hostile proxy

Proxies can also be installed in order to eavesdrop upon the dataflow between
client machines and the web. All accessed pages, as well as all forms submitted,
can be captured and analyzed by the proxy operator. For this reason, passwords to
online services (such as webmail and banking) should always be exchanged over a
cryptographically secured connection, such as SSL.
Intercepting proxy server

An intercepting proxy combines a proxy server with a gateway or router
(commonly with NAT capabilities). Connections made by client browsers through
the gateway are diverted to the proxy without client-side configuration (or often
knowledge). Connections may also be diverted from a SOCKS server or other
circuit-level proxies.

Intercepting proxies are also commonly referred to as "transparent" proxies, or
"forced" proxies, presumably because the existence of the proxy is transparent to
the user, or the user is forced to use the proxy regardless of local settings.

Transparent and non-transparent proxy server

The term "transparent proxy" is most often used incorrectly to mean "intercepting
proxy" (because the client does not need to configure a proxy and cannot directly
detect that its requests are being proxied). Transparent proxies can be implemented
using Cisco's WCCP (Web Cache Control Protocol). This proprietary protocol
resides on the router and is configured from the cache, allowing the cache to
determine what ports and traffic is sent to it via transparent redirection from the
router. This redirection can occur in one of two ways: GRE Tunneling (OSI Layer
3) or MAC rewrites (OSI Layer 2).

"A 'transparent proxy' is a proxy that does not modify the request or
response beyond what is required for proxy authentication and
identification".

"A 'non-transparent proxy' is a proxy that modifies the request or response in
order to provide some added service to the user agent, such as group
annotation services, media type transformation, protocol reduction, or
anonymity filtering".

Forced proxy

The term "forced proxy" is ambiguous. It means both "intercepting proxy"
(because it filters all traffic on the only available gateway to the Internet) and its
exact opposite, "non-intercepting proxy" (because the user is forced to configure a
proxy in order to access the Internet).
Forced proxy operation is sometimes necessary due to issues with the interception
of TCP connections and HTTP. For instance, interception of HTTP requests can
affect the usability of a proxy cache, and can greatly affect certain authentication
mechanisms. This is primarily because the client thinks it is talking to a server, and
so request headers required by a proxy cannot be distinguished from headers
that may be required by an upstream server (especially authorization headers). Also, the
HTTP specification prohibits caching of responses where the request contained an
authorization header.

Suffix proxy

A suffix proxy server allows a user to access web content by appending the name
of the proxy server to the URL of the requested content (e.g.
"en.wikipedia.org.6a.nl").

Suffix proxy servers are easier to use than regular proxy servers. The concept
appeared in 2003 in the form of IPv6Gate and in 2004 in the form of the Coral Content
Distribution Network, but the term suffix proxy was only coined in October 2008
by "6a.nl".
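The URL rewriting a suffix proxy performs can be sketched as follows. This is a toy illustration using the "6a.nl" suffix mentioned above; ports, query strings, and HTTPS certificate issues are not handled.

```python
def suffix_proxy_url(url, suffix="6a.nl"):
    """Rewrite a URL so it is fetched through a suffix proxy by
    appending the proxy's name to the hostname."""
    scheme, rest = url.split("://", 1)
    host, sep, path = rest.partition("/")
    return f"{scheme}://{host}.{suffix}{sep}{path}"

print(suffix_proxy_url("http://en.wikipedia.org/wiki/Proxy"))
# -> http://en.wikipedia.org.6a.nl/wiki/Proxy
```

This is why suffix proxies are easy to use: the client needs no configuration at all, only an edited hostname, since the proxy's own DNS wildcard maps `anything.6a.nl` back to `anything`.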

Open proxy server

Because proxies might be used for abuse, system administrators have developed a
number of ways to refuse service to open proxies. Many IRC networks
automatically test client systems for known types of open proxy. Likewise, an
email server may be configured to automatically test e-mail senders for open
proxies. Groups of IRC and electronic mail operators run DNSBLs publishing lists
of the IP addresses of known open proxies, such as AHBL, CBL, NJABL, and
SORBS.

Reverse proxy server

A reverse proxy is a proxy server that is installed in the neighborhood of one or
more web servers. All traffic coming from the Internet and with a destination of
one of the web servers goes through the proxy server. There are several reasons for
installing reverse proxy servers:

 Encryption / SSL acceleration: when secure web sites are created, the SSL
encryption is often not done by the web server itself, but by a reverse proxy
that is equipped with SSL acceleration hardware. See Secure Sockets Layer.
Furthermore, a host can provide a single "SSL proxy" to provide SSL
encryption for an arbitrary number of hosts; removing the need for a
separate SSL Server Certificate for each host, with the downside that all
hosts behind the SSL proxy have to share a common DNS name or IP
address for SSL connections. This problem can partly be overcome by using
the SubjectAltName feature of X.509 certificates.
 Load balancing: the reverse proxy can distribute the load to several web
servers, each web server serving its own application area. In such a case, the
reverse proxy may need to rewrite the URLs in each web page (translation
from externally known URLs to the internal locations).
 Serve/cache static content: A reverse proxy can offload the web servers by
caching static content like pictures and other static graphical content.
 Compression: the proxy server can optimize and compress the content to
speed up the load time.
 Spoon feeding: reduces resource usage caused by slow clients on the web
servers by caching the content the web server sent and slowly "spoon
feeding" it to the client. This especially benefits dynamically generated
pages.
 Security: the proxy server is an additional layer of defense and can protect
against some OS and WebServer specific attacks. However, it does not
provide any protection to attacks against the web application or service
itself, which is generally considered the larger threat.
 Extranet Publishing: a reverse proxy server facing the Internet can be used to
communicate to a firewalled server internal to an organization, providing
extranet access to some functions while keeping the servers behind the
firewalls. If used in this way, security measures should be considered to
protect the rest of your infrastructure in case this server is compromised, as
its web application is exposed to attack from the Internet.
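The URL rewriting mentioned under load balancing can be sketched as a simple substitution from internal server names to the externally known name (all hostnames below are hypothetical):

```python
# Sketch of reverse-proxy URL rewriting: internal server URLs in a
# generated page are rewritten to the externally known address.
# The hostnames used here are hypothetical examples.
def rewrite_urls(html: str, internal_hosts, external_host: str) -> str:
    for host in internal_hosts:
        html = html.replace("http://" + host, "https://" + external_host)
    return html

page = '<a href="http://app-server-1.internal/orders">Orders</a>'
rewritten = rewrite_urls(page,
                         ["app-server-1.internal", "app-server-2.internal"],
                         "www.example.com")
# The link now points at the reverse proxy's public name.
```

A real reverse proxy would do this with header-aware parsing rather than plain string replacement, but the translation step is the same in principle.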

Tunneling proxy server

A tunneling proxy server is a method of defeating blocking policies that are
implemented using ordinary proxy servers. Tunneling proxy servers are used by people who have
been blocked from viewing a particular web site. Most tunneling proxies are themselves
proxy servers, of varying degrees of sophistication, which effectively implement
"bypass policies". A tunneling proxy server is a web-based page that takes a site
that is blocked and "tunnels" it, allowing the user to view blocked pages. The use
of tunneling proxy servers is usually safe with the exception that tunneling proxy
server sites run by an untrusted third party can be run with hidden intentions, such
as collecting personal information, and as a result users are typically advised
against running personal data such as credit card numbers or passwords through a
tunneling proxy server.

Internetworking is the practice of connecting a computer network with other
networks through the use of gateways that provide a common method of routing information
packets between the networks. The resulting system of interconnected networks is called an
internetwork, or simply an internet.

The most notable example of internetworking is the Internet, a network of networks based on
many underlying hardware technologies, but unified by an internetworking protocol standard, the
Internet Protocol Suite, often also referred to as TCP/IP.

Interconnection of networks

Internetworking started as a way to connect disparate types of networking technology, but it
became widespread through the developing need to connect two or more local area networks via
some sort of wide area network. The original term for an internetwork was catenet.The network
elements used to connect individual networks in the ARPANET, the predecessor of the Internet,
were originally called gateways, but the term has been deprecated in this context, because of
possible confusion with functionally different devices. Today the interconnecting gateways are
called Internet routers.

Another type of interconnection of networks often occurs within enterprises at the Link Layer of
the networking model, i.e. at the hardware-centric layer below the level of the TCP/IP logical
interfaces. Such interconnection is accomplished with network bridges and network switches.
This is sometimes incorrectly termed internetworking, but the resulting system is simply a larger,
single subnetwork, and no internetworking protocol, such as Internet Protocol, is required to
traverse these devices. However, a single computer network may be converted into an
internetwork by dividing the network into segments and logically dividing the segment traffic
with routers.

The Internet Protocol is designed to provide an unreliable (not guaranteed) packet service across
the network. The architecture avoids intermediate network elements maintaining any state of the
network. Instead, this function is assigned to the endpoints of each communication session. To
transfer data reliably, applications must utilize an appropriate Transport Layer protocol, such as
Transmission Control Protocol (TCP), which provides a reliable stream. Some applications use a
simpler, connection-less transport protocol, User Datagram Protocol (UDP), for tasks which do
not require reliable delivery of data or that require real-time service, such as video streaming.[1]
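The difference can be illustrated with UDP's connection-less service: a datagram is handed to the network with no handshake and no delivery guarantee (over the loopback interface it will normally arrive, so this sketch works in practice):

```python
import socket

# Minimal sketch of the connection-less UDP service described above:
# a datagram is sent with no handshake and no delivery guarantee.
recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = recv.getsockname()[1]

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"frame-1", ("127.0.0.1", port))

data, addr = recv.recvfrom(1024)     # would block forever if the datagram were lost
recv.close()
send.close()
```

A TCP version of the same exchange would need `connect()`/`accept()` first and would retransmit lost segments itself; with UDP those responsibilities fall to the application.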

Networking models

Two architectural models are commonly used to describe the protocols and methods used in
internetworking.
The Open System Interconnection (OSI) reference model was developed under the auspices of
the International Organization for Standardization (ISO) and provides a rigorous description for
layering protocol functions from the underlying hardware to the software interface concepts in
user applications. Internetworking is implemented in the Network Layer (Layer 3) of the model.

The Internet Protocol Suite, also called the TCP/IP model of the Internet was not designed to
conform to the OSI model and does not refer to it in any of the normative specifications in
Requests for Comment and Internet standards. Despite similar appearance as a layered model, it
uses a much less rigorous, loosely defined architecture that concerns itself only with the aspects
of logical networking. It does not discuss hardware-specific low-level interfaces, and assumes
availability of a Link Layer interface to the local network link to which the host is connected.
Internetworking is facilitated by the protocols of its Internet Layer.

CONNECTION METHODOLOGIES

Computer networks can be classified according to the hardware and software technology that is
used to interconnect the individual devices in the network, such as optical fiber, Ethernet,
Wireless LAN, HomePNA, Power line communication or G.hn.

Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs,
switches, bridges and/or routers. Wireless LAN technology is designed to connect devices
without wiring. These devices use radio waves or infrared signals as a transmission medium.
ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines)
to create a high-speed (up to 1 Gigabit/s) local area network.

Wired technologies

 Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair
wires are ordinary telephone wires which consist of two insulated copper wires twisted
into pairs and are used for both voice and data transmission. The use of two wires twisted
together helps to reduce crosstalk and electromagnetic induction. The transmission speed
ranges from 2 million bits per second to 100 million bits per second.

 Coaxial cable is widely used for cable television systems, office buildings, and other
worksites for local area networks. The cables consist of copper or aluminum wire
wrapped with insulating layer typically of a flexible material with a high dielectric
constant, all of which are surrounded by a conductive layer. The layers of insulation help
minimize interference and distortion. Transmission speed range from 200 million to more
than 500 million bits per second.

 Optical fiber cable consists of one or more filaments of glass fiber wrapped in protective
layers. It transmits light which can travel over extended distances without signal loss.
Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may
reach trillions of bits per second. The transmission speed of fiber optics is hundreds of
times faster than for coaxial cables and thousands of times faster than for twisted-pair
wire.
Wireless technologies

 Terrestrial Microwave – Terrestrial microwaves use Earth-based transmitters and receivers.
The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz
range, which limits all communications to line-of-sight. Relay stations are spaced approximately
30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and
mountain peaks.

 Communications Satellites – The satellites use microwave radio as their telecommunications
medium, which is not deflected by the Earth's atmosphere. The satellites are stationed in space,
typically 22,000 miles above the equator (for geosynchronous satellites). These Earth-orbiting
systems are capable of receiving and relaying voice, data, and TV signals.

 Cellular and PCS Systems – Use several radio communications technologies. The
systems are divided into different geographic areas. Each area has a low-power transmitter or
radio relay antenna device to relay calls from one area to the next.

 Wireless LANs – Wireless local area networks use a high-frequency radio technology
similar to digital cellular and a low-frequency radio technology. Wireless LANs use
spread spectrum technology to enable communication between multiple devices in a
limited area. An example of open-standards wireless radio-wave technology is IEEE
802.11b.

 Bluetooth – A short-range wireless technology. It operates at approximately 1 Mbps with a
range of 10 to 100 meters. Bluetooth is an open wireless protocol for data exchange over
short distances.

 The Wireless Web – The wireless web refers to the use of the World Wide Web through
equipment such as cellular phones, pagers, PDAs, and other portable communications
devices. The wireless web service offers anytime/anywhere connection.

Ethernet
LANs

Networks are collections of separate computers that communicate with each other over a
common cable or radio medium. Local area networks (LANs) tend to be confined to one
geographical location, while wide area networks (WANs) tend to span many physical locations.

Protocols
Network protocols are standards that allow computers to communicate with each other. A
protocol defines how the computers should identify each other on the network, the form that the
data should take in transit, and how the information should be reconstructed once it reaches its
final destination. Protocols also define how to handle damaged transmissions. IPX, TCP/IP,
DECnet, AppleTalk, LAT, SMB, DLC, and NetBEUI are examples of network protocols.

Although each protocol is different, they all use the physical cabling in the same manner, which
allows them to peacefully coexist. This concept is known as "protocol independence", which
means that the physical network and the protocols are not directly connected.

Media and Topologies

One of the most important parts of designing and installing a network is deciding on which
cabling medium and wiring topology to use. There are four major types of media in use today:
Thickwire, thin coax, unshielded twisted pair (UTP), and fiber optic.

Ethernet media are used in two basic topologies called "bus" and "star". The topology defines
how a node (which is any device such as a computer, printer, or hub) is connected to the
network.

A bus topology consists of nodes connected together by a single long cable. Each node "taps"
into the bus and directly communicates with all other nodes on the bus. The major advantages of
this topology are easy expansion, by adding extra "taps", and the lack of a hub. The major
disadvantage is that any break in the cable will cause all nodes on the cable to lose connection
to the network.

In a star topology each cable links exactly two nodes together. A hub serves as the collection
point where many of the connections come together. The major advantage is that any single break
only disables one host. The major disadvantage is the added cost of a hub.

Thickwire Ethernet

Thickwire, or 10BASE5 ethernet, was generally used to create large "backbones". A network
backbone joins many smaller network segments into one large LAN. Thickwire made an
excellent backbone because it can support many nodes in a bus topology and the segment can be
quite long. It can be run from workgroup to workgroup where smaller networks can then be
attached to the backbone. A thickwire segment can be up to 500 meters long and have as many
as 100 nodes attached. New nodes are connected to the cable by drilling into the media with a
device known as a "vampire tap". Nodes must be spaced exactly 2.5 meters apart to prevent
signals from interfering with one another.

Thickwire is being replaced by thin coax and fiber optic cabling in most cases. The expense of
the cable, coupled along with the expense of the vampire taps, has begun to eliminate this form
of cabling.

Thin Coax Ethernet

Thin coax, or 10BASE2 ethernet, offers the advantages of thicknet's bus topology, with reduced
cost and easier installation. Thin coaxial cable is thinner and more flexible than thickwire, but it
can support only 30 nodes per segment, and nodes must be at least 1.5 meters apart.

A thin coax cable has BNC type connectors on both ends. You then connect the segments of
cable together with a "T" connector, and connect the third connection of the "T" to the node.
Each end of the long segment must be terminated with a 50 ohm resistor and grounded.

Twisted Pair Ethernet

Unshielded twisted pair (UTP), or 10BASE-T ethernet, cable is a 4-pair cable that is very
similar to telephone cable in both appearance and end connectors. It comes in a
variety of grades, with level 1 being the lowest quality and level 5 being the best.

Level 1 and 2 cabling should only be used for voice and low speed transmissions (less than 5
Mbps). Level 3 may be used for data speeds up to 16Mbps, while level 4 can handle speeds up to
20Mbps. The finest cable available, level 5, can handle speeds up to 100Mbps.

A 10BASE-T ethernet network uses a star topology, with each node being connected directly to a
hub. The major limitations of this cable are a maximum cable length of 100 meters, and that each
node must have its own connection to the hub.

Fiber Optic Ethernet

Fiber-optic, or 10BASE-FL ethernet, is similar to twisted pair. Fiber-optic cable can handle
100Mbps transmission speeds, but is not affected by electrical emissions or electro-magnetic
interference. Lightning strikes, which can be transmitted by other cabling types, are not
transmitted by fiber-optic cable. The major advantage of fiber-optic cable is the 2 kilometer
maximum length. The disadvantage is the higher cost of cable and equipment.

Fast Ethernet

With the addition of large data streams such as real-time video and audio, networks have begun to
require higher transmission speeds. The new ethernet standard established to handle this
requirement is called Fast Ethernet, or 100BASE-T. It is defined by IEEE standard 802.3u,
which raises the maximum speed from 10 megabits per second to 100 megabits per second.
There are three types of Fast Ethernet currently available:
100BASE-TX for use with level 5 UTP cable
100BASE-FX for use with fiber-optic cable
100BASE-T4 which has an extra two wires for use with level 3 UTP cable

Currently the 100BASE-TX standard has become the most popular due to its close compatibility
with the 10BASE-T ethernet standard.

Hubs

A hub is a central point where multiple cables come together. A hub usually allows 8, 16, or 64
node connections to communicate. If any single connection disconnects or is having problems,
the hub can partition it (remove it from the network) and allow all other nodes to continue to
communicate.

Transceivers

Transceivers, also known as Media Attachment Units (MAUs), are used to connect nodes to the
various ethernet media. Generally the transceiver allows the attachment of 10BASE-T or
10BASE-2 cable on one side, and the connection via a 15-pin D-shell connector, known as an
Attachment Unit Interface (AUI), on the other.

The user would connect the AUI connection to the computer and the 10BASE-T or 10BASE-2
connection to the network media.

Repeaters

Repeaters are used to connect two or more ethernet segments of any media type. They can be
used to extend a segment beyond its maximum length or maximum number of nodes by restoring
signal quality and timing. Repeaters can also be used to connect segments consisting of different
media types together into one larger segment.

It must be noted that a repeater counts as a node on every segment to which it is attached.

Bridges

The function of a bridge is to connect separate ethernets together. Bridges map the ethernet
addresses of the nodes residing on each network segment and then allow only the necessary
traffic to pass through the bridge. A bridge can also filter out certain traffic and prevent it from
passing through. When a packet is received by the bridge, the bridge determines the destination
and source segments. If the segments are the same, the packet is dropped ("filtered"); if the
segments are different, the packet is forwarded to the proper segment. Additionally, bridges
prevent all bad or misaligned packets from spreading by not forwarding them.
Bridges are called "store-and-forward" devices because they look at the whole ethernet packet
before making their filtering or forwarding decisions.
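The address-mapping and filter/forward decision described above can be sketched as a small learning table (a simplified model, not any particular bridge implementation):

```python
# Sketch of a learning bridge's filter/forward decision. The bridge
# records which port each source MAC address was seen on, then drops
# frames whose destination lies on the same segment.
class LearningBridge:
    def __init__(self):
        self.table = {}            # MAC address -> port number

    def handle(self, src, dst, in_port):
        self.table[src] = in_port  # learn the sender's segment
        out = self.table.get(dst)
        if out == in_port:
            return "filter"        # same segment: drop the frame
        if out is None:
            return "flood"         # unknown destination: forward everywhere
        return f"forward:{out}"    # known destination on another segment

bridge = LearningBridge()
bridge.handle("aa", "bb", 1)               # 'bb' unknown -> flood
bridge.handle("bb", "aa", 1)               # 'aa' also on port 1 -> filter
decision = bridge.handle("cc", "aa", 2)    # 'aa' known on port 1 -> forward:1
```

An ethernet switch applies exactly this logic, just across more than two ports at once.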
Switches

An ethernet switch is a bridge which can connect more than two segments together. The idea
behind a switch is that it removes all unneeded traffic from each segment by only forwarding the
traffic needed on that segment, which provides better performance on the network.

Routers

Routers work in a manner similar to switches and bridges in that they filter out network traffic.
Rather than doing so by packet address, however, they filter by specific protocol.
An IP router can divide a network into various subnets so that only traffic destined for particular
IP addresses can pass between segments. The price paid for this type of intelligent forwarding
and filtering is usually a reduction in network speed, because protocol filtering usually takes
more time than packet filtering.
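The subnet decision an IP router makes can be sketched with a standard-library check (the addresses below are illustrative):

```python
import ipaddress

# Sketch of the router's subnet decision: traffic stays on its own
# segment unless source and destination lie in different subnets,
# in which case the packet must cross the router.
def needs_routing(src_ip: str, dst_ip: str, prefix: str) -> bool:
    net = ipaddress.ip_network(prefix)
    src_inside = ipaddress.ip_address(src_ip) in net
    dst_inside = ipaddress.ip_address(dst_ip) in net
    return src_inside != dst_inside   # exactly one endpoint is outside

needs_routing("192.168.1.10", "192.168.1.20", "192.168.1.0/24")   # same subnet
needs_routing("192.168.1.10", "10.0.0.5", "192.168.1.0/24")       # crosses the router
```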

ARCNET, an acronym for Attached Resource Computer NETwork, is a local area
network (LAN) protocol. ARCNET added a small delay on an inactive network as a sending
station waited to receive the token, but Ethernet's performance degraded drastically if too many
peers attempted to broadcast at the same time, due to the time required for the slower processors
of the day to process and recover from collisions. ARCNET had slightly lower best-case
performance (viewed by a single stream), but was much more predictable. ARCNET also has the
advantage that it achieved its best aggregate performance under the highest loading, approaching
asymptotically its maximum throughput. While the best case performance was less than Ethernet,
the general case was equivalent and the worst case was dramatically better. An Ethernet network
could collapse when too busy due to excessive collisions. An ARCNET would keep on going at
normal (or even better) throughput.

ARCNET provides the sender with a concrete acknowledgment (or not) of successful delivery at
the receiving end before the token passes on to the next node, permitting much faster fault
recovery within the higher level protocols (rather than having to wait for a timeout on the
expected replies). ARCNET also doesn't waste network time transmitting to a node not ready to
receive the message, since an initial inquiry (done at hardware level) establishes that the
recipient is able and ready to receive the larger message before it is sent across the
bus. Another advantage that ARCNET enjoyed over collision-based Ethernet is that it guarantees
equitable access to the bus by everyone on the network. Although it might take a short time to
get the token depending on the number of nodes and the size of the messages currently being
sent about, you will always receive it within a predictable maximum time; thus it is
deterministic. This made ARCNET an ideal real-time networking system, which explains its use
in the embedded systems and process control markets. Token Ring has similar qualities, but is
much more expensive to implement than ARCNET.
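The bounded, deterministic token wait described above can be illustrated with a back-of-the-envelope calculation; the timing figures below are illustrative assumptions, not values from the ARCNET specification:

```python
# Sketch of deterministic token-passing access: with round-robin token
# rotation, a node's worst-case wait is bounded by one full rotation in
# which every other node sends one maximum-length frame. The millisecond
# figures passed in are illustrative assumptions only.
def worst_case_wait_ms(nodes: int, max_frame_ms: float, token_pass_ms: float) -> float:
    # every other node may transmit once before the token comes back
    return (nodes - 1) * (max_frame_ms + token_pass_ms)

worst_case_wait_ms(8, 2.0, 0.1)   # bounded regardless of offered load
```

The bound depends only on node count and frame size, never on load, which is why token-passing networks suit real-time and process-control use.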
UNIT III - HART AND FIELD BUS
Evolution of signal standards – HART communication protocol –
Communication modes – HART networks – Control system interface –
HART and OSI model – Filed bus introduction – General field bus
architecture – Basic requirements of field bus standard – Field bus
topology – Inter operability.

Highway Addressable Remote Transducer (HART)

The HART (Highway Addressable Remote Transducer) Protocol is the global standard for
sending and receiving digital information across analog wires between smart devices and control
or monitoring systems. There are several reasons to have a host communicate with smart devices.
These include:
• Device Configuration or re-configuration
• Device Diagnostics
• Device Troubleshooting
• Reading the additional measurement values provided by the device
• Device Health and Status
• Much more: There are many benefits of using HART technology, and more users are reporting
benefits in their projects on a continual basis
The most important performance features of the HART protocol include:
-- proven in practice, simple design, easy to maintain and operate
-- compatible with conventional analog instrumentation
-- simultaneous analog and digital communication
-- option of point-to-point or multidrop operation
-- flexible data access via up to two master devices
-- supports multivariable field devices
-- sufficient response time of approx. 500 ms
-- open de-facto standard freely available to any manufacturer or user

HART Communication Protocols


The HART Protocol makes use of the Bell 202 Frequency Shift Keying (FSK) standard to
superimpose digital communication signals at a low level on top of the 4-20mA.
This enables two-way field communication to take place and makes it possible for additional
information beyond just the normal process variable to be communicated to/from a smart field
instrument. The HART Protocol communicates at 1200 bps without interrupting the 4-20mA
signal and allows a host application (master) to get two or more digital updates per second from
a smart field device. As the digital FSK signal is phase continuous, there is no interference with
the 4-20mA signal.
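The Bell 202 FSK scheme above can be sketched in code. The 1200 Hz / 2200 Hz tones and the 1200 bps rate come from the standard; the sample rate is an arbitrary illustrative choice. Carrying the phase across bit boundaries is what makes the signal phase-continuous:

```python
import math

# Sketch of Bell 202 FSK as used by HART: a '1' bit is sent as a 1200 Hz
# tone and a '0' bit as 2200 Hz, at 1200 bits per second. The sample
# rate below is an illustrative assumption, not part of the standard.
SAMPLE_RATE = 48_000
BIT_RATE = 1_200
MARK, SPACE = 1_200.0, 2_200.0        # tone frequencies for '1' and '0'

def fsk_modulate(bits):
    samples, phase = [], 0.0
    per_bit = SAMPLE_RATE // BIT_RATE  # 40 samples per bit here
    for bit in bits:
        freq = MARK if bit else SPACE
        step = 2 * math.pi * freq / SAMPLE_RATE
        for _ in range(per_bit):
            samples.append(math.sin(phase))
            phase += step              # phase continues into the next bit
    return samples

wave = fsk_modulate([1, 0, 1, 1])      # 4 bits -> 4 * 40 samples
```

Because the phase accumulator is never reset between bits, the waveform has no discontinuities, which is why the superimposed signal averages to zero and does not disturb the 4-20mA loop current.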

HART technology is a master/slave protocol, which means that a smart field (slave) device only
speaks when spoken to by a master. The HART Protocol can be used in various modes such as
point-to-point or multidrop for communicating information to/from smart field instruments and
central control or monitoring systems. HART Communication occurs between two HART-
enabled devices, typically a smart field device and a control or monitoring system.
Communication occurs using standard instrumentation grade wire and using standard wiring and
termination practices.
The HART Protocol provides two simultaneous communication channels: the 4-20mA analog
signal and a digital signal. The 4-20mA signal communicates the primary measured value (in the
case of a field instrument) using the 4-20mA current loop - the fastest and most reliable industry
standard. Additional device information is communicated using a digital signal that is
superimposed on the analog signal. The digital signal contains information from the device
including device status, diagnostics, additional measured or calculated values, etc. Together, the
two communication channels provide a low-cost and very robust complete field communication
solution that is easy to use and configure. The HART Protocol provides for up to two masters
(primary and secondary). This allows secondary masters such as handheld communicators to be
used without interfering with communications to/from the primary master, i.e.
control/monitoring system

The HART Protocol permits all digital communication with field devices in either point-to-point
or multidrop network configurations:
Multidrop Configuration
There is also an optional "burst" communication mode where a single slave device can
continuously broadcast a standard HART reply message. Higher update rates are possible with
this optional burst communication mode and use is normally restricted to point-to-point
configuration.
The Benefits of HART Protocol Communication
Digital Capability
• Access to all instrument parameters & diagnostics
• Supports multivariable instruments
• On-line device status
Analog Compatibility
• Simultaneous analog & digital communication
• Compatible with existing 4-20 mA equipment & wiring Interoperability
• Fully open de facto standard
• Common Command and data structure
• Enhanced by Device Description Language
Availability
• Field proven technology with more than 1,400,000 installations
• Large and growing selection of products
• Used by more smart instruments than any other in the industry

HART technology can provide

• Leverage the capabilities of a full set of intelligent device data for operational improvements.
• Gain early warnings to variances in device, product or process performance.
• Speed the troubleshooting time between the identification and resolution of problems.
• Continuously validate the integrity of loops and control/automation system strategies.
• Increase asset productivity and system availability.
Increase Plant Availability
• Integrate devices and systems for detection of previously undetectable problems.
• Detect device and/or process connection problems real time.
• Minimize the impact of deviations by gaining new, early warnings.
• Avoid the high cost of unscheduled shutdowns or process disruptions.
Reduce Maintenance Costs
• Quickly verify and validate control loop and device configuration.
• Use remote diagnostics to reduce unnecessary field checks.
• Capture performance trend data for predictive maintenance diagnostics.
• Reduce spares inventory and device management costs.
Improve regulatory compliance
• Enable automated record keeping of compliance data.
• Facilitates automated safety shutdown testing.
• Raise SIL/safety integrity level with advanced diagnostics.
HART PROTOCOL - FORMAT

Fieldbus
The major fieldbus area is divided into two major groups:
 WorldFIP (World Factory Instrumentation Protocol)
 ISP (Interoperable Systems Project)

Two standards bodies known as the IEC (International Electrotechnical Commission) and the
ISA (Instrument Society of America) are currently working on an international standard known as
SP50. This standard will hopefully allow the manufacturers of fieldbus equipment all around the
world to produce compatible instruments for industrial applications. WorldFIP, ISP and FF have
pledged that they will eventually evolve their products to meet the standard when it arrives.

World Factory Instrumentation Protocol


The World Factory Instrumentation Protocol (WorldFIP) was developed from an earlier French
National Standard known as NFC 46-600, or more commonly as FIP. It is a consortium of
companies producing field bus instruments that use a messaging system. Time critical options
are supposedly guaranteed in a WorldFIP implementation. WorldFIP plans to add a device
description tool, known as the WorldFIP Device Builder. The Device Builder will automatically
inform the control system what features and parameters each instrument connected to the bus
has. WorldFIP is divisional in nature with a UK, European and North American division. Each
division is motivated by similar goals and similar implementations, but each operates almost
autonomously from the others.

Interoperable Systems Project

The Interoperable Systems Project (ISP) implementation is based on the German National
Standard DIN STD19245, also known as Process Field Bus, or Profibus. Profibus is similar to
the token passing network commonly implemented on many networks today. The ISP extension
to Profibus is the Device Description Language (DDL). DDL allows an instrument added to the
bus system to communicate to a master control what its functions and capabilities are.

Fieldbus Foundation
On a positive note ISP and WorldFIP (North American division) have been working together
since late 1993 on a possible merger of their technology. A single solution has been what
industry has needed for a long time, so in June of 1994, the Fieldbus Foundation (FF) was set up
between ISP and WorldFIP (NA). However, at least 1 to 2 years of delay is expected before a
complete product can be produced.

Profibus-ISP

Effectively a breakaway group of the Profibus and ISP organisations, this group
announced to the world that they will have their own fieldbus communications system ready in
approximately June/July 1994. Profibus-ISP is derived from the Profibus and ISP products, and
hence has the features of both with some small additions. At the time of writing, little
information on Profibus-ISP and the Fieldbus Foundation was available.

IEC/ISA SP50

The ISA/IEC are developing a standard with the working name of SP50. The standard will
follow the ISO/OSI seven layer model for data communications with an additional eighth layer
which focuses on product interoperability.

Current progress on the SP50 is as follows:

 Physical - Completed. Specification includes
o 31.25 kbit/sec, 1 Mbit/sec and 2.5 Mbit/sec data transfer rates.
o Requirements for fieldbus component parts.
o Media and network configuration requirements for data integrity and
interoperability between devices.

Fieldbus Topology
Field bus technology consists of three parts:

1 – The Physical Layer;
2 – The Communication Stack;
3 – The User Application.

The Physical Layer is OSI layer 1. The Data Link Layer (DLL) is OSI layer 2. The Field bus
Message Specification (FMS) is OSI layer 7. The Communication Stack is comprised of layers 2
and 7 in the OSI model. The field bus does not use OSI layers 3, 4, 5 and 6. The Field bus
Access Sub layer (FAS) maps the FMS onto the DLL. The User Application is not defined by the
OSI model. The Field bus Foundation has specified a User Application model. Each layer in the
communication system is responsible for a portion of the message that is transmitted on the
fieldbus.

Eight Bit OCTET to Transfer the data

The Physical Layer is defined by approved standards from the International Electrotechnical
Commission (IEC) and the International Society of Measurement and Control (ISA).

The Physical Layer receives messages from the communication stack and converts the messages
into physical signals on the fieldbus transmission medium and vice-versa. Fieldbus signals are
encoded using the well-known Manchester Biphase-L technique. The signal is called
"synchronous serial" because the clock information is embedded in the serial data stream. Data
is combined with the clock signal to create the fieldbus signal, as shown in the figure below.
The receiver of the fieldbus signal interprets a positive transition in the middle of a bit time
as a logical "0" and a negative transition in the middle of a bit time as a logical "1".
The Data Link Layer (DLL) controls transmission of messages onto the fieldbus. It manages
access to the fieldbus through a deterministic centralized bus scheduler called the Link Active
Scheduler (LAS). The DLL is a subset of the emerging IEC/ISA DLL standard.
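The Manchester Biphase-L rule above can be sketched in a few lines; the levels and the bit conventions follow the description above (positive mid-bit transition = "0", negative = "1"), and this is an illustration rather than the full H1 physical-layer signaling:

```python
# Sketch of Manchester Biphase-L encoding as described above: each bit
# becomes two half-bit levels, so every bit cell carries a mid-bit
# transition and the clock is embedded in the data stream.
def manchester_encode(bits):
    levels = []
    for bit in bits:
        if bit:
            levels += [+1, -1]   # high -> low: negative transition = "1"
        else:
            levels += [-1, +1]   # low -> high: positive transition = "0"
    return levels

def manchester_decode(levels):
    # a falling pair decodes as "1", a rising pair as "0"
    return [1 if levels[i] > levels[i + 1] else 0
            for i in range(0, len(levels), 2)]

encoded = manchester_encode([1, 0, 0, 1])
decoded = manchester_decode(encoded)     # recovers [1, 0, 0, 1]
```

Because every bit cell contains a transition, the receiver can recover the clock from the signal itself, which is the "synchronous serial" property noted above.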

Two types of devices are defined in the DLL specification:

• Basic Device

• Link Master – Link Master devices are capable of becoming the Link Active Scheduler (LAS).
Basic devices do not have the capability to become the LAS.
UNIT IV - MODBUS and PROFIBUS PA/DP/FMS AND FF

MODBUS protocol structure – function codes – troubleshooting
Profibus: Introduction – profibus protocol stack – profibus communication
model –communication objects – system operation – troubleshooting – review of
foundation fieldbus.

MODBUS

Figure 1 shows how devices might be interconnected in a hierarchy of networks that employ widely
differing communication techniques. In message transactions, the Modbus protocol embedded into each
network’s packet structure provides the common language by which the devices can exchange data.

Transactions on Modbus Networks

Standard Modbus ports on Modicon controllers use an RS–232C compatible serial interface that defines
connector pin outs, cabling, signal levels, transmission baud rates, and parity checking. Controllers can be
networked directly or via modems. Controllers communicate using a master–slave technique, in which
only one device (the master) can initiate transactions (called ‘queries’). The other devices (the slaves)
respond by supplying the requested data to the master, or by taking the action requested in the query.
Typical master devices include host processors and programming panels. Typical slaves include
programmable controllers. The master can address individual slaves, or can initiate a broadcast message
to all slaves. Slaves return a message (called a ‘response’) to queries that are addressed to them
individually. Responses are not returned to broadcast queries from the master.
The Modbus protocol establishes the format for the master’s query by placing into it the device (or
broadcast) address, a function code defining the requested action, any data to be sent, and an error–
checking field. The slave’s response message is also constructed using Modbus protocol. It contains fields
confirming the action taken, any data to be returned, and an error–checking field. If an error occurred in
receipt of the message, or if the slave is unable to perform the requested action, the slave will construct an
error message and send it as its response.

Transactions on Other Kinds of Networks

In addition to their standard Modbus capabilities, some Modicon controller models can communicate over
Modbus Plus using built–in ports or network adapters, and over MAP, using network adapters.
On these networks, the controllers communicate using a peer–to–peer technique, in which any controller
can initiate transactions with the other controllers. Thus a controller may operate either as a slave or as a
master in separate transactions. Multiple internal paths are frequently provided to allow concurrent
processing of master and slave transactions.

Modbus Protocol

At the message level, the Modbus protocol still applies the master–slave principle even though the
network communication method is peer–to–peer. If a controller originates a message, it does so as a
master device, and expects a response from a slave device. Similarly, when a controller receives a
message it constructs a slave response and returns it to the originating controller.

The Query–Response Cycle

The Query: The function code in the query tells the addressed slave device what kind of action to
perform. The data bytes contain any additional information that the slave will need to perform the
function. For example, function code 03 will query the slave to read holding registers and respond with
their contents. The data field must contain the information telling the slave which register to start at and
how many registers to read. The error check field provides a method for the slave to validate the integrity
of the message contents.
The Response: If the slave makes a normal response, the function code in the response is an echo of the
function code in the query. The data bytes contain the data collected by the slave, such as register values
or status. If an error occurs, the function code is modified to indicate that the response is an error
response, and the data bytes contain a code that describes the error. The error check field allows the
master to confirm that the message contents are valid.
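The query–response cycle above can be sketched in code. The following is a minimal illustration, not vendor firmware: it builds a function-03 (Read Holding Registers) query in RTU mode, where the error check field is the standard Modbus CRC-16. The slave address, starting register, and register count used below are hypothetical.

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: initial value 0xFFFF, reflected polynomial 0xA001."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

def build_read_holding_query(slave: int, start_reg: int, count: int) -> bytes:
    """Build a function-03 (Read Holding Registers) query frame, RTU mode."""
    pdu = bytes([slave, 0x03,
                 (start_reg >> 8) & 0xFF, start_reg & 0xFF,  # which register to start at
                 (count >> 8) & 0xFF, count & 0xFF])         # how many registers to read
    crc = crc16_modbus(pdu)
    # The CRC is appended to the frame low byte first.
    return pdu + bytes([crc & 0xFF, (crc >> 8) & 0xFF])

# Hypothetical query: slave 0x11, read 3 registers starting at 0x006B.
query = build_read_holding_query(slave=0x11, start_reg=0x006B, count=3)
```

A normal response would echo function code 0x03 and carry the register contents; an exception response would set the high bit of the function code and carry an error code instead.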

Modbus Message Framing

In either of the two serial transmission modes (ASCII or RTU), a Modbus message is placed by the
transmitting device into a frame that has a known beginning and ending point. This allows receiving
devices to begin at the start of the message, read the address portion and determine which device is
addressed (or all devices, if the message is broadcast), and to know when the message is completed.
Partial messages can be detected and errors can be set as a result. On networks like MAP or Modbus Plus,
the network protocol handles the framing of messages with beginning and end delimiters that are specific
to the network. Those protocols also handle delivery to the destination device, making the Modbus
address field embedded in the message unnecessary for the actual transmission. (The Modbus address is
converted to a network node address and routing path by the originating controller or its network adapter.)

ASCII Framing
In ASCII mode, messages start with a ‘colon’ ( : ) character (ASCII 3A hex), and end with a ‘carriage
return – line feed’ (CRLF) pair (ASCII 0D and 0A hex). The allowable characters transmitted for all other
fields are hexadecimal 0–9, A–F. Networked devices monitor the network bus continuously for the
‘colon’ character. When one is received, each device decodes the next field (the address field) to
find out if it is the addressed device. Intervals of up to one second can elapse between characters within
the message. If a greater interval occurs, the receiving device assumes an error has occurred.
A typical message frame is shown below.
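As a sketch of this framing (using the LRC check described under “LRC Checking” later in this unit; the slave address, function code, and data bytes are hypothetical):

```python
def lrc(data: bytes) -> int:
    """Sum the bytes, discard carries, then two's-complement the result."""
    return (-sum(data)) & 0xFF

def build_ascii_frame(slave: int, function: int, data: bytes) -> bytes:
    """Wrap a Modbus message in ASCII framing: ':' ... LRC CRLF."""
    payload = bytes([slave, function]) + data
    body = payload + bytes([lrc(payload)])
    # Each payload byte is transmitted as two hexadecimal characters (0-9, A-F);
    # the colon and CRLF delimit the frame but are not part of the LRC.
    return b":" + body.hex().upper().encode("ascii") + b"\r\n"

frame = build_ascii_frame(0x11, 0x03, bytes([0x00, 0x6B, 0x00, 0x03]))
# frame == b':1103006B00037E\r\n'
```

A receiver simply scans for the colon, accumulates hex characters until CRLF, and checks the address field and LRC.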

RTU Framing
In RTU mode, messages start with a silent interval of at least 3.5 character times. This is most easily
implemented as a multiple of character times at the baud rate that is being used on the network (shown as
T1–T2–T3–T4 in the figure below). The first field then transmitted is the device address. The allowable
characters transmitted for all fields are hexadecimal 0–9, A–F. Networked devices monitor the network
bus continuously, including during the ‘silent’ intervals. When the first field (the address field) is
received, each device decodes it to find out if it is the addressed device. Following the last transmitted
character, a similar interval of at least 3.5 character times marks the end of the message. A new message
can begin after this interval. The entire message frame must be transmitted as a continuous stream. If a
silent interval of more than 1.5 character times occurs before completion of the frame, the receiving
device flushes the incomplete message and assumes that the next byte will be the address field of a new
message. Similarly, if a new message begins earlier than 3.5 character times following a previous
message, the receiving device will consider it a continuation of the previous message. This will set an
error, as the value in the final CRC field will not be valid for the combined messages. A typical message
frame is shown below.
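The silent-interval thresholds above depend only on the baud rate. A small sketch of the arithmetic, assuming the common 11-bit character frame on the wire:

```python
def rtu_intervals(baud_rate: int, bits_per_char: int = 11):
    """Character time and the 1.5 / 3.5 character-time thresholds, in seconds.

    An RTU character is typically 11 bits on the wire:
    1 start + 8 data + 1 parity + 1 stop (or 2 stop bits with no parity).
    """
    t_char = bits_per_char / baud_rate
    return t_char, 1.5 * t_char, 3.5 * t_char

t_char, t1_5, t3_5 = rtu_intervals(9600)
# At 9600 baud: roughly 1.15 ms per character, so a silent gap longer than
# about 1.7 ms mid-frame flushes the message, and a gap of about 4.0 ms
# marks the end of a frame.
```

A receiver restarts an inter-character timer on every byte; expiry of the 1.5-character timer mid-frame discards the message, and expiry of the 3.5-character timer marks the frame boundary.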
Error Checking Methods

Standard Modbus serial networks use two kinds of error checking. Parity checking (even or odd) can be
optionally applied to each character. Frame checking (LRC or CRC) is applied to the entire message. Both
the character check and message frame check are generated in the master device and applied to the
message contents before transmission. The slave device checks each character and the entire message
frame during receipt. The master is configured by the user to wait for a predetermined timeout interval
before aborting the transaction. This interval is set to be long enough for any slave to respond normally. If
the slave detects a transmission error, the message will not be acted upon. The slave will not construct a
response to the master. Thus the timeout will expire and allow the master’s program to handle the error.
Note that a message addressed to a nonexistent slave device will also cause a timeout.
Other networks such as MAP or Modbus Plus use frame checking at a level above the Modbus contents
of the message. On those networks, the Modbus message LRC or CRC check field does not apply. In the
case of a transmission error, the communication protocols specific to those networks notify the
originating device that an error has occurred, and allow it to retry or abort according to how it has been
setup. If the message is delivered, but the slave device cannot respond, a timeout error can occur which
can be detected by the master’s program.

Parity Checking

Users can configure controllers for Even or Odd Parity checking, or for No Parity checking. This will
determine how the parity bit will be set in each character. If either Even or Odd Parity is specified, the
quantity of 1 bits will be counted in the data portion of each character (seven data bits for ASCII mode, or
eight for RTU). The parity bit will then be set to a 0 or 1 to result in an Even or Odd total of 1 bits.
For example, these eight data bits are contained in an RTU character frame: 1100 0101

The total quantity of 1 bits in the frame is four. If Even Parity is used, the frame’s parity bit will be a 0,
making the total quantity of 1 bits still an even number (four).
If Odd Parity is used, the parity bit will be a 1, making an odd quantity (five).

When the message is transmitted, the parity bit is calculated and applied to the frame of each character.
The receiving device counts the quantity of 1 bits and sets an error if they are not the same as configured
for that device (all devices on the Modbus network must be configured to use the same parity check
method).
Note that parity checking can only detect an error if an odd number of bits are picked up or dropped in a
character frame during transmission. For example, if Odd Parity checking is employed, and two 1 bits are
dropped from a character containing three 1 bits, the result is still an odd count of 1 bits. If No Parity
checking is specified, no parity bit is transmitted and no parity check can be made. An additional stop bit
is transmitted to fill out the character frame.
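The parity calculation described above can be captured in a few lines. This is an illustrative sketch (a UART normally does this in hardware):

```python
def parity_bit(data_bits: int, parity: str) -> int:
    """Return the parity bit so the total count of 1 bits in the character
    is even ('even') or odd ('odd')."""
    ones = bin(data_bits).count("1")
    if parity == "even":
        return ones & 1        # add a 1 only if the data count is currently odd
    return (ones & 1) ^ 1      # 'odd': add a 1 if the data count is currently even

# The worked example from the text: RTU character 1100 0101 (four 1 bits).
assert parity_bit(0b11000101, "even") == 0   # total stays even (four)
assert parity_bit(0b11000101, "odd") == 1    # total becomes odd (five)
```

Note, as the text explains, that this check only catches an odd number of flipped bits per character.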
LRC Checking

In ASCII mode, messages include an error–checking field that is based on a Longitudinal Redundancy
Check (LRC) method. The LRC field checks the contents of the message, exclusive of the beginning
‘colon’ and ending CRLF pair. It is applied regardless of any parity check method used for the individual
characters of the message. The LRC field is one byte, containing an 8–bit binary value. The LRC value is
calculated by the transmitting device, which appends the LRC to the message. The receiving device
calculates an LRC during receipt of the message, and compares the calculated value to the actual value it
received in the LRC field. If the two values are not equal, an error results. The LRC is calculated by
adding together successive 8–bit bytes of the message, discarding any carries, and then two’s
complementing the result. It is performed
on the ASCII message field contents excluding the ‘colon’ character that begins the message, and
excluding the CRLF pair at the end of the message. In ladder logic, the CKSM function calculates a LRC
from the message contents.
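The LRC algorithm described above (add bytes, discard carries, two's-complement) can be worked step by step in code; the message bytes below are a hypothetical function-03 query:

```python
def lrc(message: bytes) -> int:
    """Longitudinal Redundancy Check, as described in the text."""
    total = 0
    for byte in message:
        total = (total + byte) & 0xFF   # add successive bytes, discard carries
    return (-total) & 0xFF              # two's complement of the 8-bit sum

# Field contents of a hypothetical query (the leading colon and trailing
# CRLF are excluded from the calculation):
msg = bytes([0x11, 0x03, 0x00, 0x6B, 0x00, 0x03])
assert lrc(msg) == 0x7E   # sum = 0x82; two's complement = 0x7E
```

The receiver runs the same calculation over the received fields and compares the result with the transmitted LRC byte.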

PROFIBUS

PROFIBUS(Process Fieldbus) is a fieldbus definition created in 1989 for a wide range of


applications in the field of factory automation and process automation. This technology was developed by
Siemens. It is suitable for time critical applications and complex communication tasks. The PROFIBUS is
oriented to the OSI reference model and uses only the physical, data link and application layers of the
OSI model.
The three different PROFIBUS versions are
1. PROFIBUS-DP
2. PROFIBUS-PA
3. PROFIBUS-FMS

Salient features of PROFIBUS are summarized in Table 1-1. Network speed ranges from 9.6 kbit/s
to 12,000 kbit/s, although the network speed may vary depending on the medium or
technology used. For example, MBP-IS, the medium used for PROFIBUS PA, operates at a single baud
rate of 31.25kbit/s. An interesting feature of PROFIBUS is that the protocol is the same no matter what
transmission medium or technology is used. Thus, PROFIBUS DP (utilizing RS-485 copper, fiber optics,
infrared, and so on) and PROFIBUS PA (utilizing MBP-IS), use the same protocol and the devices can
exchange cyclic I/O data with a standard DP master.
Table 1-1 — PROFIBUS Features

PROFIBUS-DP

The PROFIBUS-DP (PROFIBUS-Decentralized Periphery) is the fastest fieldbus standard. DP refers to the
distributed I/O devices connected via a fast serial data link with a central controller.
1. Uses RS-485 interface link
2. Controller is master and I/O devices are slaves
3. Data rate is 9.6kbps-12Mbps.
4. Uses bus topology with STP
5. Communication is half-duplex and uses NRZ coding

Cyclic communication

The PROFIBUS uses cyclic transfer of data between masters and slaves. The read and write operations of
class-1 master with slaves cyclically repeat and the cyclic communication forms the basis for automation.

Acyclic communication

PROFIBUS also allows for acyclic communication between class-2 masters and slaves. A class-1 master
will automatically detect the presence of a class-2 master. When the class-1 master completes its polling
cycle, it will pass a token to the class-2 master, granting it temporary access to the bus. The class-2 master
which currently holds the token has the opportunity to exchange data with all the slaves within a specific
period of time called the token hold-time.
PROFIBUS message structure: A PROFIBUS message, called a telegram, may contain up to 244 bytes of
data and 11 bytes of telegram header. The fields in the header include start delimiter (SD), data
length (LE), data length repeat (LEr), destination address (DA), source address (SA), function code (FC),
destination service access point (DSAP), source service access point (SSAP), data units (DU), frame
checking sequence (FCS) and end delimiter (ED).

Format of PROFIBUS telegram

SD   LE   LEr   SD   DA   SA   FC   DSAP   SSAP   DU         FCS   ED

1    1    1     1    1    1    1    1      1      variable   1     1
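The telegram layout can be illustrated with a small builder. This is a sketch only: it assumes the standard delimiter values SD2 = 0x68 and ED = 0x16 for variable-length telegrams, assumes the FCS is the arithmetic sum of the bytes from DA through DU modulo 256, and ignores the address-extension bits that flag the presence of DSAP/SSAP in a real implementation. All field values used are hypothetical.

```python
SD2, ED = 0x68, 0x16   # start delimiter (variable length) and end delimiter

def build_sd2_telegram(da: int, sa: int, fc: int,
                       dsap: int, ssap: int, du: bytes) -> bytes:
    """Sketch of a variable-length (SD2) PROFIBUS telegram.

    LE counts the bytes from DA through the end of DU and is repeated as
    LEr; FCS is assumed here to be the sum of those same bytes modulo 256.
    """
    body = bytes([da, sa, fc, dsap, ssap]) + du
    le = len(body)
    fcs = sum(body) % 256
    return bytes([SD2, le, le, SD2]) + body + bytes([fcs, ED])

# Hypothetical telegram: master at address 2 to slave at address 34.
t = build_sd2_telegram(da=0x22, sa=0x02, fc=0x6D, dsap=0x3C, ssap=0x3E,
                       du=bytes([0x01, 0x02]))
```

The repeated length (LEr) lets a receiver validate LE before committing to read a variable number of data bytes.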

PROFIBUS is an open, vendor-independent protocol that became part of the international standard
IEC 61158 in 2000. Though the protocol is mature, it is not static. Over time, it has been extended into
new application areas by working groups of employees from companies that have similar products and
target application areas. These extensions have always been developed under the requirement that they
ensure “backward compatibility.” This pocket guide is not the place to discuss the details of these
extensions and will concentrate primarily on the basic DPV0 operations; however, Figure 1-1 lists the
extensions that have been standardized in the past few years.

Figure 1-1 — PROFIBUS DP Extensions

As Figure 1-1 shows, DPV0 is the foundation for PROFIBUS and was the first version after FMS. DPV0
came from optimizations to FMS, the original PROFIBUS protocol, to support fast I/O data exchange.
PROFIBUS DPV1 added extensions that allowed run-time reading/writing of parameters for more
sophisticated devices such as intelligent drives, for example, and PROFIBUS PA field instruments, such
as valve positioners, pressure transmitters, and so on. DPV2 added extensions primarily so that motion-
control applications can be performed directly across PROFIBUS rather than requiring a secondary
motion-control bus.
The four basic aspects of a PROFIBUS DP network:

• Master/Slave Concept
• Device and System Startup
• Cyclic I/O Data Exchange:
• Device Diagnostic Reporting:

Master/Slave Concept

PROFIBUS DP is a network that is made up of two types of devices connected to the bus: master devices
and slave devices. It is a bi-directional network, meaning that one device, a master, sends a request to a
slave, and the slave responds to that request. Thus, bus contention is not a problem because only one
master can control the bus at any time, and a slave device must respond immediately to a request from a
master. Since a request from a master to a slave device is heard by all devices attached to the bus, some
mechanism must exist for a slave device to recognize that a message is designated for it and then respond
to the sender. Hence, each device on a PROFIBUS network must have an assigned address. For
specifying the address, most devices have either rotary switches (decimal or hexadecimal) or DIP
switches. A few devices require that their address be set across the bus using a configuration tool.
The PROFIBUS protocol supports addresses from 0 to 127. However, addresses 126 and 127 have special
uses and may not be assigned to operational devices. Address 0 has become something of a default
address that vendors assign to network configuration and/or programming tools attached to the bus. Thus,
the addresses that may be used in practice for operational devices – for example, PLCs, I/O nodes, drives,
encoders, and the like – are 1 to 125.
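The addressing rules above reduce to a one-line check; a small sketch (the convention of leaving address 0 for configuration/programming tools is reflected here as well):

```python
RESERVED = {126, 127}   # special uses per the protocol; not assignable to devices

def is_operational_address(addr: int) -> bool:
    """True if addr may be assigned to an operational device (PLC, I/O node,
    drive, encoder, ...). Address 0 is valid on the bus but is conventionally
    left for configuration/programming tools, so it is excluded here too."""
    return 1 <= addr <= 125

assert is_operational_address(1) and is_operational_address(125)
assert not is_operational_address(0) and not is_operational_address(126)
```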
Device and System Startup

The user specifies which slave devices the master should find on the bus as well as what information is to
be transferred from the master to each slave during this startup phase. All of the information that the
master must know to start up the bus comes from a configuration database file that is generated by a
PROFIBUS configuration tool. Each vendor of PROFIBUS master devices offers a configuration tool for
generating the database file for their masters. However, once one has learned how to use any of these
tools, it is generally quite easy to transfer this knowledge to another tool because all PROFIBUS
configuration tools must share certain common functionality. A configuration tool for cyclic I/O operation
must be able to do the following:

• process GSD (device description) files and maintain a hardware catalog of devices to be configured on
the bus
• allow the PROFIBUS device address to be specified
• allow the specification of the input and output data to be transferred between master and slave
• allow certain startup parameters to be selected in order to activate specific operating modes or features
of the device
• allow selection of the system baud rate
• generate the database file so it can be used by the master

At the same time a vendor develops a slave device, it must develop a device description (GSD) file. This
file completely describes the PROFIBUS functionality of the device – for example, baud rates supported,
possible input/output data configurations, startup parameter choices, and so on. These GSD files can
typically be downloaded via the Internet either from www.profibus.com or from an individual vendor’s
web site. Once you “install” the GSD file for a device into the PROFIBUS configuration tool, it will
appear in the tool’s hardware catalog, which enables it to be configured for bus operation. The installation
process varies for different configuration tools, although it is extremely simple in any case. In some tools,
installation consists of nothing more than copying the GSD file into a “gsd subdirectory,” while in others
one simply “imports” the new GSD file into the hardware catalog by selecting that option from a menu.
Once all the appropriate GSD files are installed into the configuration tool, you can define a bus
configuration. This is a straightforward process. Pick the appropriate master from the master device list in
the hardware catalog and assign a PROFIBUS address. Select a slave device, assign the PROFIBUS
address, specify the I/O to be exchanged, and select the appropriate parameters for the desired operation
of the device. You then save this bus configuration file and generate the configuration database. The
mechanism for loading the database file into the master may vary with each vendor. The most common
mechanism is to download the file into the master via a serial port. The master then has the information
necessary to start up all the devices in its configuration. This information is stored in retentive memory.

Functionality
The master must now determine if the devices at the assigned addresses contained within the
configuration database are physically on the bus and initialize them for “operational” or “data exchange”
mode. To get the devices into this mode, a PROFIBUS master goes through a well-defined sequence of
interactions with each of the slave devices in its bus configuration. Figure 1-3 shows the steps in the
startup sequence for a slave device. For instance, if the master device experiences a power loss, when it
powers back up it uses the configuration data- base in retentive memory to go through the startup
sequence with each device in its configuration to get all devices back into operational mode. If a slave
device fails and must be replaced, the master recognizes that a replacement device of the same type and
with the same PROFIBUS address has been attached to the bus.

Figure 1-3 — Slave Startup Sequence

Cyclic I/O Data Exchange

After the bus system has been “started up”, the normal interaction between a master and each of
its assigned slaves is to exchange I/O data, as shown in Figure 1-4. The master, a PLC with a PROFIBUS
interface, for example, sends output data to a slave device in its configuration. The addressed slave
immediately responds with its input data. It is important to grasp this concept of output data sent from the
master to the slave and input data returned from the slave to the master. This “directional” attribute of the
I/O is identical to I/O that is hardwired directly to backplane I/O in a PLC rack. It typically maps into the
input and output areas of PLC memory, as shown in Figure 1-5, and can generally be accessed by the
PLC logic program in exactly the same way as backplane I/O. This cyclic (repeated) I/O data exchange
takes place asynchronously to the control logic scan and is repeated as quickly as possible. Data exchange
takes place every cycle for every slave in a master’s configuration. At the most commonly used baud rate
of 1,500kbit/s, data exchange cycles are normally repeated many times during a single control-logic scan.
Figure 1-4 — Master/Slave Data Exchange

Figure 1-5 — Bus I/O Maps Into PLC Memory

Although 85 percent or more of PROFIBUS installations are single-master systems, multimaster systems,
as illustrated in Figure 1-6, exist and work quite well. In such a system, each master is given control of
the bus for a short time and during this time it exchanges I/O data with each of its assigned slaves. It then
passes control to the next master on the bus, via a short message called a “token”, and that master
exchanges I/O data with each of its slaves. Only the master holding the token is allowed to initiate
communication to its slaves. Once the last master in the “logical token ring” has gone through its data-
exchange cycle, it passes control back to the first master, and the overall operation starts again.

Figure 1-6 — Multi-Master/Slave Interactions


Device Diagnostic Reporting

The PROFIBUS protocol offers quite extensive diagnostic capabilities that device vendors can design into
their products. PROFIBUS offers the capability to diagnose an operations problem all the way down to,
for example, an overvoltage on an analog input or a broken wire on an output. During a data-exchange
cycle, a PROFIBUS slave device can indicate to the master that it has detected a diagnostic condition. In
the next data-exchange cycle, the master fetches the diagnostic information from the slave. A device can
report diagnostic information in four different formats: standard diagnostics, device-related diagnostics,
module-related diagnostics, and channel-related diagnostics. Any PROFIBUS master must save any
diagnostic data from a slave in order for your control program to access it. Each master does it in a
slightly different way, so you need to familiarize yourself with your particular master. The standard
diagnostics (six bytes) that every slave device is required to report contain information that is generally
related to startup problems. For example, if the I/O configuration that was set up in the configuration tool
does not match what the slave expects, it will report a “configuration fault.” If you configured a slave
device at an address in your configuration file, but the slave actually found on the bus at that address is a
different device, it will report a “parameterization fault.” The six standard diagnostic bytes are used to report faults that are
common across all slave devices. A vendor can use the device-related diagnostics format to report
information that may be specific to the particular device or application area, and that cannot be reported
using the standard module- related or channel-related diagnostic formats. The format of this type of
diagnostic information is defined by the vendor—its detailed structure is not covered in the PROFIBUS
standard. Module-related diagnostics are used to report diagnostics for a modular slave – that is, one that
consists of an “intelligent” head module plus plug-in modules. This format gives the head module the
capability to report that a particular plug-in module has a diagnostic. It does not tell what the diagnostic is
– just that a particular module has a problem. The format for module related diagnostic information is
defined in the PROFIBUS standard. This means that once a logic block is constructed to decode this type
of diagnostic information, the decode logic will work for any device from any vendor that reports
module-related diagnostics. The last type of diagnostic block is used to report channel-related diagnostics.
A device can use this format to report that an individual channel of a specific module has a problem – for
example, short circuit, wire break, overvoltage, and so on. This makes it very easy to diagnose the
problem right down to the wire level. The format for this type of diagnostic information is also
defined in the PROFIBUS standard.
Fail-Safe Operation

Fail-safe, or fail-to-known-state, operation is an optional feature available for implementation in a slave
device. This feature allows you to specify the states of slave outputs in the case of a bus failure, such as a
master failure, a break in the bus, and so on. On such an occurrence, non-failsafe devices usually clear
their outputs to zero, while fail-safe devices set their outputs to states that you define during the
configuration phase.

PROFIBUS –PA

PROFIBUS-PA (PROFIBUS-Process Automation) was developed for hazardous areas of process
plants. The three major differences between DP and PA are:
1. PA supports the use of devices in explosion-hazardous areas
2. Devices can be powered over the bus cable
3. The physical layer allows selection of the desired bus topology and longer bus segments.
UNIT V - INDUSTRIAL ETHERNET AND WIRELESS
COMMUNICATION
Industrial Ethernet : Introduction – 10Mbps Ethernet, 100Mbps Ethernet.
Radio and wireless communication : Introduction – components of radio link –
the radio spectrum and frequency allocation – radio modems.

Industrial Ethernet

The move to Ethernet on the factory floor has been growing steadily, as large enterprises move to
what is clearly now the network of choice on the factory floor. This migration to a single, standard based
technology is the natural progression to more open architectures, as manufacturers strive for more
efficient and cost-effective plant floor technologies. Backed by a wide array of plant equipment vendors,
industrial Ethernet–based systems allow manufacturers to standardize and consolidate different
manufacturing network architectures prevalent in many factories today. This natural convergence (already
completed in the front office a decade ago) provides process engineers with greater economies of scale,
vast technological innovation resources, and intelligent features that dramatically increase control over
the array of manufacturing devices linked by the control network. Widely deployed standard networking
technology, coupled with an open, accepted industrial application layer protocol—the Control and
Information Protocol, or CIP—provides tremendous flexibility and functionality to the shop floor in
Ethernet /IP solutions for a wide variety of applications. As a result, companies are able to reliably
transmit information intelligently and securely throughout the company. These capabilities will
eventually facilitate an end-to-end flow of information, letting manufacturers achieve tremendous
efficiencies and productivity.

Today, many manufacturing companies maintain separate networks. Over the years, these
networks took shape to respond to diverse information flows and control requirements.

1. The corporate information technology network supports traditional administrative functions, such as
human resources, accounting, and procurement. This network is based on the ubiquitous Ethernet
standard.
2. The control-level network connects control and monitoring devices, including programmable logic
controllers, PC-based controllers, I/O racks, and human-machine interfaces. This network, which has not
been Ethernet in the past, requires a router or, in most cases, a gateway to “translate” application-specific
protocols to Ethernet-based protocols. This translation allows information to pass between the control
network on the factory floor and the corporate network infrastructure, which otherwise are not connected.
3. The device-level network links the plant floor’s I/O devices, such as sensors (transducers, photoeyes,
flowmeters, etc.), and other automation and motion equipment, such as robotics, variable frequency
drives, and actuators. Interconnectivity among these devices has traditionally been achieved through
protocols that have been highly reliable but are somewhat limited in terms of data throughput and
interconnectivity with upper-level networks.These networks evolved to support different types and
streams of information as well as high noise and environmental conditions. There are, however, several
disadvantages to maintaining multiple networks based on various traditional fieldbuses.

Lower efficiency and productivity: Due to the nature of most traditional non- Ethernet control networks,
it is difficult to share data between the factory floor and the higher-level software entities—enterprise
resource planning (ERP), manufacturing execution systems—without a gateway that mediates between the
corporate network and the factory floor. This gateway limits implementing real-time scheduling,
monitoring, and maintenance systems.

Higher costs: Traditional manufacturing systems usually have their own Layer 1 (cabling and signaling)
transmission architectures. These interfaces tend to have higher costs and fewer suppliers.

Higher complexity and training duplication: Most organizations have already developed skills around
Ethernet to support their corporate network. Ethernet based control systems can use this expertise and
concentrate on developing networking skills around a single network architecture shared by multiple
control equipment vendors, reducing the level of complexity created by multiple networking platforms.

Bandwidth and network addressing limitations: Shared bandwidth available today for a large part of
traditional control networks is normally measured in kilobits per second, with the fastest implementations
sharing 12 megabits per second (Mbps) between multiple nodes. This amount of bandwidth limits the
potential to transmit real-time data and makes it virtually impossible to incorporate devices (such as
digital cameras) that demand significantly more throughput. Even if greater bandwidth were available,
there is no link or structure that can associate a given device to an IP addressing scheme, which limits
polling data (diagnostics, health monitoring, etc.) to within the network itself.

Industrial Ethernet provides a more robust solution that lowers costs, boosts productivity, and
streamlines system complexity. It is based on an inexpensive and universally deployed data link standard.
Ethernet has long been considered an alternative for data transmission on the plant floor. Its acceptance,
however, grew dramatically in recent years, thanks to impressive gains in reliability. For
example, network consortia such as the Open DeviceNet Vendor Association are including details for
industrially hardened Ethernet cable systems in their specification. While data-oriented networks, such
as ones used in offices, were designed to maximize bandwidth, control- and device-level networks were
optimized to obtain a very deterministic performance with very low latency. Ethernet’s early, collision-
oriented implementations made the level of determinism unacceptable on the plant floor. Today’s
Ethernet installations (100Mbps, full duplex in a switched network) show latency measured in
microseconds—many orders of magnitude better than most factory floor reliability requirements.
Therefore, a multiple network structure is difficult to justify as current Ethernet implementations meet
and/or exceed the throughput, reliability, resilience, and determinism required by the factory floor.
Different approaches have been used when deploying Ethernet-based architectures. One model brings the
advantages of an Ethernet-based architecture to a major portion of the control network. This approach has
been most prevalent in implementations whose I/O devices have very limited capabilities to deliver
information. The information gathered from production processes is optimized, and more intelligent
devices are used at the I/O level.

Industrial Ethernet’s advantages


Ethernet-based control applications are perfectly suited for challenges manufacturers face today and far
into the future. Simply put, industrial Ethernet is able to unite a company’s administrative, control-level,
and device-level networks into a single system.

Enhanced productivity and efficiency: With a more integrated network, mission-critical information can
flow freely and in real time throughout the company. As a result, manufacturers experience great gains in
collaboration, efficiency, and work quality. In addition, companies may choose to share data from their
plant floor process with other business partners, turning their businesses into e-businesses. With an
Ethernet network supporting an IP addressing scheme, a company has the ability to collaborate
electronically with suppliers, customers, and contractors. The partners can enjoy far better access to
certain information such as order status and shipment dates, which they can access in real time right from
their own desktops.
Reduced costs: A standard Ethernet interface brings to the factory floor the economies of scale enjoyed
today by hundreds of millions of Ethernet users, lowering costs and increasing the number of potential
equipment vendors and products for a particular manufacturing application. In some instances, potential
cost reductions can reach an order of magnitude.

Greater bandwidth and overall functionality: Ethernet delivers shared bandwidth far in excess of today’s
networking systems—typically at full duplex 10 Mbps to 100 Mbps using switching technologies that can
guarantee the throughput to all nodes hooked into the network. This capability allows networks to deliver
substantive, actionable information. An Ethernet network, for instance, allows transmitting detailed
control information in real time to a company’s ERP system. With enough bandwidth, additional
applications can even be added to the network, including those requiring simultaneous data, video,
and voice transmission.

Streamlined network structure: A single network eliminates the need to implement, support, and maintain
three or more separate systems, reducing overall network costs and improving information access.
Manufacturers no longer need to endure the high costs and limited functionality of maintaining multiple
separate networks. Industrial Ethernet has the potential to deliver a single, high quality network
throughout the entire enterprise, substantially lowering costs and boosting capabilities companywide.

Intelligent services added

Companies should be careful to select Ethernet products that add vital intelligence to their networks.
Industrial Ethernet has many advantages over proprietary technology that exists in many manufacturing
environments. Specifically, an Ethernet-based solution should provide additional services that make the
network highly functional, manageable, and secure. For industrial environments, these intelligent services
should include the following:

• Advanced security: A variety of mechanisms exist to secure Ethernet networks. Manufacturers should
deploy robust security mechanisms to prevent outside intrusion, as well as ensure that internal
communications remain private.
• Virtual LAN support: Provides security and isolation by virtually segmenting factory floor data from
all other data and users.
• Port security and access control lists: Operating at different layers, these provide granular, secure filtering.
This capability allows a network administrator to prevent/allow access to information based on its source,
destination, and type of application. Access can be based on physical parameters (for example, port
number or MAC address), IP address, or TCP/UDP port (essentially determining whether the packet is
from an application that should be running on the network).
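As an illustration of such filtering, the following sketch shows how a first-match rule list can permit or deny traffic based on fields such as MAC address, IP address, or TCP/UDP port. The rule and packet structures here are hypothetical, not any vendor's actual ACL syntax.

```python
# Minimal sketch of first-match access-control filtering, as described
# above. Rule fields and packet shape are illustrative assumptions.

def packet_allowed(packet, rules):
    """Return True if the first matching rule permits the packet.

    packet: dict with keys such as 'src_mac', 'src_ip', 'dst_ip', 'dst_port'.
    rules:  ordered list of (field, value, action) tuples; action is
            'permit' or 'deny'. A None value matches any packet.
    """
    for field, value, action in rules:
        if value is None or packet.get(field) == value:
            return action == "permit"
    return False  # implicit deny when no rule matches

# Example policy: allow traffic to the PLC's MODBUS port (502),
# deny everything else.
acl = [
    ("dst_port", 502, "permit"),
    (None, None, "deny"),
]
```

In practice such rules are evaluated in hardware on the switch; the sketch only shows the first-match-wins semantics.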
• Fast Spanning Tree: The Spanning Tree protocol permits the rapid convergence of a network. If a
problem occurs on a network node, a redundant alternate link will automatically come online. With
Fast Spanning Tree, networks converge very quickly, and the node becomes available again in less than 1
second. This is known as “subsecond convergence.” Previous Ethernet deployments lacked this feature
(leading to convergence times of more than 50 seconds).
• SNMP support: The simple network management protocol (SNMP) forms the basis of virtually every
major network management system. Intelligent Ethernet devices must support SNMP, allowing them to
interface with a company’s existing management system or with commercially available management
systems.
• Quality of service: An industrial Ethernet network may transmit many different types of traffic, from
routine data to mission-critical control information to bandwidth-hungry video or voice. The network
must be able to distinguish among and give priority to different types of traffic. Quality-of-service
mechanisms ensure all traffic receives the required bandwidth, priority, and latency so the network
runs smoothly and efficiently.
• IGMP snooping: Internet group management protocol (IGMP) snooping allows multicast traffic to be
easily managed in a switched network. Without this feature, the potential exists for flooding the control
network with multicast traffic. IGMP snooping becomes critical for control applications that use a
producer consumer model (a network element producing a stream of data used by one or more
consumers).

For manufacturers, networks must be available, reliable, and secure. At the same time, network
elements must provide highly intelligent features that allow companies to take advantage of the
flow of information available today within their networks. Industrial Ethernet fulfills these
requirements—and more.
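What IGMP snooping achieves can be sketched as follows: the switch learns which ports have joined a multicast group and forwards a producer's stream only to those ports instead of flooding every port. The class name and data structures below are illustrative, not part of the IGMP specification.

```python
# Illustrative model of IGMP snooping in a switch: multicast frames are
# delivered only to ports that have joined the group.

class SnoopingSwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.groups = {}  # multicast group address -> set of member ports

    def igmp_join(self, group, port):
        """Record that a consumer on this port joined the group."""
        self.groups.setdefault(group, set()).add(port)

    def forward(self, group, ingress_port):
        """Return the ports that receive a frame sent to this group."""
        members = self.groups.get(group, set())
        return sorted(members - {ingress_port})

sw = SnoopingSwitch(num_ports=8)
sw.igmp_join("239.1.1.1", 2)   # consumer on port 2 joins the group
sw.igmp_join("239.1.1.1", 5)   # consumer on port 5 joins the group
# A producer on port 1 sends to the group; only ports 2 and 5 receive it.
```

Without snooping, the `forward` step would have to return every port, which is exactly the flooding behavior the text warns against.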

The main differences between industrial Ethernet and fieldbuses are as follows:

a. Network topology
b. Data rates
c. Ethernet allows devices with different data rates to be mixed in the system
d. Number of devices on the network
e. Device addressing

There are three major issues to be addressed when using Ethernet in industrial networks.
1. It requires the definition of a common application layer that describes data
format, messaging, etc.
2. It requires industrial-grade components.
3. Industrial applications require determinism. Ethernet is not inherently deterministic or repeatable.

Several industrial Ethernet protocols exist, such as


i) MODBUS TCP/IP
ii) PROFInet
iii) EtherNet/IP
iv) High-Speed Ethernet FOUNDATION Fieldbus

MODBUS TCP/IP

It is designed to allow industrial equipment such as PLCs, PCs, HMI devices, field devices, and other
types of physical I/O devices to communicate over Ethernet.

OSI protocol layer        MODBUS TCP/IP protocol layers
------------------        -----------------------------
(Client application)      SCADA, HMI
Application layer         MODBUS messaging
Presentation layer     }
Session layer          }  Not used
Transport layer           TCP
Network layer             IP
Data link layer           Ethernet IEEE 802.3 (CSMA/CD)
Physical layer            Coaxial or twisted-pair cable

MODBUS Frame

Device address Function code Data Checksum
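The checksum field in the serial MODBUS frame above is a CRC-16 computed over the address, function code, and data bytes (reflected polynomial 0xA001, initial value 0xFFFF). A minimal bit-by-bit sketch:

```python
# CRC-16/MODBUS, as used in the checksum field of serial MODBUS frames.

def crc16_modbus(data: bytes) -> int:
    crc = 0xFFFF                 # initial value
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001   # reflected polynomial
            else:
                crc >>= 1
    return crc

# Standard check value for this CRC variant:
assert crc16_modbus(b"123456789") == 0x4B37
```

MODBUS TCP/IP drops this field entirely, since TCP provides its own error detection.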

MODBUS TCP/IP uses TCP/IP and Ethernet to carry the MODBUS messaging structure. In
MODBUS TCP/IP, data is transferred in hexadecimal format. The MODBUS master/slave architecture is
mapped onto TCP/IP’s client/server model in MODBUS TCP/IP. As TCP is a connection-oriented
protocol, a connection must be established before a message is transferred via MODBUS TCP/IP. Through
the same connection, any amount of user data can be transferred in either direction as required.
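As a sketch of the framing this implies, the fragment below builds a MODBUS TCP/IP request: an MBAP header (transaction identifier, protocol identifier 0, remaining length, unit identifier) followed by the MODBUS PDU. The register address and count are illustrative values, not from the text.

```python
import struct

# Hedged sketch of MODBUS TCP/IP framing: MBAP header + PDU, with no
# checksum because TCP provides error detection.

def mbap_request(transaction_id: int, unit_id: int, pdu: bytes) -> bytes:
    header = struct.pack(
        ">HHHB",
        transaction_id,
        0,              # protocol identifier: 0 identifies MODBUS
        len(pdu) + 1,   # bytes remaining after this field (unit id + PDU)
        unit_id,
    )
    return header + pdu

# PDU for "read holding registers" (function code 0x03):
# starting address 0, quantity 10 (illustrative values).
pdu = struct.pack(">BHH", 0x03, 0, 10)
frame = mbap_request(transaction_id=1, unit_id=0x11, pdu=pdu)
```

The client sends such frames over an established TCP connection (conventionally port 502) and matches replies to requests by transaction identifier.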

Advantages

1. High performance
2. Transfer rate above 1 kb/s
3. Response times in the range of milliseconds
4. Simple maintenance

PROFInet

It is PROFIBUS on Ethernet. It provides network solutions for factory and process automation, for
safety applications, and for clock-synchronized motion control applications. Communication is based on
the Ethernet, UDP, TCP, and IP protocols. There are two versions: PROFInet I/O and PROFInet Component
Based Automation (CBA). PROFInet I/O deals with the integration of simple distributed field devices and
time-critical applications. PROFInet CBA deals with the integration of component-based distributed
automation systems.

PROFInet communication

PROFInet uses three different communication channels to exchange data with programmable
controllers and other field devices. The channels are

a. The first is the standard channel, which uses the TCP/IP or UDP/IP protocol over Ethernet and is
used for parameterization, configuration, and read/write operations.
b. The second is the real-time communication channel, PROFInet SRT, which is used for communication
between programmable controllers and I/O systems.
c. The third is the isochronous real-time channel, known as the hard real-time communication channel,
which is used for clock-synchronized communication in motion control applications.
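The three channels above can be summarized as a dispatch table mapping the kind of traffic to the channel that carries it. The operation names below are illustrative labels, not identifiers from the PROFInet specification.

```python
# Hypothetical summary of PROFInet channel selection; operation names
# are illustrative assumptions.

CHANNELS = {
    "parameterization": "standard (TCP/IP or UDP/IP)",
    "configuration":    "standard (TCP/IP or UDP/IP)",
    "read_write":       "standard (TCP/IP or UDP/IP)",
    "cyclic_io":        "real time (PROFInet SRT)",
    "motion_control":   "isochronous real time (hard real-time channel)",
}

def channel_for(operation: str) -> str:
    """Return the PROFInet channel used for a given kind of operation."""
    return CHANNELS.get(operation, "standard (TCP/IP or UDP/IP)")
```

The point of the table is the design split: non-time-critical traffic rides on ordinary TCP/UDP, while cyclic I/O and motion control get progressively stricter real-time treatment.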

EtherNet/IP

EtherNet/IP was developed by the Allen-Bradley Company and is now maintained by the Open DeviceNet
Vendor Association (ODVA). EtherNet/IP provides an integrated system from the sensor-actuator
network to the controller and enterprise network. It is essentially the ControlNet/DeviceNet object
model on TCP/IP.

OSI protocol layer        EtherNet/IP protocol layers
------------------        ---------------------------
Application layer      }
Presentation layer     }  Control & Information Protocol (CIP)
Session layer          }
Transport layer           UDP/TCP
Network layer             IP
Data link layer           Ethernet IEEE 802.3 (CSMA/CD)
Physical layer            Coaxial or twisted-pair cable

The control portion of CIP is used for real-time data transport employing implicit messaging. The
information portion of CIP is used for the transport of less time-sensitive data such as diagnosis,
configuration, and management messages, employing explicit messaging connections. UDP datagrams
are used for scheduled polling of slaves by masters, for cyclic messaging of slave status, and for
event messaging.
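The transport split described above can be sketched as a small classifier: implicit (real-time, cyclic) messages go over UDP, while explicit (request/response) messages go over TCP. The message-kind names and helper function below are illustrative, not CIP identifiers.

```python
# Sketch of the EtherNet/IP transport split for CIP messaging.
# Message-kind labels are illustrative assumptions.

def transport_for(message_kind: str) -> str:
    """Map a kind of CIP message to the transport EtherNet/IP uses for it."""
    implicit = {"cyclic_io", "slave_status_poll", "event"}      # real-time control data
    explicit = {"diagnosis", "configuration", "management"}     # information data
    if message_kind in implicit:
        return "UDP"   # implicit messaging: scheduled, real-time datagrams
    if message_kind in explicit:
        return "TCP"   # explicit messaging: connection-oriented request/response
    raise ValueError(f"unknown message kind: {message_kind}")
```

The design choice mirrors the text: control traffic favors low, predictable latency over delivery guarantees, while information traffic tolerates latency but needs reliable delivery.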
