
Physical Layer Breakdown

OSI Layer Interactions


The following sequence outlines the basics of processing at each layer and explains how each lower layer provides a service to the next higher layer:

1. The physical layer (Layer 1) ensures bit synchronization and places the received binary pattern into a buffer (transfer across a medium). It notifies the data link layer that a frame has been received after decoding the incoming signal into a bit stream.

2. The data link layer (Layer 2) examines the frame check sequence (FCS) in the trailer to determine whether errors occurred in transmission (error detection). If an error has occurred, the frame is discarded. Some data link protocols perform error recovery, and some do not. The data link address(es) are examined so the receiving host can decide whether to process the data further. If the address is the receiving node's MAC address, processing continues (physical addressing). The data between the Layer 2 header and trailer is given to the Layer 3 software on the receiving end. The data link layer delivers the data across the local link.

3. The network layer (Layer 3) destination address is examined. If the address is the receiving host's address, processing continues (logical addressing), and the data after the Layer 3 header is given to the transport layer (Layer 4) software, providing the service of end-to-end delivery.

4. If error recovery was chosen as an option for the transport layer (Layer 4), the counters identifying this piece of data are encoded in the Layer 4 header along with acknowledgment information (error recovery). After error recovery and reordering of the incoming data, the data is given to the session layer.

5. The session layer (Layer 5) can be used to ensure that a series of messages is completed. For example, this data might be meaningless if the next four exchanges are not completed. The Layer 5 header includes fields signifying that this session flow is a middle flow, not an ending flow, in a transaction (transaction tracking). After the session layer ensures that all flows are completed, it passes the data after the Layer 5 header to the Layer 6 software.

6. The presentation layer (Layer 6) defines and manipulates data formats. For example, if the data is binary instead of character oriented, the header states that fact, and the receiver does not attempt to convert the data using the default ASCII character set of Host B. Typically, this type of header is included only for initialization flows and not with every message being transmitted (data formats). After the data formats have been converted, the data (after the Layer 6 header) is passed to the application layer (Layer 7) software.

7. The application layer (Layer 7) processes the final header and then examines the true end-user data. This header signifies agreement to operating parameters by the applications on the sending and receiving hosts. The headers are used to signal the values for all parameters; therefore, the header is typically sent and received only at application initialization time. For example, the screen size, colors supported, special characters, buffer sizes, and other parameters for terminal emulation are included in this header (application parameters).
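The receive-side checks at Layers 2 and 3 above (error detection, physical addressing, logical addressing) can be sketched as a toy Python model. The field layouts are simplified, not real Ethernet/IP headers, and the addresses are made-up examples for the receiving host.

```python
import zlib

# Assumed example addresses for the receiving host ("Host B")
MY_MAC = b"\x00\x11\x22\x33\x44\x55"
MY_IP = "10.0.0.2"

def layer2_receive(frame: bytes):
    """Layer 2: verify the FCS trailer, then check the destination MAC.
    Returns the Layer 3 payload, or None if the frame is discarded."""
    dest_mac, body, fcs = frame[:6], frame[6:-4], frame[-4:]
    if zlib.crc32(frame[:-4]).to_bytes(4, "big") != fcs:
        return None                    # error detection: damaged frame
    if dest_mac != MY_MAC:
        return None                    # physical addressing: not for this node
    return body                        # hand the payload up to Layer 3

def layer3_receive(packet: bytes):
    """Layer 3: check the destination logical address (logical addressing)."""
    dest_ip, data = packet[:8].decode(), packet[8:]
    return data if dest_ip == MY_IP else None

payload = b"10.0.0.2" + b"segment-bytes"          # fake L3 header + L4 data
frame = MY_MAC + payload
frame += zlib.crc32(frame).to_bytes(4, "big")     # sender appends the FCS

print(layer3_receive(layer2_receive(frame)))      # b'segment-bytes'
```

A corrupted or misaddressed frame returns None at Layer 2 and never reaches Layer 3, mirroring the discard behavior described in step 2.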

OSI Reference Model Devices, Protocols, and PDUs


Application, Presentation, Session (L5-7)
  Protocols and Specs: Telnet, HTTP, FTP, SMTP, POP3, VoIP, SNMP
  Devices: Firewall, IDS
  PDU: User Data

Transport (L4)
  Protocols and Specs: TCP, UDP
  Devices: Firewall, IDS
  PDU: Segments

Network (L3)
  Protocols and Specs: IP
  Devices: Router
  PDU: Packets

Data Link (L2)
  Protocols and Specs: Ethernet (IEEE 802.3), HDLC, Frame Relay, PPP
  Devices: LAN Switch, WAP, Cable Modem, DSL Modem
  PDU: Frames

Physical (L1)
  Protocols and Specs: RJ-45, EIA/TIA-232, V.35, Ethernet (IEEE 802.3)
  Devices: LAN Hub, Repeater
  PDU: Bits

TCP/IP Model vs. OSI Model

TCP/IP and the DoD Model

The TCP/IP Protocol Suite

Key Concepts of Host-to-Host Protocols


TCP:
- Sequenced
- Reliable
- Connection-oriented
- Virtual circuit
- Acknowledgments
- Windowing flow control

UDP:
- Unsequenced
- Unreliable
- Connectionless
- Low overhead
- No acknowledgments
- No windowing or flow control
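TCP's windowing flow control can be illustrated with a toy simulation (not real TCP): the sender may have at most `window` unacknowledged segments in flight, and the receiver's acknowledgment for the oldest outstanding segment frees a slot for the next send.

```python
def send_with_window(segments, window=3):
    """Simulate in-order sends where at most `window` segments
    are outstanding (sent but not yet acknowledged)."""
    in_flight, log = [], []
    for seq in range(len(segments)):
        if len(in_flight) == window:             # window full: wait for an ACK
            log.append(f"ACK {in_flight.pop(0) + 1}")
        in_flight.append(seq)
        log.append(f"SEND {seq}")
    for seq in in_flight:                        # drain the remaining ACKs
        log.append(f"ACK {seq + 1}")
    return log

for event in send_with_window(["seg0", "seg1", "seg2", "seg3"], window=2):
    print(event)
```

The ACK number is the next sequence number expected, which is how TCP acknowledgments work; everything else here (one ACK per segment, no loss, no retransmission) is a simplification.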

Port Numbers

Key protocols and port-number examples for TCP and UDP:

TCP: Telnet = 23, SMTP = 25, HTTP = 80, FTP = 21, DNS = 53, HTTPS = 443
UDP: SNMP = 161, TFTP = 69, DNS = 53
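The well-known ports above can be expressed as a simple lookup table; note that DNS appears under both TCP and UDP, which is why the protocol is part of the key.

```python
# Well-known ports from the list above, keyed by (protocol, port).
WELL_KNOWN_PORTS = {
    ("tcp", 23): "Telnet", ("tcp", 25): "SMTP", ("tcp", 80): "HTTP",
    ("tcp", 21): "FTP",    ("tcp", 53): "DNS",  ("tcp", 443): "HTTPS",
    ("udp", 161): "SNMP",  ("udp", 69): "TFTP", ("udp", 53): "DNS",
}

def service_name(proto: str, port: int) -> str:
    """Return the service name for a protocol/port pair, or 'unknown'."""
    return WELL_KNOWN_PORTS.get((proto.lower(), port), "unknown")

print(service_name("TCP", 443))  # HTTPS
print(service_name("UDP", 69))   # TFTP
```

Python's standard library exposes a similar lookup via `socket.getservbyport`, backed by the operating system's services database.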

Data Encapsulation
1. User information creates the data (OSI Layers 5-7).
2. Data is converted to segments (OSI Layer 4).
3. Segments are converted to packets, or datagrams (OSI Layer 3).
4. Packets, or datagrams, are converted to frames (OSI Layer 2).
5. Frames are converted to bits (OSI Layer 1).

Each layer of the OSI model can be discussed in terms of its Layer N PDU (Protocol Data Unit):

Layers 5-7 PDU (application, presentation, session): User data
Layer 4 PDU (transport): Segments
Layer 3 PDU (network): Packets
Layer 2 PDU (data link): Frames
Layer 1 PDU (physical): Bits
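The encapsulation sequence can be sketched in a few lines; the header strings here are placeholders, not real protocol headers.

```python
def encapsulate(user_data: str) -> str:
    """Wrap user data with placeholder headers, one per layer."""
    segment = "TCP|" + user_data          # Layer 4: segment
    packet = "IP|" + segment              # Layer 3: packet
    frame = "ETH|" + packet + "|FCS"      # Layer 2: frame (header + trailer)
    return frame                          # Layer 1 sends it as bits

frame = encapsulate("hello")
bits = "".join(f"{b:08b}" for b in frame.encode())  # Layer 1 view
print(frame)  # ETH|IP|TCP|hello|FCS
```

Each layer only prepends (and, at Layer 2, appends) its own control information; the payload it receives from the layer above is carried through untouched, which is the essence of encapsulation.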

Cisco Hierarchical Model


Defined by Cisco to simplify the design, implementation, and maintenance of responsive, scalable, reliable, and cost-effective networks.

Core layer

Also referred to as the backbone layer. It is responsible for transferring large amounts of traffic reliably and quickly, switching traffic as fast as possible. A failure in the core can affect many users; hence, fault tolerance is the main concern at this layer. The core layer should be designed for high reliability, high availability, high speed, and low convergence. Do not support workgroup access or implement access lists, VLAN routing, or packet filtering at this layer, as these can introduce latency.

Distribution layer

Sits between the access and core layers. The distribution layer (routers) handles traffic for remote services (between networks) and is where policies such as access lists, packet filtering, and routing between VLANs are implemented.

Access layer

Also referred to as the desktop layer. This is where end systems gain access to the network. The access layer (switches) handles traffic for local services (within a network), whereas the distribution layer (routers) handles traffic for remote services. It mainly creates separate collision domains. It also defines the access control policies for accessing the access and distribution layers.
In a hierarchical network, traffic on a lower layer is only allowed to be forwarded to the upper layer after it meets some clearly defined criteria. Filtering rules and operations restrict unnecessary traffic from traversing the entire network, which results in a more responsive (lower network congestion), scalable (easy to grow), and reliable (higher availability) network.
384 kilobits per second (Kbps) is the recommended maximum bandwidth for uncompressed data streams.

The three categories of LAN transmission are as follows:
- Unicast: One-to-one transmission
- Multicast: One-to-many transmission
- Broadcast: One-to-all transmission

The four primary devices used in LANs include the following:
- Hubs: Hubs operate at the physical layer (Layer 1) of the OSI model and are essentially multiport repeaters, repeating signals out all hub ports.
- Bridges: Bridges create multiple collision domains. Bridges operate at the data link layer (Layer 2) of the OSI model and forward data frames based on the destination MAC address. Bridges utilize the spanning tree algorithm for path determination.
- Switches: LAN switches are essentially multiport bridges. LAN switches are used to connect shared segments (hubs) and to provide frame-level filtering as well as dedicated port speed to end users. LAN switches are also used to create virtual LANs (VLANs). Like bridges, switches use the spanning tree algorithm for path determination.
- Routers: Routers are typically found at the edge of a LAN, interfacing with a WAN, or in more complex LAN environments. Routers operate at the network layer (Layer 3) of the OSI model.

The four types of bridges are as follows:
- Transparent bridges: These create two or more LAN segments (collision domains). They are transparent to end devices.
- Source-route bridges: Frames are sent from the source end device with the source-to-destination route, or path, included.
- Source-route translational (mixed-media) bridges: These are used when connecting networks of two different bridging types (transparent and source-route) or media types, such as Ethernet and Token Ring.
- Source-route transparent bridges: These will either source-route or transparently bridge a frame, depending on the routing information indicator (RII) field.

Ethernet Frame Fields

- Preamble (PRE): Consists of 7 bytes of 10101010. The preamble is an alternating pattern of ones and zeros that tells receiving hosts that a frame is coming. It provides a means to synchronize the frame-reception portions of receiving physical layers with the incoming bit stream.
- Start-of-frame (SOF) delimiter: Consists of 1 byte of 10101011. The start-of-frame is an alternating pattern of ones and zeros, ending with two consecutive 1 bits indicating that the next bit in the data stream is the left-most bit in the left-most byte of the destination address.
- Destination address (DA): Consists of 6 bytes. The destination address field identifies which host(s) should receive the frame. The left-most bit in the destination address field indicates whether the address is an individual, or unicast, address (indicated by a 0) or a group, or multicast, address (indicated by a 1). The second bit from the left indicates whether the destination address is globally administered (indicated by a 0) or locally administered (indicated by a 1). The remaining 46 bits are a uniquely assigned value that identifies a single host (unicast), a defined group of hosts (multicast), or all hosts on the network (broadcast).
- Source address (SA): Consists of 6 bytes. The source address field identifies the sending host. The source address is always an individual address, and the left-most bit in the source address field is always 0. Exception: this bit is the RII bit used in source-route bridging.
- Length/Type: Consists of 2 bytes. This field indicates either the number of LLC data bytes contained in the data field of the frame or the frame type ID if the frame is an Ethernet II frame and not in 802.3 format. If the length/type field value is less than or equal to 1500, the number of LLC bytes in the data field is equal to the length/type field value. If the length/type field value is 1536 or greater, the frame is an Ethernet II frame, and the length/type field value identifies the particular type of frame being sent or received.
- Data: A sequence of n bytes of any value, where n is less than or equal to 1500 (1500 bytes = 12,000 bits). If the length of the data field is less than 46, the data field must be extended by adding a filler, or pad, sufficient to bring the data field length to 46 bytes.
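The Length/Type rule and the 46-byte minimum data field can be sketched as follows; this is a simplification (real framing has more cases, and values between 1501 and 1535 are left undefined here).

```python
def interpret_length_type(value: int) -> str:
    """Classify a frame by its 2-byte Length/Type field value."""
    if value <= 1500:
        return f"802.3 frame, {value} LLC data bytes"
    if value >= 1536:
        return f"Ethernet II frame, EtherType 0x{value:04x}"
    return "undefined"

def pad_data_field(data: bytes) -> bytes:
    """Pad the data field with zero bytes up to the 46-byte minimum."""
    return data + b"\x00" * max(0, 46 - len(data))

print(interpret_length_type(0x0800))  # Ethernet II frame, EtherType 0x0800
print(len(pad_data_field(b"hi")))     # 46
```

0x0800 (2048) is the EtherType for IPv4, which is why it falls in the "1536 or greater" range rather than being read as a length.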

- Frame check sequence (FCS): Consists of 4 bytes. This sequence contains a 32-bit cyclic redundancy check (CRC) value, which is created by the sending MAC and is recalculated by the receiving MAC to verify data integrity by checking for damaged frames. The FCS is generated over the DA, SA, length/type, and data fields.
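The sender/receiver FCS check can be illustrated with Python's `zlib.crc32`, which uses the same CRC-32 polynomial as Ethernet; the bit- and byte-ordering details of real hardware are glossed over in this sketch.

```python
import zlib

def add_fcs(covered_fields: bytes) -> bytes:
    """Sender side: append a CRC-32 over the covered fields."""
    return covered_fields + zlib.crc32(covered_fields).to_bytes(4, "little")

def fcs_ok(frame: bytes) -> bool:
    """Receiver side: recompute the CRC-32 and compare with the trailer."""
    return zlib.crc32(frame[:-4]).to_bytes(4, "little") == frame[-4:]

# DA + SA + Length/Type + data, then the FCS trailer
frame = add_fcs(b"\xff" * 6 + b"\x00\x11\x22\x33\x44\x55" + b"\x08\x00" + b"payload")
print(fcs_ok(frame))                              # True
corrupted = frame[:-5] + b"X" + frame[-4:]        # flip one data byte
print(fcs_ok(corrupted))                          # False
```

Any single corrupted byte changes the recomputed CRC, so the receiving MAC discards the frame, which is the error-detection behavior described in the data link layer steps earlier.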

Half-Duplex Transmission: The CSMA/CD Access Method

The Carrier Sense Multiple Access with Collision Detect (CSMA/CD) protocol was originally developed as a means by which two or more hosts could share a common medium in a switchless environment. In this shared environment, the CSMA/CD protocol does not require central arbitration, access tokens, or assigned time slots to indicate when a host is allowed to transmit. Based on sensing a data carrier on the network medium, each Ethernet MAC adapter determines for itself when it is allowed to send a frame. The CSMA/CD access rules are summarized by the protocol's acronym:

- Carrier sense (CS): Each Ethernet LAN-attached host continuously listens for traffic on the medium to determine when gaps between frame transmissions occur.
- Multiple access (MA): LAN-attached hosts can begin transmitting any time they detect that the network is quiet, meaning that no traffic is travelling across the wire.
- Collision detect (CD): If two or more LAN-attached hosts in the same CSMA/CD network, or collision domain, begin transmitting at approximately the same time, the bit streams from the transmitting hosts will interfere (collide) with each other, and both transmissions will be unreadable. If that happens, each transmitting host must be capable of detecting that a collision has occurred before it has finished sending its frame. Each host must stop transmitting as soon as it has detected the collision and then must wait a random length of time, determined by a backoff algorithm, before attempting to retransmit the frame. In this event, each transmitting host transmits a 32-bit jam signal alerting all LAN-attached hosts of the collision before running the backoff algorithm.
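The backoff algorithm mentioned above can be sketched as truncated binary exponential backoff; this is a toy model, not driver code, and the slot-time constant is the 10 Mbps Ethernet value.

```python
import random

SLOT_TIME_US = 51.2   # 512 bit times at 10 Mbps

def backoff_delay(collision_count: int) -> float:
    """Wait time (microseconds) before the next retransmission attempt.

    After the n-th consecutive collision, a host picks a random number
    of slot times in [0, 2**min(n, 10) - 1]; after 16 failed attempts
    the frame is dropped.
    """
    if collision_count > 16:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1) * SLOT_TIME_US
```

After the first collision a host waits 0 or 1 slot times; the range doubles with each further collision, which spreads retransmissions out as contention grows.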
CAUTION: Collisions will be discussed in greater detail later, but 40 percent congestion is the average maximum percentage you want to see on an Ethernet collision domain.

The maximum time required to detect a collision (the collision window, or "slot time") is approximately equal to twice the signal propagation time between the two most distant hosts on the network:

    slot time ≈ 2 × propagation time(H1, H2)

where H1 and H2 are the two most distant hosts on the network. Slot time is the maximum time that can elapse between the first and last network host's receipt of a frame. To ensure that a network host, or node, can determine whether the frame it transmitted has collided with another frame, a frame must be longer than the number of bits that can be transmitted in the slot time. In 10 Mbps Ethernet networks, this time interval is 51.2 microseconds, which is the time needed to transmit 512 bits. This means that both the minimum frame length and the maximum collision diameter are directly related to the slot time: longer minimum frame lengths translate to longer slot times and larger collision diameters; shorter minimum frame lengths correspond to shorter slot times and smaller collision diameters.
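The 512-bit figure can be checked directly from the transmission rate:

```python
# 512 bits at 10 Mbps gives the slot time quoted above.
MIN_FRAME_BITS = 512
RATE_BPS = 10_000_000          # 10 Mbps

slot_time_s = MIN_FRAME_BITS / RATE_BPS
print(slot_time_s * 1e6)       # ≈ 51.2 microseconds
```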

The 5-4-3 Rule

The "5-4-3" rule states that an Ethernet network should have no more than five segments connected by four repeaters, with no more than three of those segments populated with hosts. This rule is enforced by network diameter and bit times.

The time it takes for a signal to propagate across the network medium is essentially constant for all transmission rates, but the time required to transmit a frame is inversely related to the transmission rate. At 100 Mbps, a minimum-length frame can be transmitted in approximately one-tenth of the defined slot time, and the transmitting hosts would not likely detect any collisions that could occur during this transmission. Therefore, the maximum network diameter specified for 10 Mbps networks could not be used for 100 Mbps (Fast Ethernet) networks. The solution for Fast Ethernet was to reduce the maximum network diameter by approximately a factor of 10, to a little more than 200 m.

Runt frame: if two hosts are so close to each other that an adapter sends only 96 bits before the collision is detected, a runt frame results. An Ethernet frame must be at least 512 bits (64 bytes) long: 14 bytes of header plus 46 bytes of data plus 4 bytes of CRC. Ethernet frames must be at least 64 bytes in length because the farther apart two nodes are, the longer it takes for a frame sent by one to reach the other, and the network is vulnerable to a collision during this time.
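The factor-of-10 diameter reduction for Fast Ethernet follows from the slot-time bound. The calculation below is a rough theoretical ceiling: the signal speed (~2e8 m/s in copper) is an assumed round number, and real limits are much smaller because repeaters and adapters add delay.

```python
PROP_SPEED_M_S = 2e8    # assumed signal speed in copper, m/s
MIN_FRAME_BITS = 512

def max_diameter_m(rate_bps: float) -> float:
    """Upper bound on collision diameter: the round trip between the
    two most distant hosts must fit within the slot time."""
    slot_time_s = MIN_FRAME_BITS / rate_bps
    return slot_time_s / 2 * PROP_SPEED_M_S

print(max_diameter_m(10e6))    # theoretical bound for 10 Mbps
print(max_diameter_m(100e6))   # 10x smaller for Fast Ethernet
```

Because the slot time scales with 1/rate while propagation speed stays fixed, a tenfold increase in rate forces a tenfold reduction in diameter, matching the move to a little more than 200 m described above.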

Network diameter is directly related to frame size as well as to the bit time.

RIP uses hop count as its metric; the maximum number of hops is 15. A hop count of 16 is considered infinity (unreachable).
