
1. Explain the access layer, distribution layer & core layer in the hierarchical design model of Cisco?

The three-layer hierarchical design model helps organize the network. This model divides the network functionality into three distinct layers: access layer, distribution layer and core layer. Each layer is designed to meet specific functions.

The access layer performs the following functions:
- Provides a connection point for end-user devices to the network.
- Allows multiple hosts to connect to other hosts through a network device such as a switch.
- Exists on the same logical network.
- Confines traffic destined for a host on the same network to the access layer.
- Passes traffic to the distribution layer for delivery if the message is destined for a host on another network.

The distribution layer performs the following functions:
- Provides a connection point for separate local networks.
- Ensures that traffic between hosts on the same local network stays local.
- Passes on traffic destined for other networks.
- Filters incoming and outgoing traffic for security and traffic management purposes.
- Contains more powerful switches and routers than the access layer.
- Passes data to the core layer for delivery to a remote network if the destination network is not directly connected.

The core layer performs the following functions:
- Provides a high-speed backbone layer with redundant (backup) connections.
- Transports large amounts of data between multiple end networks.
- Includes very powerful, high-speed switches and routers.
- Transports data quickly and reliably.

2. Explain Virtual Private Network (VPN)?

A virtual private network (VPN) is a computer network that uses a public telecommunication infrastructure, such as the Internet, to provide remote offices or individual users secure access to their organization's network. It avoids an expensive system of owned or leased lines that can be used by only one organization; a much easier choice is to encrypt all traffic moving between the remote site and the enterprise network using a VPN. VPNs are often described as tunnels. Consider the analogy of an underground tunnel versus an open road between two points: anything that uses the underground tunnel to travel between the two points is surrounded and protected from view. The underground tunnel represents the VPN encapsulation and virtual tunnel. When using a VPN, a virtual tunnel is created by linking the source and destination addresses. All data flowing between the source and destination is encrypted and encapsulated using a secure protocol such as IPsec. This secure packet is transmitted across the network. When it arrives at the receiving end, it is decapsulated and decrypted. VPNs are a client/server application; therefore, telecommuters must install the VPN client on their computers to form a secure connection with the enterprise network.

3. Explain the different important types of documentation used in the enterprise network?
The different important types of documentation used in the enterprise network are:

1. Business Continuity Plan
The business continuity plan (BCP) identifies the steps to be taken to continue business operation in the event of a natural or man-made disaster. The BCP helps to ensure business operations by defining procedures that must take place when a disaster strikes. IT support can include:
- Off-site storage of backup data
- Alternate IT processing centers
- Redundant communication links

2. Business Security Plan
The business security plan (BSP) prevents unauthorized access to organizational resources and assets by defining security policies. The BSP includes physical, system and organizational control measures. The overall security plan must include an IT portion that describes how an organization protects its network and information assets. The IT security plan can contain policies related to:
- User authentication
- Permissible software
- Remote access
- Intrusion monitoring
- Incident handling

3. Network Maintenance Plan
The network maintenance plan (NMP) minimizes downtime by defining hardware and software maintenance procedures. The NMP ensures business continuity by keeping the network up and running efficiently. Network maintenance must be scheduled during specific time periods, usually nights and weekends, to minimize the impact on business operations. The maintenance plan can contain:
- Maintenance time periods
- Scheduled downtime
- Staff on-call responsibilities
- Equipment and software to be maintained (OS, IOS, services)
- Network performance monitoring

4. Service-Level Agreement
A service-level agreement (SLA) ensures service parameters by defining the required level of service provider performance. The SLA is a contractual agreement between the customer and a service provider or ISP, specifying items such as network availability and service response time. An SLA can include:
- Connection speeds/bandwidth
- Network uptime
- Network performance monitoring
- Problem resolution response time
- On-call responsibilities

4. Compare & Contrast router and switch?

Router
One important device in the distribution layer of an enterprise network is a router. Without the routing process, packets could not leave the local network. The router provides access to other private networks as well as to the Internet. All hosts on a local network specify the IP address of the local router interface in their IP configuration; this router interface is the default gateway. Routers play a critical role in networking by interconnecting multiple sites within an enterprise network, providing redundant paths, and connecting ISPs on the Internet. Routers can also act as a translator between different media types and protocols. For example, a router can re-encapsulate packets from an Ethernet encapsulation to a serial encapsulation. Routers use the network portion of the destination IP address to route packets to the proper destination, and they select an alternate path if a link goes down or traffic is congested. Routers also serve the following other beneficial functions:
- Provide broadcast containment: Routers in the distribution layer limit broadcasts to the local network where they need to be heard. Although broadcasts are necessary, too many hosts connected on the same local network generate excessive broadcast traffic and slow the network.
- Connect remote locations: Routers in the distribution layer interconnect local networks at various locations of an organization that are geographically separated.
- Group users logically by application or department: Routers in the distribution layer logically group users, such as departments within a company, who have common needs or require access to the same resources.
- Provide enhanced security: Routers in the distribution layer separate and protect certain groups of computers where confidential information resides. Routers also hide the addresses of internal computers from the outside world to help prevent attacks and control who gets into or out of the local network.

Two methods exist for connecting a PC to a network device for configuration and monitoring tasks: out-of-band and in-band management.

Switch
Although all three layers of the hierarchical design model contain switches and routers, the access layer generally has more switches. The main function of switches is to connect hosts such as end-user workstations, servers, IP phones, web cameras, access points, and routers; this means that there are many more switches in an organization than routers. Switches come in many form factors:
- Small standalone models sit on a desk or mount on a wall.
- Integrated routers include a switch built into the chassis that is rack mounted.
- High-end switches mount into a rack and are often a chassis-and-blade design that allows more blades to be added as the number of users increases.

High-end enterprise and service provider switches support ports of varying speeds, from 100 Mb/s to 10 Gb/s. An enterprise switch in an MDF connects other switches from IDFs using Gigabit fiber or copper cable. An IDF switch typically needs both RJ-45 Fast Ethernet ports for device connectivity and at least one Gigabit port (copper or fiber) to uplink to the MDF switch. Some high-end switches have modular ports that can be changed if needed. For example, it might be necessary to switch from multimode fiber to single-mode fiber, which would require a different port. Like routers, switch ports are designated using the controller/port or controller/slot/port convention. For example, using the controller/port convention, the first Fast Ethernet port on a switch is numbered Fa0/1 (controller 0 and port 1), and the second is Fa0/2. The first port on a switch that uses controller/slot/port is Fa0/0/1. Gigabit ports are designated Gi0/1, Gi0/2, and so on. Port density on a switch is an important factor. In an enterprise environment where hundreds or thousands of users need switch connections, a switch with a 1RU height and 48 ports has a higher port density than a 1RU 24-port switch. The same in-band and out-of-band management techniques that apply to routers also apply to switch configuration.

5. Explain router commands: Show running-config Show version Show interfaces

1. Full Command: show running-config
Abbreviation: sh run
Purpose: Displays the current configuration running in RAM, including the host name, passwords, interface IP addresses, activated routing protocols, and DHCP and NAT configuration. Must be issued in privileged EXEC mode.

2. Full Command: show version
Abbreviation: sh ver
Purpose: Displays the IOS version, ROM version, router uptime, system image file name, boot method, number and type of interfaces installed, and amount of RAM, NVRAM, and flash, as well as the configuration register.

3. Full Command: show interfaces
Abbreviation: sh int (for a single interface, e.g. sh int fa0/0)
Purpose: Displays one or all interfaces with line (protocol) status, bandwidth, delay, reliability, encapsulation, duplex, and I/O statistics.

6. Explain & compare different types of switching methods used in switch. Which one is best & Why?
The different types of switching methods are:

1. Store-and-Forward
In store-and-forward switching, the entire frame is read and stored in memory before being sent to the destination device. The switch checks the integrity of the bits in the frame by recalculating the cyclic redundancy check (CRC) value. If the calculated CRC value is the same as the CRC field value in the frame, the switch forwards the frame out the destination port; if the values do not match, the switch does not forward the frame. The CRC value is located within the frame check sequence (FCS) field of an Ethernet frame.

2. Cut-Through Switching
The other major method is cut-through switching, which subdivides into two further methods: fast-forward switching and fragment-free switching. In both of these methods the switch forwards the frame before all of it is received. Because the switch does not calculate or check the CRC value before forwarding the frame, damaged frames can be switched.
a) Fast-forward is the fastest method of switching. The switch forwards the frame out the destination port as soon as it reads the destination MAC address. This method has the lowest latency but also forwards collision fragments and damaged frames. It works best in a stable network with few errors.
b) In fragment-free switching, the switch reads the first 64 bytes of the frame before it begins to forward it out the destination port. The shortest valid Ethernet frame is 64 bytes; smaller frames are usually the result of a collision and are called runts. Checking the first 64 bytes ensures that the switch does not forward collision fragments.

Store-and-forward has the highest latency and fast-forward the lowest; the latency introduced by fragment-free switching falls between the two. The fragment-free switching method works best in an environment where many collisions occur. In a properly constructed switched network, collisions are not a problem; therefore, fast-forward switching would be the preferred method. Each of these methods has distinct advantages as well as some disadvantages. With recent advances in the speed of switching hardware, store-and-forward techniques have become the standard in many networked environments.
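The integrity check that separates store-and-forward from cut-through switching can be sketched in a few lines of Python. This is a simplified model, not switch firmware: zlib's CRC-32 stands in for the Ethernet FCS calculation, and a frame is just a byte string.

```python
import zlib

def build_frame(payload: bytes) -> bytes:
    """Append a CRC-32 checksum, standing in for the Ethernet FCS field."""
    fcs = zlib.crc32(payload).to_bytes(4, "little")
    return payload + fcs

def store_and_forward(frame: bytes) -> bool:
    """Forward only if the recomputed CRC matches the FCS.
    A cut-through switch would skip this check and forward immediately."""
    payload, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "little") == fcs

good = build_frame(b"hello, network")
bad = good[:-1] + bytes([good[-1] ^ 0xFF])  # corrupt the last FCS byte
print(store_and_forward(good))  # True: frame forwarded
print(store_and_forward(bad))   # False: frame dropped
```

Because the whole frame must be buffered before the check can run, store-and-forward trades latency for the guarantee that damaged frames are never propagated.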

7. What is spanning tree protocol? Explain the BPDU (Bridge Protocol Data Unit) structure with all the fields?
Spanning Tree Protocol (STP) provides a mechanism for disabling redundant links in a switched network. STP provides the redundancy required for reliability without creating switching loops. STP is an open standard protocol, used in a switched environment to create a loop-free logical topology. STP is relatively self-sufficient and requires little configuration. When switches are first powered up with STP enabled, they check the switched network for the existence of loops. Switches detecting a potential loop block some of the connecting ports, while leaving other ports active to forward frames. STP defines a tree that spans all the switches in an extended star switched network. Switches are constantly checking the network to ensure that no loops exist and that all ports function as required. To prevent switching loops, STP does the following:
- Forces certain interfaces into a blocked state
- Leaves other interfaces in a forwarding state
- Reconfigures the network by activating the appropriate path, if the forwarding path becomes unavailable

Bridge Protocol Data Units (BPDU) are frames that are multicast every 2 seconds to all other switches. The root bridge communicates with the other switches using BPDUs. BPDUs contain information such as:
- Identity of the source switch
- Identity of the source port
- Cumulative cost to the root bridge
- Value of the aging timers
- Value of the hello timer

The structure of the BPDU and its individual fields are as follows:

Protocol Identifier (bytes 1-2): Always 0.
Version (byte 3): Always 0.
Message Type (byte 4): Specifies the type of BPDU (Configuration or Topology Change Notification) that the frame contains.
Flags (byte 5): Used to handle changes in the active topology.
Root ID (bytes 6-13): Contains the bridge ID of the root bridge. This value is the same for all BPDUs in a bridged network after the network has converged.
Root Path Cost (bytes 14-17): Reflects the cumulative cost of all links leading to the root bridge.
Bridge ID (bytes 18-25): Contains the BID of the bridge that created the current BPDU.
Port ID (bytes 26-27): Contains a unique value for every port. For example, this field contains the value 0x8001 for port 1/1, whereas port 1/2 contains 0x8002.
Message Age (bytes 28-29): Records the time since the root bridge originally generated the information from which the current BPDU is derived.
Max Age (bytes 30-31): Shows the maximum time that a BPDU is saved. This influences the bridge table aging timer during the Topology Change Notification process.
Hello Time (bytes 32-33): Shows the time between periodic configuration BPDUs.
Forward Delay (bytes 34-35): Shows the time spent in the listening and learning states. This influences timers during the Topology Change Notification process.
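The 35-byte layout above maps directly onto a struct format string. The sketch below is illustrative only: the field values are made up rather than taken from a captured BPDU, and the on-wire encoding of the timer fields is not modeled.

```python
import struct

# Network byte order; field widths follow the BPDU table above (35 bytes).
BPDU_FORMAT = "!HBBB8sI8sHHHHH"
FIELDS = ["protocol_id", "version", "message_type", "flags", "root_id",
          "root_path_cost", "bridge_id", "port_id", "message_age",
          "max_age", "hello_time", "forward_delay"]

def parse_bpdu(data: bytes) -> dict:
    """Unpack a 35-byte configuration BPDU into named fields."""
    return dict(zip(FIELDS, struct.unpack(BPDU_FORMAT, data)))

# Illustrative values: an 8-byte ID = 2-byte priority + 6-byte MAC,
# and the default timers (hello 2 s, max age 20 s, forward delay 15 s).
bid = (32768).to_bytes(2, "big") + bytes.fromhex("aa11bb22cc33")
raw = struct.pack(BPDU_FORMAT, 0, 0, 0, 0, bid, 0, bid, 0x8001, 0, 20, 2, 15)

bpdu = parse_bpdu(raw)
print(struct.calcsize(BPDU_FORMAT))  # 35
print(hex(bpdu["port_id"]))          # 0x8001
```

Note how the 8-byte Root ID and Bridge ID fields each contain a complete BID (priority plus MAC), matching the table's byte offsets.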

8. Explain all the states of a switch port?

The different states of a switch port are as follows:

1. Blocking
When a switch powers on, it first goes into a blocking state to immediately prevent the formation of a loop. In this state, the bridge receives BPDUs but discards all data frames. The bridge does not learn new addresses, and the port status light is steady amber. This phase can last up to 20 seconds before the port transitions to the listening state.

2. Listening
In the listening state, the switch continues to listen to BPDUs but does not forward data frames or learn addresses. During this state, the switch determines which ports can forward frames without creating a loop. If enabling the port would create a loop, the switch returns the port to the blocking state. If no loop would be created, the switch transitions the port to the learning state. It takes 15 seconds to transition to the learning state, during which time the port indicator flashes amber.

3. Learning
In the learning state, the switch receives and processes both BPDUs and data frames. It does not forward data frames while in this state but does learn MAC addresses from the data received. The port LED continues to flash amber during this state, which takes 15 seconds to complete before transitioning to the forwarding state.

4. Forwarding
In the forwarding state, the switch continues to process BPDUs and learn MAC addresses. It also now forwards data frames on the network. While in this state, the port status LED blinks green.

5. Disabled
If the administrator shuts down a port, it is considered to be disabled. The port status indicator on a disabled port is off.
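Tallying the timers above shows why a port can take up to 50 seconds to begin forwarding after power-on. A minimal sketch:

```python
# Default STP timers (seconds) on the path from blocking to forwarding,
# as described above: up to 20 s blocking, then 15 s each of listening
# and learning before the port forwards data frames.
STP_STATES = [
    ("blocking", 20),   # receives BPDUs only; discards data frames
    ("listening", 15),  # processes BPDUs; no learning, no forwarding
    ("learning", 15),   # learns MAC addresses; still no forwarding
    ("forwarding", 0),  # learns addresses and forwards data frames
]

total = sum(delay for _, delay in STP_STATES)
print(f"Worst-case delay before forwarding: {total} seconds")  # 50 seconds
```

This 50-second worst case is exactly the convergence delay that later STP variants were designed to shorten.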

9. Explain the use of root bridge & How does the switch decide the root bridge?
For STP to function, the switches in the network determine a switch that is the focal point in that network. STP uses this focal point, called a root bridge or root switch, to determine which ports to block and which ports to put into the forwarding state. The root bridge sends out BPDUs containing network topology information to all other switches. This information allows the network to reconfigure itself in the event of a failure. Only one root bridge exists on each switched network, and it is elected based on the bridge ID (BID). The bridge priority value plus the MAC address create the BID.

BID (8 bytes) = Bridge Priority (2 bytes) + MAC Address (6 bytes)

Bridge priority has a default value of 32,768. If a switch has a MAC address of AA-11-BB-22-CC-33, the BID for that switch would be 32768:AA-11-BB-22-CC-33. The root bridge is the switch with the lowest BID value. Because switches typically use the same default priority value, the switch with the lowest MAC address becomes the root bridge.
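Because the BID compares priority first and MAC address second, the election reduces to taking a minimum over (priority, MAC) pairs. A small sketch with made-up switch values:

```python
# Each switch is modeled as (bridge priority, MAC address). Python's tuple
# ordering compares priority first, then MAC, which mirrors BID comparison.
switches = [
    (32768, "AA-11-BB-22-CC-33"),
    (32768, "0A-22-CC-33-DD-44"),
    (4096,  "FF-EE-DD-CC-BB-AA"),  # priority manually lowered by an admin
]

root = min(switches)  # lowest BID wins the election
print(root)  # (4096, 'FF-EE-DD-CC-BB-AA')

# With equal priorities, the lowest MAC address wins:
print(min(switches[:2]))  # (32768, '0A-22-CC-33-DD-44')
```

This is why administrators lower the priority on the switch they want as root: otherwise the election falls through to whichever switch happens to have the lowest MAC address.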

10.What is VLAN? Explain Static & Dynamic VLAN?

A virtual local-area network (VLAN) is a logical broadcast domain that can span multiple physical LAN segments. It allows an administrator to group stations by logical function, by project team, or by application, without regard to the physical location of the users. Broadcasts are not forwarded between VLANs; they are contained within the VLAN. Each VLAN functions as a separate LAN. A VLAN spans one or more switches, which allows host devices to behave as if they were on the same network segment. A VLAN has two major functions: it contains broadcasts and it groups devices. Devices located on one VLAN are not visible to devices located on another VLAN; moving traffic between different VLANs requires the use of a Layer 3 device.

Static VLANs
Static VLAN membership requires an administrator to manually assign each port to a specific VLAN. As an example, port Fa0/3 might be assigned to VLAN 20; any device that plugs into port Fa0/3 automatically becomes a member of VLAN 20. This type of VLAN membership is the easiest to configure and is also the most popular; however, it requires the most administrative support for adds, moves, and changes. For example, moving a host from one VLAN to another requires either the switch port to be manually reconfigured to the new VLAN or the workstation cable to be plugged into a different switch port on the new VLAN. Membership in a specific VLAN is transparent to the users. Users working on a device plugged into a switch port have no knowledge that they are members of a VLAN.

Dynamic VLANs
Dynamic VLAN membership requires a VLAN management policy server (VMPS). The VMPS contains a database that maps MAC addresses to VLAN assignments. When a device plugs into a switch port, the VMPS searches the database for a match of the MAC address and temporarily assigns that port to the appropriate VLAN. Dynamic VLAN membership requires more organization and configuration but creates a structure with much more flexibility than static VLAN membership. With dynamic VLANs, moves, adds, and changes are automated and do not require intervention from the administrator.

11.Design an accounting and sales VLAN on a switch and the assignment of ports fa0/7 & fa0/12 to the sales VLAN and ports fa0/1 to fa0/6 to the accounting VLAN. Explain this scenario with commands?
Switch(config)# vlan 101
Switch(config-vlan)# name sales
Switch(config-vlan)# exit
Switch(config)# vlan 102
Switch(config-vlan)# name accounting
Switch(config-vlan)# exit
Switch(config)# interface fa0/7
Switch(config-if)# switchport access vlan 101
Switch(config-if)# exit
Switch(config)# interface fa0/12
Switch(config-if)# switchport access vlan 101
Switch(config-if)# exit
Switch(config)# interface range fa0/1 - 6
Switch(config-if-range)# switchport access vlan 102
Switch(config-if-range)# exit

14.Explain the method of route summarization. Summarize the routes for the following addresses:,,, and
Route summarization groups contiguous subnets or networks under a single address. Route summarization is also known as route aggregation and occurs at a network boundary on a boundary router. Summarization decreases the number of entries in routing updates and reduces the number of entries in local routing tables. It also reduces bandwidth utilization for routing updates and results in faster routing table lookups. Route summarization and the term supernetting are used interchangeably; however, supernetting generally refers to joining multiple smaller contiguous classful networks, whereas summarization groups contiguous networks or subnets, whether they are classful or classless. Supernetting is the opposite of subnetting: if the network bits are greater than the default value for the class, the address represents a subnet; if the network bits are less than the default value for the class, it represents a supernet.

Route summarization of the given addresses: calculating a route summary requires combining the networks into a single address. This process is performed in three steps.

Step 1: List the networks in binary format.
11000000 - 10101000 - 00110000 - 00000000
11000000 - 10101000 - 00110001 - 00000000
11000000 - 10101000 - 00110010 - 00000000
11000000 - 10101000 - 00110011 - 00000000
11000000 - 10101000 - 00110100 - 00000000

Step 2: Count the number of leftmost matching bits to determine the mask for the summary route. This number represents the network prefix or subnet mask for the summarized route.
The number of matching bits equals 21, so the prefix is /21 and the subnet mask is

Step 3: Determine the summarized network address. Copy the matching bits and then add 0 bits to the end. A quicker method is to use the lowest network value. 11000000 - 10101000 - 00110000 - 00000000

Therefore the given 5 networks are summarized as 192.168.48.0/21, that is, network with mask

15.Explain the NAT & PAT?

Network Address Translation (NAT) translates internal private addresses into one or more public addresses for routing onto the Internet. NAT changes the private IP source address inside each packet to a publicly registered IP address before sending it out onto the Internet. Organizations create huge LANs and WANs with private addressing and connect to the Internet using NAT. Small to medium organizations connect to their ISPs through a single connection; the local boundary or edge router configured with NAT connects to the ISP. Larger organizations can have multiple ISP connections, and the boundary router at each of these locations performs NAT.

NAT can be configured statically or dynamically. Static NAT maps a single inside local address to a single global, or public, address. The static NAT entry is configured manually and remains permanently in the table. This mapping ensures that a particular inside local address always associates with the same public address, so outside devices can consistently reach an internal device. Dynamic NAT uses an available pool of public Internet addresses and assigns them to inside local addresses. Dynamic NAT assigns the first available IP address to an inside device; that host uses the assigned global IP address for the length of the session. When the session ends, the outside global address returns to the pool for use by another host.

One of the more popular variations of dynamic NAT is Port Address Translation (PAT), also referred to as NAT overload. PAT dynamically translates multiple inside local addresses to a single public address. When a source host sends a message to a destination host, it uses an IP address and port number combination to keep track of each individual conversation. In PAT, the gateway router translates the local source address and port number combination to a single global IP address and a unique port number above 1024.

16.What is static and dynamic routing? Differentiate between static and dynamic routing?
Static routes are routes that a network administrator manually configures. A static route includes the network address and subnet mask of the destination network, along with the exit interface or the IP address of the next-hop router. The routing table designates static routes with an S. Static routes have a very low administrative distance (1), because static routes are more stable and reliable than routes learned dynamically.

Dynamic routing protocols also add remote networks to the routing table. Dynamic routing protocols enable routers to share information about the reachability and status of remote networks through network discovery. Each protocol sends and receives data packets while locating other routers and updating and maintaining routing tables. Routes learned through a dynamic routing protocol are identified by the protocol used (for example, R for RIP and D for EIGRP) and are assigned the administrative distance of that protocol.

Comparing static and dynamic routing:
- Configuration complexity: Static increases with network size; dynamic is generally independent of network size.
- Topology changes: Static requires administrator intervention; dynamic automatically adapts to topology changes.
- Scaling: Static is suitable for simple topologies; dynamic suits both simple and complex topologies.
- Security: Static is more secure; dynamic is less secure.
- Resource usage: Static requires no extra resources; dynamic uses CPU, memory, and link bandwidth.
- Predictability: With static routing the route to a destination is always the same; with dynamic routing the route depends on the current topology.

17.Write short notes on the following: RIP v1 & RIP v2, EIGRP, OSPF, Frame Relay

RIPv1 was the only IP routing protocol available in the early days of networking. RIPv1 does not send subnet mask information in its routing updates and, therefore, does not support variable-length subnet masking (VLSM) or classless interdomain routing (CIDR). RIPv1 automatically summarizes networks at the classful boundary, treating all networks as though they were default class A, B, or C networks. As long as networks are contiguous, this feature may not pose a serious problem. By default, RIPv1 broadcasts its routing updates to all connected routers every 30 seconds.

RIPv2 has many of the features of RIPv1 and includes important enhancements. RIPv2 is a classless routing protocol that supports VLSM and CIDR. A subnet mask field is included in v2 updates, which allows the use of discontiguous networks. RIPv2 also has the ability to turn off automatic summarization of routes, and it has an authentication mechanism, whereas RIPv1 does not.

Limitations of RIP: Although RIP is easy to configure and requires minimal router resources to function, its simple hop-count metric is not an accurate way to determine the best path in complex networks. In addition, the RIP limit of 15 hops can mark distant networks as unreachable. RIP issues periodic updates of its routing table, which consumes bandwidth even when no network changes have occurred. Routers must accept these updates and process them to see whether they contain updated route information, which uses valuable router resources.

Enhanced Interior Gateway Routing Protocol (EIGRP)
The limitations of RIP led to the development of more advanced protocols. Networking professionals required a protocol that would support VLSM and CIDR, scale easily, and provide faster convergence in complex enterprise networks. Cisco developed EIGRP as a proprietary distance vector routing protocol. It has enhanced capabilities that address many of the limitations of other distance vector protocols. EIGRP shares some features of RIP while adding many advanced features:
- Supports VLSM and classless routing
- Uses a composite metric
- Uses the DUAL algorithm to prevent routing loops
- Uses bounded updates for fast convergence
- Maintains multiple tables
- Forms neighbour adjacencies

Issues and limitations of EIGRP: Although EIGRP is a powerful and sophisticated routing protocol, several considerations limit its use:
- Does not work in a multivendor environment because it is a Cisco proprietary protocol.
- Works best with a flat network design.
- Requires all routers to share the same autonomous system, which cannot be subdivided into groups.
- Can create very large routing tables, which require large update packets and large amounts of bandwidth.
- Uses more memory and processor power than RIP.
- Works inefficiently when left on its default settings.
- Requires administrators with advanced technical knowledge of the protocol and the network.

Open Shortest Path First (OSPF)
OSPF is the most widely used link-state interior gateway routing protocol in enterprise networks. This section describes the operation of link-state protocols in general and OSPF specifically, covering the structure of OSPF networks, the metrics used, and the process of establishing neighbour adjacencies.

Advantages of OSPF include the following:
- Uses bandwidth as a metric.
- Converges quickly using triggered updates.
- Limits routing loops with a consistent view of the network topology.
- Bases routing decisions on the latest information.
- Minimizes the link-state database for fewer SPF calculations.
- Maps the topology of an area on each router.
- Supports classless interdomain routing (CIDR) and variable-length subnet masking (VLSM).
- Hierarchical use of areas provides design flexibility.

Disadvantages of OSPF include the following:
- Requires more router memory and processing power.
- Requires more complex and expensive implementation.
- Requires an administrator who understands the protocol.
- Floods the network initially with LSAs, noticeably degrading network performance.

Frame Relay

Frame Relay is a common Layer 2 WAN encapsulation. Frame Relay networks are multiaccess networks similar to Ethernet, except that they do not forward broadcast traffic; thus, Frame Relay is a nonbroadcast multi-access (NBMA) network. Frame Relay uses packet-switching technology with variable-length packets. It also makes use of statistical time-division multiplexing (STDM) for optimum use of the available bandwidth. The router, or DTE device, normally connects to the service provider via a leased line. It connects via a Frame Relay switch, or DCE device, to the nearest point of presence of the service provider; this connection is referred to as an access link. The remote router at the destination end of the network is also a DTE device, and the connection between the two DTE devices is a virtual circuit (VC). To function properly, Frame Relay must have a way to associate a specific VC with a destination Layer 3 network. It also requires complete and accurate knowledge of the status of network VCs to reduce the chance that information is sent to a destination circuit that is no longer functioning. Inverse Address Resolution Protocol (Inverse ARP) and the Local Management Interface (LMI) are mechanisms within Frame Relay that help ensure information can be reliably sent to remote networks.

18.Differentiate between time division multiplexing & frequency division multiplexing?

TDM (Time Division Multiplexing) and FDM (Frequency Division Multiplexing) are two methods of multiplexing multiple signals onto a single carrier. Multiplexing is the process of combining multiple signals into one in such a manner that each individual signal can be retrieved at the destination. Since multiple signals occupy the channel, they need to share the resource in some manner.
1. The primary difference between FDM and TDM is how they divide the channel. FDM divides the channel into two or more frequency ranges that do not overlap, while TDM divides the channel into time periods allocated to each signal in an alternating manner. Because of this, in TDM each signal uses all of the bandwidth some of the time, while in FDM each signal uses a small portion of the bandwidth all of the time.
2. TDM provides greater flexibility and efficiency by dynamically allocating more time periods to the signals that need more bandwidth, while reducing the time periods of those signals that do not need it. FDM lacks this type of flexibility, as it cannot dynamically change the width of an allocated frequency range.
3. The advantage of FDM over TDM is latency, the time it takes for data to reach its destination. Because TDM allocates time periods, only one channel can transmit at a given time, so some data is often delayed, though usually only by milliseconds. Since channels in FDM can transmit at any time, their latencies are much lower than in TDM. FDM is often used in applications where latency is the top priority, such as those that require real-time information.
4. FDM and TDM are often used in tandem to create even more channels in a given frequency range. The common practice is to divide the spectrum with FDM, so that each dedicated channel occupies a smaller frequency range, and then to carry multiple channels within each FDM channel by multiplexing them with TDM. This is how telecoms allow a huge number of users to share a certain frequency band.
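The round-robin slot allocation that defines TDM can be sketched in a few lines. This is a toy model, not a transmission system: "frames" are just list elements, and the function names are invented for the example.

```python
# Toy sketch of time-division multiplexing: each source gets the full "link"
# during its time slot, in round-robin order, and the receiver demultiplexes
# purely by slot position.

def tdm_mux(sources):
    """Interleave equal-length streams slot by slot onto one frame sequence."""
    frames = []
    for slot_group in zip(*sources):   # one pass = one time slot per source
        frames.extend(slot_group)
    return frames

def tdm_demux(frames, n_sources):
    """Recover each stream by taking every n-th slot."""
    return [frames[i::n_sources] for i in range(n_sources)]

a = list("AAAA")
b = list("BBBB")
link = tdm_mux([a, b])
print(link)                  # ['A', 'B', 'A', 'B', 'A', 'B', 'A', 'B']
print(tdm_demux(link, 2))    # [['A', 'A', 'A', 'A'], ['B', 'B', 'B', 'B']]
```

The latency point in item 3 above is visible here: source B's first element cannot appear on the link until source A's slot has passed, whereas in FDM both would transmit simultaneously in separate frequency bands.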

19. Explain different types of switching?

The different types of switching are as follows:
1. Circuit switching: Circuit switching establishes an end-to-end circuit between nodes before forwarding any data. A standard telephone call uses this type of connection. While the circuit is in place, it provides dedicated bandwidth between the two points, and completion of the conversation releases the circuit for reuse. No other organization can use the circuit until it is released. This method provides a level of security not available in packet-switching or cell-switching technology. With circuit switching, the service provider assigns links to different connections as the need arises, and costs are incurred for the link only when the connection is active. The cost of circuit switching varies with usage time and distance, and it can become quite expensive if the circuit is used on a regular basis.
2. Packet switching: Packet switching uses bandwidth more efficiently than other types of data switching. The data is first segmented into packets, and an identifier is placed on each packet. The data is then released into the service provider network. The service provider accepts the data and switches each packet from one node to another until the packet reaches its final destination. The circuit, or pathway, between the source and destination is often a preconfigured link, but it is not an exclusive link: the service provider switches packets from multiple organizations over the same physical links. The link between nodes is established as required, and there is no end-to-end physical connection between source and destination. Frame Relay is an example of packet-switching technology.
3. Cell switching: Cell switching is a variation of packet switching. It is capable of transferring voice, video, and data through private and public networks at speeds in excess of 155 Mbps. An example of cell switching is Asynchronous Transfer Mode (ATM). ATM uses fixed-length, 53-byte cells that carry 48 bytes of data and a 5-byte header. The small, uniform size of the cells allows them to be switched quickly and efficiently between nodes. An advantage of ATM is that it prevents small messages from being held up behind larger messages. However, for networks handling mainly segmented data, ATM introduces a large amount of overhead and can actually slow network performance, because each cell carries a 5-byte header for at most 48 bytes of data regardless of the payload. In packet-switched technologies such as Frame Relay, the much larger frames require proportionally less overhead.
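The ATM overhead claim is easy to quantify with the 53-byte cell / 48-byte payload figures given above. A short back-of-the-envelope calculation (the 1500-byte message size is just an illustrative choice):

```python
# Back-of-the-envelope arithmetic for ATM cell overhead, using the
# 53-byte cell with 48 data bytes and a 5-byte header described above.
import math

CELL_SIZE = 53      # bytes per ATM cell
PAYLOAD = 48        # data bytes per cell (5-byte header)

def atm_wire_bytes(message_bytes):
    """Total bytes on the wire: enough whole cells to carry the message."""
    cells = math.ceil(message_bytes / PAYLOAD)
    return cells * CELL_SIZE

msg = 1500  # e.g. one Ethernet-sized payload
wire = atm_wire_bytes(msg)
print(wire)                                  # 32 cells * 53 bytes = 1696 bytes
print(round(100 * (wire - msg) / wire, 1))   # ~11.6% of the wire carries no user data
```

So even before counting the padding wasted in the final, partially filled cell, roughly one byte in nine on an ATM link is header, which is the "large amount of overhead" the text refers to.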

20. Compare the HDLC and PPP protocols?


High-Level Data Link Control (HDLC) is a standard bit-oriented data link layer encapsulation. HDLC uses synchronous serial transmission, which provides error-free communication between two points. HDLC defines a Layer 2 framing structure that allows for flow control and error control using acknowledgements and a windowing scheme. Each frame has the same format, whether it is a data frame or a control frame. The standard HDLC frame does not contain a field that identifies the type of protocol carried by the frame; for that reason, standards-based HDLC cannot carry multiple protocols across a single link.

Standard HDLC frame format:
Flag (8 bits) | Address (8 bits) | Control (8 or 16 bits) | Information (variable length, 0 or more bits in multiples of 8) | FCS (16 or 32 bits) | Flag (8 bits)

Cisco HDLC incorporates an extra field, known as the Type field, which allows multiple network layer protocols to share the same link. Cisco HDLC is the default data link layer encapsulation type on Cisco serial links and should be used only when interconnecting Cisco devices.

Cisco HDLC frame format:
Flag (8 bits) | Address (8 bits) | Control (16 bits) | Type / Protocol Code (16 bits) | Information (variable length, 0 or more bits in multiples of 8) | FCS (16 bits) | Flag (8 bits)

Point-to-Point Protocol (PPP), like HDLC, is a data link layer encapsulation for serial links. It uses a layered architecture to encapsulate and carry multiprotocol datagrams over a point-to-point link. Because PPP is standards based, it enables communication between equipment of different vendors. PPP is supported on the following interface types: asynchronous serial, synchronous serial, High-Speed Serial Interface (HSSI), and ISDN. PPP has two subprotocols: the Link Control Protocol (LCP) and the Network Control Protocol (NCP). PPP uses LCP to establish, maintain, test, and terminate the point-to-point link; in addition, LCP negotiates and configures control options on the WAN link. PPP uses the NCP component to encapsulate multiple network layer protocols so that they can operate on the same communications link.
Every network layer protocol carried on the PPP link requires a separate NCP.
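The role of the Type field is worth seeing concretely: it is the only thing that tells the receiver which Layer 3 handler should get the payload. The sketch below is illustrative only; the Flag and FCS fields are omitted, the field widths follow the Cisco HDLC description above, and the 0x0800 protocol code for IPv4 (mirroring the Ethertype convention) is an assumption for the example.

```python
# Illustrative demultiplexing sketch for a Cisco-HDLC-style header
# (Flag/FCS omitted; 0x0800 -> IPv4 is an assumed, Ethertype-like code).
import struct

HANDLERS = {0x0800: "ipv4_input"}   # protocol code -> network layer handler

def build_frame(protocol, payload):
    # 1-byte address, 2-byte control (fixed 0 here), 2-byte type/protocol code.
    return struct.pack("!BHH", 0x0F, 0x0000, protocol) + payload

def demux(frame):
    addr, control, protocol = struct.unpack("!BHH", frame[:5])
    # Without the Type field (standard HDLC), nothing in the header tells the
    # receiver which Layer 3 protocol the Information field contains.
    return HANDLERS.get(protocol, "drop"), frame[5:]

f = build_frame(0x0800, b"ip-packet-bytes")
print(demux(f))   # ('ipv4_input', b'ip-packet-bytes')
```

PPP solves the same problem in a standards-based way: its own protocol field, negotiated per network layer protocol by the corresponding NCP, plays the role the proprietary Type field plays in Cisco HDLC.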

21. What is the difference between PAP and CHAP?

Password Authentication Protocol (PAP): PAP provides a simple method for a remote device to establish its identity. PAP uses a two-way handshake to send its username and password. The called device looks up the username of the calling device and confirms that the sent password matches what it has stored in its database. If the two passwords match, authentication is successful. PAP sends the username and password across the link repeatedly in clear text until the authentication is acknowledged or the connection is terminated. This authentication method does not protect the username and password from being stolen with a packet sniffer. In addition, the remote node is in control of the frequency and timing of the login attempts, and once authenticated, no further verification of the remote device occurs. Without ongoing verification, the link is vulnerable to hijacking of the authenticated connection, and a hacker could gain unauthorized access to the router using a replay attack.

Challenge Handshake Authentication Protocol (CHAP): CHAP is a more secure authentication process than PAP. CHAP does not send the password across the link. Authentication occurs both during initial link establishment and repeatedly while the link is active. The called device is in control of the frequency and timing of the authentication, making a hijack attack extremely unlikely. CHAP uses a three-way handshake. The following list outlines the CHAP handshake authentication process:
1. PPP completes the link establishment phase.
2. The local router sends a challenge message to the remote router. This is usually a randomly generated binary string.
3. The remote router uses the challenge, a shared secret password, and a hashing algorithm to generate a one-way hash, an encrypted string that cannot be reversed to recover the password.
4. The remote router sends the one-way hash back to the local router.
5. The local router checks the response against its own calculation, using the challenge and the same shared secret. If the values match, the local router acknowledges the authentication; if the values do not match, the local router immediately terminates the connection.
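The CHAP handshake above can be sketched in a few lines. This follows the MD5 computation defined in RFC 1994 (response = MD5 of the packet identifier, the shared secret, and the challenge, concatenated); the variable names and the single-process simulation are illustrative only.

```python
# Sketch of the CHAP three-way handshake, modeled on RFC 1994's MD5 response.
# Both routers hold the secret; only the challenge and the hash cross the link.
import hashlib
import os

SECRET = b"shared-secret"   # configured on BOTH routers, never transmitted

def chap_response(identifier, challenge, secret):
    """One-way hash the remote router returns instead of the password."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Step 2: the local router issues a random challenge with a packet identifier.
ident, challenge = 1, os.urandom(16)

# Steps 3-4: the remote router computes the hash and sends it back.
response = chap_response(ident, challenge, SECRET)

# Step 5: the local router recomputes with its own copy of the secret.
expected = chap_response(ident, challenge, SECRET)
print("authenticated" if response == expected else "terminate link")
```

Because a fresh random challenge is issued for each authentication, a captured response is useless later, which is why CHAP resists the replay attack that PAP is vulnerable to.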