
Master of Computer Application (MCA) Semester 3

Computer Networks
Assignment Set 1

Que 1. Discuss the following Switching Mechanisms: a. Circuit switching b. Message switching c. Packet switching

Ans: a. Circuit switching

Circuit switching is a type of communication in which a dedicated channel (or circuit) is established for the duration of a transmission. The most ubiquitous circuit-switching network is the telephone system, which links together wire segments to create a single unbroken line for each telephone call. The other common communication method is packet switching, which divides messages into packets and sends each packet individually; the Internet is based on the packet-switching protocol suite TCP/IP. Circuit-switching systems are ideal for communications that require data to be transmitted in real time, whereas packet-switching networks are more efficient if some amount of delay is acceptable. Circuit-switching networks are sometimes called connection-oriented networks. Note, however, that although packet switching is essentially connectionless, a packet-switching network can be made connection-oriented by using a higher-level protocol; TCP, for example, makes IP networks connection-oriented.

Circuit switching is thus a networking technology that provides a temporary but dedicated connection between two stations, no matter how many switching devices the data are routed through. It was originally developed for the analog telephone system in order to guarantee steady, consistent service for two people engaged in a phone conversation. Analog circuit switching (FDM) has given way to digital circuit switching (TDM), and the digital counterpart still maintains the connection until it is broken (one side hangs up). This means bandwidth is continuously reserved and "silence is transmitted" just the same as digital audio.

b. Message switching

A message switch is a computer system used to switch data between various points. Computers have always been well suited to switching because of their input/output and compare capabilities: the switch inputs the data, compares its destination with a set of stored destinations, and routes it accordingly. Note that a "message" switch is a generic term for a data-routing device, whereas a "messaging" switch converts mail and messaging protocols. Message switching is a method of handling message traffic through a switching center, either from local users or from other switching centers, whereby the message traffic is stored and then forwarded through the system.

Every input from the terminal receives a response. Most responses are preceded by indicators, where the letters before OK represent the first character of each of the CMSG options (except CANCEL), as follows:

D  DATE
E  ERRTERM
H  HEADING
I  ID
M  MSG
O  OPCLASS
P  PROTECT
R  ROUTE
S  SEND
T  TIME


These indicators identify the options that have been processed and that are currently in effect. Errors may occur because of:

- Syntax (for example, a misspelled option, unbalanced parentheses, a terminal identifier of more than 4 characters, an invalid option separator, or message and destination not provided).
- Specification (for example, the specified terminal has not been defined to CICS).
- Operation (for example, the operator is not currently signed on to the system).

Syntax errors within an option cause it to be rejected by the message-switching routine. To correct a known error, reenter the option before typing the SEND keyword.

c. Packet switching

Packet switching refers to protocols in which messages are divided into packets before they are sent. Each packet is then transmitted individually and can even follow a different route to its destination. Once all the packets forming a message arrive at the destination, they are reassembled into the original message. Most modern Wide Area Network (WAN) protocols, including TCP/IP, X.25, and Frame Relay, are based on packet-switching technologies. In contrast, normal telephone service is based on circuit-switching technology, in which a dedicated line is allocated for transmission between two parties. Circuit switching is ideal when data must be transmitted quickly and must arrive in the same order in which it is sent, as is the case with most real-time data, such as live audio and video. Packet switching is more efficient and robust for data that can withstand some delay in transmission, such as e-mail messages and Web pages. ATM attempts to combine the best of both worlds: the guaranteed delivery of circuit-switched networks and the robustness and efficiency of packet-switching networks.

Packet switching, then, is the dividing of messages into packets before they are sent, transmitting each packet individually, and then reassembling them into the original message once all of them have arrived at the intended destination. Packets are the fundamental unit of information transport in all modern computer networks, and increasingly in other communications networks as well. Each packet, which can be of fixed or variable size depending on the protocol, consists of a header, a body (also called a payload), and a trailer; the body contains a segment of the message being transmitted. This contrasts with circuit switching, in which a dedicated but temporary circuit is established for the duration of the transmission of each message; the most familiar circuit-switching network is the telephone system when used for voice communications. Packet switching is used to optimize the use of the bandwidth available in a network, to minimize the transmission latency (i.e., the time it takes for data to pass across the network), and to increase the robustness of communication.
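
As a rough illustration of the idea (no particular protocol; the packet size and sequence-number "header" are invented for the example), the Python sketch below divides a message into numbered packets, delivers them out of order, and reassembles the original at the destination:

# Illustrative only: split a message into numbered packets and reassemble it.
import random

def packetize(message: bytes, size: int = 8):
    # Pair each fixed-size slice of the message with a sequence number.
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    # Packets may arrive in any order; sort on the sequence number.
    return b"".join(body for _, body in sorted(packets))

message = b"Packet switching splits messages into packets."
packets = packetize(message)
random.shuffle(packets)          # simulate packets taking different routes
assert reassemble(packets) == message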

Que 2. Discuss the following IEEE standards: Ethernet, Fast Ethernet, Gigabit Ethernet, and the IEEE 802.3 frame format.

Ans: Ethernet

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are major differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than in a radio broadcast. The common cable providing the communication channel was likened to the ether, and it was from this reference that the name "Ethernet" was derived. From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today powers the vast majority of local computer networks. The coaxial cable was later replaced with point-to-point links connected together by hubs and/or switches in order to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial-cable bus to a hub-managed, twisted-pair network. Above the physical layer, Ethernet stations communicate by sending each other data packets, small blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used to specify both the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.

Fast Ethernet

Fast Ethernet is a collective term for a number of Ethernet standards that carry traffic at the nominal rate of 100 Mbit/s, against the original Ethernet speed of 10 Mbit/s. Of the 100-megabit Ethernet standards, 100BASE-TX is by far the most common and is supported by the vast majority of Ethernet hardware currently produced. Full-duplex Fast Ethernet is sometimes referred to as "200 Mbit/s", though this is somewhat misleading, as that level of improvement is only achieved if traffic patterns are symmetrical. Fast Ethernet was introduced in 1995 and remained the fastest version of Ethernet for three years before being superseded by Gigabit Ethernet. A Fast Ethernet adapter can be logically divided into a medium access controller (MAC), which deals with the higher-level issues of medium availability, and a physical layer interface (PHY). The MAC may be linked to the PHY by a 4-bit, 25 MHz synchronous parallel interface known as MII. Repeaters (hubs) are also allowed and connect to multiple PHYs for their different interfaces.

100BASE-T is any of several Fast Ethernet standards for twisted-pair cables: 100BASE-TX (100 Mbit/s over two-pair Cat5 or better cable), 100BASE-T4 (100 Mbit/s over four-pair Cat3 or better cable, defunct), and 100BASE-T2 (100 Mbit/s over two-pair Cat3 or better cable, also defunct). The segment length for a 100BASE-T cable is limited to 100 metres. Most networks had to be rewired for 100-megabit speed, whether or not they had supposedly been Cat3 or Cat5 cable plants. The vast majority of 100BASE-T installations are 100BASE-TX. 100BASE-TX is the predominant form of Fast Ethernet and runs over two pairs of category 5 or better cable; a typical category 5 cable contains four pairs and can therefore support two 100BASE-TX links. Each network segment can have a maximum length of 100 metres. In its typical configuration, 100BASE-TX uses one pair of twisted wires in each direction, providing 100 Mbit/s of throughput in each direction (full duplex). The configuration of 100BASE-TX networks is very similar to 10BASE-T: when used to build a local area network, the devices on the network are typically connected to a hub or switch, creating a star network. Alternatively, it is possible to connect two devices directly using a crossover cable. In 100BASE-T2, the data is transmitted over two copper pairs, 4 bits per symbol; a 4-bit symbol is first expanded into two 3-bit symbols through a non-trivial scrambling procedure based on a linear feedback shift register.

100BASE-FX is a version of Fast Ethernet over optical fiber. It uses two strands of multi-mode optical fiber for receive (RX) and transmit (TX); the maximum length is 400 metres for half-duplex connections or 2 kilometres for full duplex. 100BASE-SX is also a version of Fast Ethernet over optical fiber, again using two strands of multi-mode fiber for receive and transmit. It is a lower-cost alternative to 100BASE-FX, because it uses short-wavelength optics, which are significantly less expensive than the long-wavelength optics used in 100BASE-FX; 100BASE-SX can operate at distances up to 300 metres. 100BASE-BX is a version of Fast Ethernet over a single strand of optical fiber (unlike 100BASE-FX, which uses a pair of fibers). Single-mode fiber is used, along with a special multiplexer which splits the signal into transmit and receive wavelengths.

Gigabit Ethernet

Gigabit Ethernet (GbE or 1 GigE) is a term describing various technologies for transmitting Ethernet packets at a rate of a gigabit per second, as defined by the IEEE 802.3-2005 standard. Half-duplex gigabit links connected through hubs are allowed by the specification, but in the marketplace full duplex with switches is the norm. Gigabit Ethernet was the next iteration, increasing the speed to 1000 Mbit/s. The initial standard for Gigabit Ethernet was ratified by the IEEE in June 1998 as IEEE 802.3z, commonly referred to as 1000BASE-X (where -X refers to either -CX, -SX, -LX, or -ZX). IEEE 802.3ab, ratified in 1999, defines Gigabit Ethernet transmission over unshielded twisted pair (UTP) category 5, 5e, or 6 cabling and became known as 1000BASE-T. With the ratification of 802.3ab, Gigabit Ethernet became a desktop technology, as organizations could utilize their existing copper cabling infrastructure. Initially, Gigabit Ethernet was deployed in high-capacity backbone network links (for instance, on a high-capacity campus network). Fiber Gigabit Ethernet has since been overtaken by 10-Gigabit Ethernet, which was ratified by the IEEE in 2002 and provides data rates ten times that of Gigabit Ethernet. Work on copper 10-Gigabit Ethernet over twisted pair has been completed, but as of July 2006 the only available adapters for 10-Gigabit Ethernet over copper required specialized cabling with InfiniBand connectors and were limited to 15 m. However, the 10GBASE-T standard specifies use of the traditional RJ-45 connectors and longer maximum cable lengths. The different Gigabit Ethernet variants are listed in the table below.

Name          Medium
1000BASE-T    unshielded twisted pair
1000BASE-SX   multi-mode fiber
1000BASE-LX   single-mode fiber
1000BASE-CX   balanced copper cabling
1000BASE-ZX   single-mode fiber
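
The question also lists the IEEE 802.3 frame format. A frame carries a 6-byte destination MAC address, a 6-byte source MAC address, a 2-byte length field, a payload padded to at least 46 bytes, and a 4-byte frame check sequence (FCS); a 7-byte preamble and a start-frame delimiter precede it on the wire. The sketch below packs such a frame, using plain CRC-32 as a stand-in for the FCS (real Ethernet transmits the FCS with its own bit ordering):

# A minimal sketch of an IEEE 802.3 frame body; the preamble and start
# frame delimiter handled by the hardware are omitted, and zlib.crc32 is
# only a stand-in for the real frame check sequence.
import struct
import zlib

def build_frame(dst: bytes, src: bytes, data: bytes) -> bytes:
    header = dst + src + struct.pack("!H", len(data))  # length of the data
    data = data.ljust(46, b"\x00")                     # pad to 46-byte minimum
    fcs = struct.pack("!I", zlib.crc32(header + data)) # CRC-32 placeholder FCS
    return header + data + fcs

frame = build_frame(bytes.fromhex("ffffffffffff"),     # broadcast destination
                    bytes.fromhex("0002b3112233"),     # example source address
                    b"hello")
print(len(frame))   # 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum frame size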

Que 3. Describe the classification of computer networks based on: Transmission technologies, Scalability, and Geographical distance covered.

Ans: Transmission technology

Networks can be divided into those using broadcast links and those using point-to-point links.

Broadcast networks have a single communication channel that is shared by all the users on the network. Short messages, commonly called packets or frames (in certain contexts), sent by any machine are received by all the others. An address field within the packet or frame specifies the address of the destination machine; upon receiving a packet, each machine checks the address field, and only the intended user processes the packet, while the others ignore and discard it. As an example, in a class of 50 students, the teacher puts a question to a particular student X. All the students hear the question but do not answer, as the question is intended for X only; hence only X analyzes the question and the others do not respond.


Broadcast systems generally also allow the possibility of addressing a packet to all destinations by using a special code in the address field. When a packet with this code is transmitted, it is received and processed by every machine on the network. Continuing the example above: the teacher puts a question to the whole class rather than to a specific student by name, so all students are expected to analyze the question and answer. This mode of operation is referred to as broadcasting. Some broadcast systems also support transmission to a subset of the users, that is, a group of users; this mode is called multicasting.

In contrast, a point-to-point network consists of many connections between individual pairs of machines. A packet travelling from source to destination may have to first visit one or more intermediate machines. Usually multiple routes of different lengths are possible, so finding the best path or route is important in point-to-point networks. This type of transmission, with one sender and one receiver, is also referred to as unicasting. Geographically localized or smaller networks tend to use broadcasting, whereas larger networks are usually point-to-point networks.

Scalability

Multiple-processor systems can be classified by physical size. At the smallest scale are personal area networks (PANs), networks meant for a single person; for example, a wireless network connecting a computer with its mouse, keyboard, and printer constitutes a personal area network. Beyond the personal area network we have longer-range networks, broadly classified as LANs, MANs, and WANs, which are described in detail below. Finally, the connection of two or more networks is called an internetwork; the worldwide Internet is a well-known example. Distance is important as a classification metric because different techniques are used at different scales.


Geographical Distance covered


Local Area Networks

Fig. 1.5: LANs using (a) bus topology and (b) ring topology

Local Area Networks are generally called LANs. They are privately owned networks within a single building or a campus of up to a few kilometres in size. Most LANs use a bus or ring topology for connection, as illustrated in Fig. 1.5. They are used to connect personal computers and workstations in company offices and factories to share resources and exchange information. Traditional LANs run at speeds of 10 Mbps to 100 Mbps, have low delay (microseconds or nanoseconds), and make very few errors; newer LANs operate at up to 10 Gbps. Various topologies are possible for broadcast LANs.

Metropolitan Area Networks

Fig. 1.6: A MAN based on cable TV

A Metropolitan Area Network, referred to as a MAN, covers a city. The best-known example is the cable television network available in many cities. These networks were earlier used for TV reception only, but with later developments a two-way Internet service could also be provided. A MAN might look something like the system shown in Fig. 1.6, in which both television signals and Internet traffic are fed into a centralized head end for distribution to people's homes. Cable television is not the only MAN: recent developments in high-speed wireless Internet access have also resulted in MANs.


Wide Area Network

Fig. 1.7: A WAN system

A wide area network, or WAN, spans a large geographical area, often a country or continent. A WAN contains a collection of machines, traditionally called hosts. As illustrated in Fig. 1.7, these hosts can be on LANs and are connected by a subnet, also called a communication subnet. The hosts are owned by customers or are personal computers, while the communication subnet is typically owned by a telephone company or Internet service provider. The subnet carries messages from host to host, just as the telephone system carries words from speaker to listener. Each host is connected to a LAN on which a router is present; sometimes a host may be connected directly to a router. The collection of communication lines and routers forms the communication subnet.

In most WANs, the network contains many transmission lines, each connecting a pair of routers. As illustrated in Fig. 1.8, a packet is sent from one router to another via one or more intermediate routers. The packet is received at each intermediate router in its entirety, stored there in full until the required output line is free, and then forwarded. A subnet that works according to this principle is called a store-and-forward, or packet-switched, subnet. Not all WANs are packet switched: a second possibility for a WAN is a satellite system, and satellite networks are inherently broadcast networks.


Que 4. Explain the different classes of IP addresses with suitable examples.

Ans: Different classes of IP addresses

In order to provide the flexibility required to support networks of different sizes, the designers decided that the IP address space should be divided into five different address classes:

1. Class A
2. Class B
3. Class C
4. Class D
5. Class E

Class A Networks (/8 Prefixes)

Each Class A network address has an 8-bit network prefix, with the highest-order bit set to 0 and a seven-bit network number, followed by a 24-bit host number. Today it is no longer considered 'modern' to refer to a Class A network; Class A networks are now referred to as "/8s" (pronounced "slash eight" or just "eights") since they have an 8-bit network prefix. A maximum of 126 (2^7 - 2) /8 networks can be defined, as shown in figure 2.1(b). Two is subtracted because the /8 network 0.0.0.0 is reserved for use as the default route and the /8 network 127.0.0.0 (also written 127/8 or 127.0.0.0/8) is reserved for the loopback function. Each /8 supports a maximum of 16,777,214 (2^24 - 2) hosts per network; again 2 is subtracted because the all-0s ("this network") and all-1s ("broadcast") host numbers may not be assigned to individual hosts. Since the /8 address block contains 2^31 (2,147,483,648) individual addresses and the IPv4 address space contains a maximum of 2^32 (4,294,967,296) addresses, the /8 address space is 50% of the total IPv4 unicast address space. An example Class A address is 10.52.36.11, where 10 is the network number and 52.36.11 identifies the host.

Class B Networks (/16 Prefixes)

Each Class B network address has a 16-bit network prefix, with the two highest-order bits set to 1-0 and a 14-bit network number, followed by a 16-bit host number, as illustrated in figure 2.1(b). Class B networks are now referred to as "/16s" since they have a 16-bit network prefix. A maximum of 16,384 (2^14) /16 networks can be defined, with up to 65,534 (2^16 - 2) hosts per network. Since the entire /16 address block contains 2^30 (1,073,741,824) addresses, it represents 25% of the total IPv4 unicast address space. An example Class B address is 172.16.52.63.

Class C Networks (/24 Prefixes)

Each Class C network address has a 24-bit network prefix, with the three highest-order bits set to 1-1-0 and a 21-bit network number, followed by an 8-bit host number, as shown in figure 2.1(b). Class C networks are now referred to as "/24s" since they have a 24-bit network prefix. A maximum of 2,097,152 (2^21) /24 networks can be defined, with up to 254 (2^8 - 2) hosts per network. Since the entire /24 address block contains 2^29 (536,870,912) addresses, it represents 12.5% (or 1/8th) of the total IPv4 unicast address space. An example Class C address is 192.168.1.10.

Class D Networks

These addresses have their leading four bits set to 1-1-1-0, and the remaining 28 bits are used to support IP multicasting; 224.0.0.5, used by OSPF, is an example.

Class E Addresses

These have their leading four bits set to 1-1-1-1 and are reserved for experimental or future use.
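
Since the class rules above boil down to the leading bits of the first octet, they can be checked in a few lines of Python (a small sketch, assuming dotted-decimal input):

# Classify an IPv4 address by its leading bits:
# 0xxx -> A (/8), 10xx -> B (/16), 110x -> C (/24), 1110 -> D, 1111 -> E.
def ip_class(address: str) -> str:
    first = int(address.split(".")[0])
    if first < 128:
        return "A"    # leading bit  0
    if first < 192:
        return "B"    # leading bits 10
    if first < 224:
        return "C"    # leading bits 110
    if first < 240:
        return "D"    # leading bits 1110
    return "E"        # leading bits 1111

for ip in ("10.52.36.11", "172.16.52.63", "192.168.1.10", "224.0.0.5", "250.1.2.3"):
    print(ip, "-> Class", ip_class(ip))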


Que 5. Discuss the following with respect to Internet Control Message Protocols: a. Congestion and datagram flow control, b. Route change requests from routers, c. Detecting circular or long routes.

Ans: a. Congestion and datagram flow control

IP implementations are required to support this protocol. ICMP is considered an integral part of IP, although it is architecturally layered upon IP. ICMP provides error reporting, flow control, and first-hop gateway redirection.

Some of ICMP's functions are to:

- Announce network errors, such as a host or an entire portion of the network being unreachable due to some type of failure. A TCP or UDP packet directed at a port number with no receiver attached is also reported via ICMP.

- Announce network congestion. When a router begins buffering too many packets, due to an inability to transmit them as fast as they are being received, it will generate ICMP Source Quench messages. Directed at the sender, these messages should cause the rate of packet transmission to be slowed. Of course, generating too many Source Quench messages would cause even more network congestion, so they are used sparingly.

- Assist troubleshooting. ICMP supports an Echo function, which simply sends a packet on a round trip between two hosts. Ping, a common network management tool, is based on this feature: it transmits a series of packets, measuring average round-trip times and computing loss percentages.

- Announce timeouts. If an IP packet's TTL field drops to zero, the router discarding the packet will often generate an ICMP packet announcing this fact. Traceroute is a tool which maps network routes by sending packets with small TTL values and watching for the ICMP timeout announcements.
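
Since Ping is built on the Echo function just described, a minimal sketch of an ICMP Echo Request (RFC 792 layout: type 8, code 0, 16-bit ones'-complement checksum, identifier, sequence number) may help; actually sending it requires a raw socket, shown only as a comment:

import struct

def icmp_checksum(data: bytes) -> int:
    # Standard Internet checksum: 16-bit ones'-complement sum with carry folding.
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def echo_request(ident: int, seq: int, payload: bytes = b"ping") -> bytes:
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum = 0 at first
    csum = icmp_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

packet = echo_request(ident=0x1234, seq=1)
# Sending needs a raw socket (usually requires administrator rights), e.g.:
#   s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
#   s.sendto(packet, ("192.0.2.1", 0))        # 192.0.2.1 is a documentation address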


An ICMP error message is never generated in response to:

- A datagram whose source address does not define a single host (the address cannot be zero, loopback, broadcast, or multicast).
- A datagram whose destination address is an IP broadcast address.
- A datagram sent as a link-layer broadcast.
- A fragment other than the first fragment of a datagram.

(b) Route change requests from routers:

Network Address Translation (NAT) is a standard IP service which allows for the translation of one IP address into another IP address. NAT has been enhanced to provide a set of advanced services called SuperNAT. SuperNAT includes a powerful Proxy Service, Port Address Translation (sometimes called PAT), and Application Specific Gateways (ASGs), as well as the other capabilities listed below:

- Up to 32 internal-to-external host IP address mappings.
- SuperNAT allows local hosts to be excluded from external services.
- SuperNAT Thin Proxy allows a single IP for unlimited local hosts.
- SuperNAT allows NAT translations plus a nominated IP address to be used as a Thin Proxy for all other hosts.
- Port maps (PAT) allow support of multiple types of servers on a single IP.
- Context-sensitive support for active (PORT) or passive (PASV) FTP modes.
- Automatic support for remote NETBIOS (WINS) networks and remote DHCP servers.
- A Proxy DNS feature simplifies re-configuration.
- User-definable NAT route(s) allow the router to be used in LAN-to-LAN, LAN-to-WAN, and WAN-to-WAN configurations.

NAT services are defined at the 'Logical Route' level; it is possible to define any route to use NAT services. To illustrate, assume an intranet where WarpTwo is being used as a concentrator for a group of LAN and remote hosts (PCs). These IP addresses communicate with each other without using a NAT service (an intranet); when external communication is required, WarpTwo forwards the traffic to another LAN router. This LAN-to-LAN route is defined as the NAT route and uses a NAT service. There are many other network scenarios where this capability can be used both to increase efficiency and to provide flexible responses to network needs.


(c) Detecting circular or long routes:

IP networks are structured hierarchically. The whole Internet consists of a number of proper networks, called autonomous systems. Each system performs routing between its member hosts internally, so that the task of delivering a datagram is reduced to finding a path to the destination host's network; as soon as the datagram is handed to any host on that particular network, further processing is done exclusively by the network itself. Identifying critical nodes in a graph is important for understanding the structural characteristics and the connectivity properties of a network; critical nodes are nodes whose deletion results in the minimum pair-wise connectivity among the remaining nodes (the critical node problem).

IP uses a table for this task that associates networks with the gateways by which they may be reached. A catch-all entry (the default route) must generally be supplied too; this is the gateway associated with the network 0.0.0.0. All destination addresses match this route, since none of the 32 bits are required to match, and therefore packets to an unknown network are sent through the default route. If a route leads to a network that the host is directly connected to, no gateway is needed, and the gateway column of the routing table contains a hyphen. The process for identifying whether a particular destination address matches a route is a simple mathematical operation, although it requires an understanding of binary arithmetic and logic: a route matches a destination if the network address logically ANDed with the netmask precisely equals the destination address logically ANDed with the netmask. In other words, a route matches if the number of bits of the network address specified by the netmask (starting from the left-most, high-order bit of byte one of the address) matches that same number of bits in the destination address.
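
The AND rule can be written directly in a few lines of Python; the addresses below are invented examples:

import ipaddress

def route_matches(destination: str, network: str, netmask: str) -> bool:
    # A route matches when (destination AND mask) == (network AND mask).
    d = int(ipaddress.IPv4Address(destination))
    n = int(ipaddress.IPv4Address(network))
    m = int(ipaddress.IPv4Address(netmask))
    return (d & m) == (n & m)

print(route_matches("149.76.12.4", "149.76.0.0", "255.255.0.0"))  # True
print(route_matches("10.1.2.3",    "149.76.0.0", "255.255.0.0"))  # False
print(route_matches("10.1.2.3",    "0.0.0.0",    "0.0.0.0"))      # default route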

We depend on dynamic routing to choose the best route to a destination host or network based on the number of hops. Hops are the gateways a datagram has to pass through before reaching the host or network; the shorter a route is, the better RIP rates it. Very long routes, with 16 or more hops, are regarded as unusable and are discarded: RIP thus detects circular or excessively long routes by counting hops and treating a hop count of 16 as infinity. RIP manages routing information internal to your local network, but you have to run gated on all hosts. At boot time, gated checks for all active network interfaces. If there is more than one active interface (not counting the loopback interface), it assumes the host is switching packets between several networks and will actively exchange and broadcast routing information; otherwise, it will only passively receive RIP updates and update the local routing table.

Que 6. Discuss the architecture and applications of E-mail.

Ans: E-Mail

Electronic mail, or e-mail as it is known to its fans, became known to the public at large and its use grew exponentially. The first e-mail systems consisted of file transfer protocols, with the convention that the first line of the message contained the recipient's address. E-mail is a store-and-forward method of composing, sending, storing, and receiving messages over electronic communication systems. The term applies both to the Internet e-mail system, based on the Simple Mail Transfer Protocol (SMTP), and to intranet systems allowing users within one organization to e-mail each other; often workgroup collaboration organizations use the Internet protocols for internal e-mail service. E-mail is often used to deliver bulk unwanted messages, or spam, but filter programs exist which can automatically delete most of these. E-mail systems based on RFC 822 are widely used.

Architecture

An e-mail system normally consists of two subsystems:

1. the user agents
2. the message transfer agents

The user agents allow people to read and send e-mail; the message transfer agents move the messages from source to destination. The user agents are local programs that provide a command-based, menu-based, or graphical method for interacting with the e-mail system. The message transfer agents are daemons, processes that run in the background; their job is to move e-mail through the system.

A key idea in e-mail systems is the distinction between the envelope and its contents. The envelope encapsulates the message: it contains all the information needed for transporting the message, such as the destination address, priority, and security level, all of which are distinct from the message itself. The message transport agents use the envelope for routing. The message inside the envelope consists of two major sections:

The header: the header contains control information for the user agents. It is structured into fields such as summary, sender, receiver, and other information about the e-mail.

The body: the body is entirely for the human recipient, the message itself as unstructured text, sometimes containing a signature block at the end.

Header format

The header is separated from the body by a blank line and consists of the following fields:

From: the e-mail address, and optionally the name, of the sender of the message.
To: one or more e-mail addresses, and optionally the names, of the receivers of the message.
Subject: a brief summary of the contents of the message.
Date: the local time and date when the message was originally sent.
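
As an illustration of these fields, a short sketch using Python's standard email library (the addresses are made up for the example):

from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Assignment update"
msg["Date"] = "Mon, 01 Jan 2024 10:00:00 +0000"
msg.set_content("The header above is separated from this body by a blank line.")

print(msg.as_string())   # header fields, a blank line, then the body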

Applications of E-mail

Basic services: e-mail systems support five basic functions.

1. Composition: the process of creating messages and answers. Any text editor can be used for the body of the message, and the system itself can provide assistance with addressing and the numerous header fields attached to each message. For example, when answering a message, the e-mail system can extract the originator's address from the incoming e-mail and automatically insert it into the proper place in the reply.

2. Transfer: moving messages from the originator to the recipient. This requires establishing a connection to the destination or some intermediate machine, outputting the message, and finally releasing the connection. E-mail does this automatically, without bothering the user.

3. Reporting: acknowledging, or telling the originator, what happened to the message. Was the message delivered? Was it rejected? Numerous applications exist in which confirmation of delivery is important and may even have legal significance; since the e-mail system itself is not fully reliable, such confirmation matters.

4. Displaying: the incoming message has to be displayed so that people can read their e-mail. Sometimes conversion is required, or a special viewer must be invoked, for example if the message is a PostScript file or digitized voice. Simple conversions and formatting are sometimes attempted.

5. Disposition: the final step, concerning what the recipient does with the message after receiving it. Possibilities include throwing it away before reading, throwing it away after reading, saving it, and so on. It should be possible to retrieve and reread saved messages, forward them, or process them in other ways.

Advanced services: in addition to these basic services, some e-mail systems provide a variety of advanced features. When people move, or when they are away for some period of time, they may want their e-mail forwarded, so the system should do this automatically. Most systems allow users to create mailboxes to store incoming e-mail; commands are needed to create and destroy mailboxes, inspect the contents of mailboxes, and insert and delete messages from mailboxes. Corporate managers often need to send messages to each of their subordinates, customers, or suppliers; this gives rise to the idea of a mailing list, which is a list of e-mail addresses. When a message is sent to the mailing list, identical copies are delivered to everyone on the list. Other advanced features include carbon copies, blind carbon copies, high-priority e-mail, secret e-mail, alternative recipients if the primary one is not currently available, and the ability for secretaries to read and answer their bosses' e-mail. E-mail is now widely used within industry for intra-company communication; it allows far-flung employees to cooperate on projects.


Master of Computer Application (MCA) Semester 3

Computer Networks
Assignment Set 2

Que 1. Discuss the following design issues of DLL: a. Framing, b. Error control, c. Flow control.

Ans: a. Framing
Software design is a process of problem-solving and planning for a software solution. After the purpose and specifications of the software are determined, software developers will design, or employ designers to develop, a plan for a solution. It includes low-level component and algorithm implementation issues as well as the architectural view. The software requirements analysis (SRA) step of a software development process yields specifications that are used in software engineering. A software design may be platform-independent or platform-specific, depending on the availability of the technology called for by the design.

Design is a meaningful engineering representation of something that is to be built. It can be traced to a customer's requirements and at the same time assessed for quality against a set of predefined criteria for 'good' design. In the software engineering context, design focuses on four major areas of concern: data, architecture, interfaces, and components. Designing software is an exercise in managing complexity. The complexity exists within the software design itself, within the software organization of the company, and within the industry as a whole. Software design is very similar to systems design; it can span multiple technologies and often involves multiple sub-disciplines. Software specifications tend to be fluid, and change rapidly and often, usually while the design process is still going on. Software development teams also tend to be fluid, likewise often changing in the middle of the design process. In many ways, software bears more resemblance to complex social or organic systems than to hardware. All of this makes software design a difficult and error-prone process.


Software design documentation may be reviewed or presented to allow constraints, specifications and even requirements to be adjusted prior to programming. Redesign may occur after review of a programmed simulation or prototype. It is possible to design software in the process of programming, without a plan or requirement analysis, but for more complex projects this would not be considered a professional approach.

Frame Technology (FT) is a language-neutral system that manufactures custom software from reusable, machine-adaptable building blocks, called frames. FT is used to reduce the time, effort, and errors involved in the design, construction, and evolution of large, complex software systems. Fundamental to FT is its ability to stop the proliferation of similar but subtly different components, an issue plaguing software engineering, for which programming language constructs (subroutines, classes, or templates/generics) and add-in techniques such as macros and generators have failed to provide a practical, scalable solution. A number of implementations of FT exist: Netron Fusion specializes in constructing business software and is proprietary, while XVCL is a general-purpose, open-source implementation of FT. Paul G. Bassett invented the first FT in order to automate the repetitive, error-prone editing involved in adapting (generated and hand-written) programs to changing requirements and contexts. Independent comparisons of FT to alternative approaches confirm that the time and resources needed to build and maintain complex systems can be substantially reduced. One reason is that FT shields programmers from software's inherent redundancies: FT has reproduced COTS object libraries from equivalent XVCL frame libraries that are two-thirds smaller and simpler, and custom business applications are routinely specified and maintained by Netron Fusion SPC frames that are 5% - 15% of the size of their assembled source files.

(b) Error control:

Error control (error management, error handling) is the employment, in a computer system or in a communication system, of error-detecting and/or error-correcting codes with the intention of removing the effects of error and/or recording the prevalence of error in the system. The effects of errors may be removed by correcting them in all but a negligible proportion of cases. Error control aims to cope with errors owing to noise or to equipment malfunction, in which case it overlaps with fault tolerance (see fault-tolerant system), but not usually with the effects of errors in the design of hardware or software. An important aspect is the prevention of mistakes by users; checking of data by software as it is entered is an essential feature of the design of reliable application programs.


Error control is expensive: the balance between the cost and the benefit (measured by the degree of protection) has to be weighed within the technological and financial context of the system being designed.

Software quality control is the set of procedures used by organizations (1) to ensure that a software product will meet its quality goals at the best value to the customer, and (2) to continually improve the organization's ability to produce software products in the future. Software quality control refers to specified functional requirements as well as non-functional requirements such as supportability, performance, and usability. It also refers to the ability of software to perform well in unforeseeable scenarios and to keep a relatively low defect rate.

(c) Flow control:

In computer networking, flow control is the process of managing the rate of data transmission between two nodes to prevent a fast sender from outrunning a slow receiver. It provides a mechanism for the receiver to control the transmission speed, so that the receiving node is not overwhelmed with data from transmitting nodes. Flow control should be distinguished from congestion control, which is used for controlling the flow of data after congestion has actually occurred. Flow control mechanisms can be classified by whether or not the receiving node sends feedback to the sending node.

Flow control is important because it is possible for a sending computer to transmit information at a faster rate than the destination computer can receive and process it. This can happen if the receiving computer has a heavy traffic load in comparison to the sending computer, or if the receiving computer has less processing power than the sending computer. In common RS-232 there are pairs of control lines: RTS/CTS (Request To Send / Clear To Send) and DTR/DSR (Data Terminal Ready / Data Set Ready), which are usually referred to as hardware flow control. By contrast, XON/XOFF is usually referred to as software flow control. (In the old mainframe days, modems were called "data sets".)
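
As a toy illustration of receiver-driven (XON/XOFF-style) flow control, the sketch below simulates a sender that pauses when the receiver's buffer fills and resumes when it drains; the buffer size, thresholds, and drain rate are invented for the example:

from collections import deque

XOFF, XON = "pause", "resume"

class Receiver:
    def __init__(self, high=4, low=1):
        self.buf, self.high, self.low = deque(), high, low
    def accept(self, item):
        self.buf.append(item)
        return XOFF if len(self.buf) >= self.high else None
    def process_one(self):
        if self.buf:
            self.buf.popleft()
        return XON if len(self.buf) <= self.low else None

rx, paused, step = Receiver(), False, 0
data = list(range(12))
while data or rx.buf:
    step += 1
    if data and not paused:
        if rx.accept(data.pop(0)) == XOFF:
            paused = True            # buffer full: sender stops transmitting
    if step % 3 == 0:                # receiver drains slower than sender sends
        if rx.process_one() == XON:
            paused = False           # buffer drained: sender may resume
print("delivered everything without overflowing the receiver buffer")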


Que 2. Discuss the following with respect to routing algorithms: a. Shortest path algorithm, b. Flooding, c. Distance vector routing.

Ans: A) Shortest path algorithm

Dijkstra's algorithm, when applied to a graph, quickly finds the shortest path from a chosen source to a given destination. In fact, the algorithm is so powerful that it finds all shortest paths from the source to all destinations; this is known as the single-source shortest paths problem. In the process of finding all shortest paths to all destinations, Dijkstra's algorithm also computes, as a side effect, a spanning tree for the graph. While an interesting result in itself, the spanning tree for a graph can be found using lighter (more efficient) methods than Dijkstra's.

How it works: first let us define the entities involved. The graph is made of vertices (or nodes; both words are used interchangeably) and edges which link vertices together. Edges are directed and have an associated distance, sometimes called the weight or the cost. The distance between vertex u and vertex v is noted [u, v] and is always positive.
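
Given these definitions, here is a compact Python sketch of Dijkstra's single-source shortest paths using a priority queue; the graph is a made-up example with positive edge costs:

import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    settled = set()
    queue = [(0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if u in settled:
            continue
        settled.add(u)            # u's shortest distance is now final
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (dist[v], v))
    return dist

graph = {"A": [("B", 4), ("C", 1)],
         "C": [("B", 2), ("D", 5)],
         "B": [("D", 1)]}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 3, 'C': 1, 'D': 4}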

Dijkstra's algorithm partitions the vertices into two distinct sets: the set of unsettled vertices and the set of settled vertices (the settled set maintained in the sketch above). Initially all vertices are unsettled, and the algorithm ends once all vertices are in the settled set. A vertex is considered settled, and moved from the unsettled set to the settled set, once its shortest distance from the source has been found.

B) Flooding and routing in general

The simplest routing technique is flooding, in which every incoming packet is sent out on every outgoing line except the one it arrived on. Flooding always finds the shortest path but generates vast numbers of duplicate packets, so it is used mainly as a benchmark for other algorithms and in situations demanding extreme robustness. More generally, how do we get packets from one end point to another? A routing algorithm should ideally offer correctness, simplicity, robustness, stability, fairness, and optimality.

Robustness: the world changes, software changes, use changes, topology and hardware change, and things go wrong in lots of different ways. How well does the routing algorithm handle all this?


Stability: does the algorithm find a routing table quickly (convergence)? How does it adapt to abrupt changes in topology or in the state of the routers? Is it possible to have oscillations?

Fairness & optimality: these may be at odds with one another; what might be fair for a single link may hurt overall throughput. One must decide on what is meant by optimality before thinking about algorithms. For example, optimal could be defined for an individual packet (least amount of time in transit) or for the system as a whole (greatest throughput). Often the number of hops is chosen as the metric to minimize, as this represents both in some sense. Algorithms may be static, i.e. the routing decisions are made ahead of time, with information about the network topology and capacity, and then loaded into the routers; or dynamic, where the routers make decisions based on information they gather, and the routes change over time, adaptively.

Optimality principle and sink trees

Without regard to topology we can say: if a router J is on the optimal path from router I to router K, then the optimal path from J to K also follows the same route. Proof: if there were a better way from J to K, then you could combine it with the path from I to J to obtain a better path from I to K, contradicting the starting assumption that the path from I to K was optimal. If you apply the optimality principle, you can form a tree by taking the optimal path from every other router to a single router B; the tree is rooted at B. Since it is a tree, it has no loops, so each frame will be delivered in a finite number of hops. Of course, finding the set of optimal sink trees is a lot harder in practice than in theory, but it still provides a goal for all real routing algorithms.

C) Distance vector routing

Distance vector routing is one of the two main types of routing (the other being link state routing). Distance vector protocols determine the best path based on how far away the destination is, while link-state protocols are capable of using more sophisticated methods taking into consideration link variables such as bandwidth, delay, reliability, and load. Distance vector protocols judge the best path by its distance, where distance can be hops or a combination of metrics calculated to represent a distance value. The IP distance vector routing protocols still in use today are the Routing Information Protocol (RIP v1 and v2) and the Interior Gateway Routing Protocol (IGRP, developed by Cisco).

A very simple distance-vector routing protocol works as follows (a toy implementation of one update round follows this list):

1. Initially, the router makes a list of which networks it can reach, and how many hops it will cost. At the outset this will be the two or more networks to which this router is connected; the number of hops for these networks will be 1. This table is called a routing table.

2. Periodically the routing table is shared with other routers on each of the connected networks via some specified inter-router protocol. This information is only shared between physically connected routers ("neighbours"), so routers on other networks are not reached by the new routing tables yet.

3. A new routing table is constructed based on the directly configured network interfaces, as before, with the addition of the new information received from other routers.

4. Bad routing paths are then purged from the new routing table. If two paths to the same network exist, only the one with the smallest hop count is kept.

5. The new routing table is then communicated to all neighbours of this router.

This way the routing information spreads, and eventually all routers know the routing path to each network, which router to use to reach that network, and which router to route to next. Distance-vector routing protocols are simple and efficient in small networks, and require little, if any, management. However, they do not scale well and have poor convergence properties, which has led to the development of the more complex but more scalable link-state routing protocols for use in large networks.
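
The sketch below runs one round of the update described in steps 1-5 above, with an invented topology; real protocols also record the next-hop router and handle route withdrawal:

def dv_update(own, neighbour_tables):
    # Merge neighbours' advertised tables, keeping the smallest hop count
    # (one extra hop is added to reach the neighbour itself).
    table = dict(own)
    for neighbour, their in neighbour_tables.items():
        for net, hops in their.items():
            if hops + 1 < table.get(net, float("inf")):
                table[net] = hops + 1        # shorter route via this neighbour
    return table

r1 = {"net-a": 1, "net-b": 1}                # directly connected networks
r2_advert = {"net-b": 1, "net-c": 1}
r3_advert = {"net-c": 1, "net-d": 1}

r1 = dv_update(r1, {"R2": r2_advert, "R3": r3_advert})
print(r1)   # {'net-a': 1, 'net-b': 1, 'net-c': 2, 'net-d': 2}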


Que 3. Discuss the following with respect to wireless transmission: Electromagnetic spectrum, Radio transmission, Microwave transmission.

Ans: Electromagnetic spectrum

There are basically two types of configuration for wireless transmission: directional and omnidirectional. For the directional configuration, the transmitting antenna puts out a focused electromagnetic beam; the transmitting and receiving antennas must therefore be carefully aligned. In the omnidirectional case, the transmitted signal spreads out in all directions and can be received by many antennas. In general, the higher the frequency of a signal, the more it is possible to focus it into a directional beam. The EM spectrum is as shown in figure 5.6. Three general ranges of frequencies are of interest for wireless transmission:

1. Frequencies in the range of about 2 GHz (gigahertz = 10^9 Hz) to 40 GHz are referred to as microwave frequencies. At these frequencies, highly directional beams are possible, and microwave is quite suitable for point-to-point transmission. Microwave is also used for satellite communications.


2. Frequencies in the range of 30 MHz to 1 GHz are suitable for omnidirectional applications. We will refer to this range as the broadcast radio range. Microwave covers part of the UHF band and the entire SHF band, while broadcast radio covers the VHF band and part of the UHF band.

3. Another important frequency range, for local applications, is the infrared portion of the spectrum. This covers, roughly, from 3x10^11 to 2x10^14 Hz. Infrared is useful for local point-to-point and multipoint applications within confined areas, such as a single room.

Radio transmission

Radio is a transmission medium with a large field of applications, and a medium that provides the user with great flexibility (for example, cordless telephones). Radio can be used locally or intercontinentally, and for fixed as well as mobile communication between network nodes or between users and network nodes. In this subsection, we deal with radio link and satellite connections.

The radio spectrum

The radio spectrum, from 3 kHz to 300 GHz, is one range of the electromagnetic spectrum (infrared, visible and ultraviolet light, and X-ray frequencies are other ranges). The radio spectrum is divided into eight frequency bands, as shown in the figure below, from VLF (very low frequency) to EHF (extremely high frequency).

Fig.: The eight frequency bands of the radio spectrum

The propagation of a radio wave depends on its frequency. Radio waves with frequencies below 30 MHz are reflected against different layers of the atmosphere and against the ground, allowing them to be used for maritime radio, telegraphy, and telex traffic; the capacity is limited to some tens or hundreds of bit/s. Above 30 MHz, the frequencies are too high to be reflected by the ionized layers in the atmosphere. The VHF and UHF frequency bands, which are used for TV, broadcasting, and mobile telephony, belong to this group. Frequencies above 3 GHz suffer severe attenuation caused by objects (such as buildings) and therefore require a free "line of sight" between the transmitter and the receiver. Radio link systems use frequencies between 2 and 40 GHz, and satellite systems normally use frequencies between 2 and 14 GHz. The capacity is of the magnitude of 10-150 Mbit/s.

Radio link

In radio link connections, transmission is effected via a chain of radio transmitters and radio receivers. The radio link is used for analog as well as for digital transmission.

Fig.: A radio link connection

At regular intervals, the signal is received and forwarded to the next link station, as shown in the figure. The link station may be either active or passive. An active link station amplifies or regenerates the signal; a passive link station generally consists of two directly interconnected parabolic antennas without any amplifying electronics between them. Each radio link needs two radio channels, one for each direction; a few MHz of spacing is needed between the transmitter frequency and the receiver frequency. The same parabolic antenna and waveguide are used for both directions. The distance between the link stations, also called the hop length, depends on output power, antenna type, and climate, as well as on the frequency: the higher the carrier frequency, the shorter the range. For example, a 2 GHz system has a range of approximately 50 kilometres, and an 18 GHz system has a range of 5-10 km.
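
The inverse relation between carrier frequency and hop length can be illustrated with the standard free-space path loss formula, FSPL(dB) = 20 log10(d_km) + 20 log10(f_GHz) + 92.45; the figures below are illustrative only and ignore rain and equipment margins:

import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    # Free-space path loss in dB for a distance in km and frequency in GHz.
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

print(round(fspl_db(50, 2), 1))    # ~132.4 dB over 50 km at 2 GHz
print(round(fspl_db(5.6, 18), 1))  # ~132.5 dB over only ~5.6 km at 18 GHz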


Microwave transmission

Microwave transmission refers to the technique of transmitting information over a microwave link. Since microwaves are highly susceptible to attenuation by the atmosphere (especially during wet weather), the use of microwave transmission is limited to a few contexts.

Properties:
- Only suitable over line-of-sight transmission links.
- Provides good bandwidth.
- Affected by rain, vapour, dust, snow, cloud, mist, fog, and heavy moisture.
- Not suitable for links where an obstacle lies between the transmitter and receiver.

Uses:
- Backbone carriers in cellular networks, used to link BTS-BSC and BSC-MSC.
- Communication with satellites.
- Microwave relay links for telephone service providers.

Que: 4. Describe the following: a. IGP b. OSPF c. OSPF Message formats

Ans: (a) IGP

An interior gateway protocol (IGP) is a routing protocol that is used within an autonomous system (AS). In contrast, an exterior gateway protocol (EGP) is used for determining network reachability between autonomous systems, and makes use of IGPs to resolve routes within an AS. The interior gateway protocols can be divided into two categories: 1) distance-vector routing protocols and 2) link-state routing protocols.


An autonomous system is, in Internet (TCP/IP) terminology, a collection of gateways (routers) that fall under one administrative entity and cooperate using a common interior gateway protocol (IGP).

IGP repository is an advanced digital preservation archive designed for critical, demanding, long-term data archiving for a wide range of organizational requirements. IGP repository isolates content and content management from technology and technology obsolescence, preparing the modern enterprise for a data-certain future. It is purpose designed for:

- Document management, including images, office documents, maps, etc.
- Asset management, including images, audio, and video.
- Records management with statutory compliance requirements.
- Archiving cultural artifacts (as digital surrogates) for museums and formal archives.
- Maintaining large data sets, including mixed datasets.

The design is a faithful execution of the OAIS Reference Model for digital archives, the benchmark for information-system archives. IGP repository complies with a number of international standards for document and records management, and is designed specifically as a content management foundation to empower any organization to institute a best-practices business model.

(b) OSPF:

Open Shortest Path First (OSPF) is a routing protocol that determines the best path for routing IP traffic over a TCP/IP network, based on the distance between nodes and several quality parameters. OSPF is an interior gateway protocol (IGP), designed to work within an autonomous system. It is also a link-state protocol that generates less router-to-router update traffic than the RIP protocol (a distance-vector protocol) that it was designed to replace. OSPF is widely deployed in IP networks to manage intra-domain routing. As a link-state protocol, routers reliably flood Link State Advertisements (LSAs), enabling each to build a consistent, global view of the routing topology; reliable performance hinges on routing stability. Internal processing delays in OSPF implementations affect the speed at which updates propagate in the network, the load on individual routers, and the time needed for both intra-domain and inter-domain routing.


Improving IP control-plane routing robustness is critical to the creation of reliable and stable IP services, yet very few tools exist for effective IP route monitoring and management; monitoring systems have therefore been built around OSPF as a widely deployed intra-domain routing protocol. Many recent router architectures decouple the routing engine from the forwarding engine, allowing packet forwarding to continue even when the routing process is not active. This opens up the possibility of using the forwarding capability of a router even when its routing process is brought down for a software upgrade. Due to the growing commercial importance of the Internet, resilience is becoming a key design issue for future IP-based networks; reconfiguration times on the order of a few hundred milliseconds are required in the case of network element failures, far away from the slow rerouting of current implementations.

(c) OSPF Message Formats:

OSPF uses five different types of messages to communicate both link-state and general information between routers within an autonomous system or area. To illustrate how the OSPF messages are used, it is worth taking a quick look at the format used for each of these messages.

OSPF Common Header Format: Naturally, each type of OSPF message includes a slightly different set of information; otherwise, they would not be different message types. However, they all share a similar message structure, beginning with a common 24-byte header. This common header conveys certain standard information in a consistent manner, such as the version of OSPF that generated the message, the message type, the total packet length, the originating Router ID and Area ID, a checksum, and authentication fields. It also allows a device receiving an OSPF message to determine quickly which type of message it has received, so it knows whether or not it needs to examine the rest of the message.
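To make this layout concrete, the sketch below packs the 24-byte OSPFv2 common header (version, type, packet length, Router ID, Area ID, checksum, AuType, authentication) using Python's struct module; the field values are made up and the checksum computation is omitted.

import struct
from ipaddress import IPv4Address

# OSPFv2 common header: version (1 byte), type (1), packet length (2),
# router ID (4), area ID (4), checksum (2), AuType (2), authentication (8).
OSPF_HEADER = struct.Struct("!BBH4s4sHH8s")    # 24 bytes in total

def build_header(msg_type, packet_length, router_id, area_id):
    return OSPF_HEADER.pack(
        2,                                  # OSPF version 2
        msg_type,                           # 1 = Hello, ..., 5 = LS Ack
        packet_length,                      # header + body, in bytes
        IPv4Address(router_id).packed,
        IPv4Address(area_id).packed,
        0,                                  # checksum (omitted in this sketch)
        0,                                  # AuType 0 = no authentication
        b"\x00" * 8,                        # authentication field
    )

hdr = build_header(1, 44, "192.0.2.1", "0.0.0.0")   # a hypothetical Hello header
assert len(hdr) == 24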

Que: 5 Describe the following with respect to Internet Security: a. Cryptography b. DES Algorithm

Ans: a. Cryptography

Until modern times, cryptography referred almost exclusively to encryption, which is the process of converting ordinary information (plaintext) into unintelligible gibberish (ciphertext). Decryption is the reverse: moving from the unintelligible ciphertext back to plaintext. A cipher (or cypher) is a pair of algorithms that perform the encryption and the reversing decryption. The detailed operation of a cipher is controlled both by the algorithm and, in each instance, by a key. This is a secret parameter (ideally known only to the communicants) for a specific message-exchange context. Keys are important: ciphers without variable keys can be trivially broken with only the knowledge of the cipher used, and are therefore less than useful for most purposes. Historically, ciphers were often used directly for encryption or decryption, without additional procedures such as authentication or integrity checks.

In colloquial use, the term "code" is often used to mean any method of encryption or concealment of meaning. In cryptography, however, code has a more specific meaning: the replacement of a unit of plaintext (i.e., a meaningful word or phrase) with a code word (for example, "apple pie" replaces "attack at dawn"). Codes are no longer used in serious cryptography, except incidentally for such things as unit designations (e.g., Bronco Flight or Operation Overlord), since properly chosen ciphers are both more practical and more secure than even the best codes, and are also better adapted to computers.

The most ancient and basic problem of cryptography is secure communication over an insecure channel: party A wants to send party B a secret message over a communication line that may be tapped by an adversary. In the computer industry, cryptography refers to techniques for ensuring that data stored in a computer cannot be read or compromised by any individuals without authorization. Most security measures involve data encryption and passwords. Data encryption is the translation of data into a form that is unintelligible without a deciphering mechanism. A password is a secret word or phrase that gives a user access to a particular program or system.

Modern cryptography abandons the assumption that the adversary has infinite computing resources available, and assumes instead that the adversary's computation is resource-bounded in some reasonable way. In particular, we will assume that the adversary is a probabilistic algorithm that runs in polynomial time. Similarly, the encryption and decryption algorithms designed are probabilistic and run in polynomial time. The running times of the encryption, decryption, and adversary algorithms are all measured as functions of a security parameter k, which is fixed at the time the cryptosystem is set up. Thus, when we say that the adversary algorithm runs in polynomial time, we mean time bounded by some polynomial function in k.
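To make the cipher/key relationship at the start of this answer concrete, here is a toy repeating-key XOR cipher. It is for exposition only and, like any cipher with a short reused key, is trivially breakable; the key and message below are made up.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Repeating-key XOR: the same routine both encrypts and decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"attack at dawn"
key = b"k3y"                     # the secret parameter shared by A and B
ciphertext = xor_cipher(plaintext, key)
assert xor_cipher(ciphertext, key) == plaintext   # decryption reverses encryption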


Accordingly, in modern cryptography we speak of the infeasibility of breaking the encryption system and computing information about exchanged messages, whereas historically one spoke of the impossibility of breaking the encryption system and finding information about exchanged messages. We note that the encryption systems we will describe and claim "secure" with respect to the new adversary are not "secure" with respect to a computationally unbounded adversary, in the way that the one-time pad system was secure against an unbounded adversary. On the other hand, it is no longer necessarily true that the secret key that A and B agree on before remote transmission must be as long as the total number of secret bits ever to be exchanged securely. In fact, at the time of the initial meeting, A and B do not need to know in advance how many secret bits they intend to send in the future. We will show how to construct such encryption systems, for which the number of messages to be exchanged securely can be polynomial in the length of the common secret key. How we construct them brings us to another fundamental issue, namely that of cryptographic, or complexity, assumptions.
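As a concrete contrast with the computationally secure systems described above, here is a minimal one-time pad sketch (for exposition only). It resists even an unbounded adversary, but only because the pad is truly random, as long as the message, and never reused; the message is made up.

import secrets

def otp_encrypt(message: bytes):
    # The pad must be truly random, as long as the message, and used once.
    pad = secrets.token_bytes(len(message))
    ciphertext = bytes(m ^ p for m, p in zip(message, pad))
    return ciphertext, pad

def otp_decrypt(ciphertext: bytes, pad: bytes) -> bytes:
    return bytes(c ^ p for c, p in zip(ciphertext, pad))

msg = b"meet at noon"
ct, pad = otp_encrypt(msg)
assert otp_decrypt(ct, pad) == msg
# Reusing the pad leaks information: the XOR of two ciphertexts under
# the same pad equals the XOR of the two plaintexts.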

(b) Data Encryption Standard (DES):

The Data Encryption Standard (DES) is the quintessential block cipher. Even though it is now quite old, and on the way out, no discussion of block ciphers can really omit mention of this construction. DES is a remarkably well-engineered algorithm which has had a powerful influence on cryptography. It is in very widespread use, and probably will be for some years to come. Every time you use an ATM, you are using DES.

Brief history: In 1972 the NBS (National Bureau of Standards, now NIST, the National Institute of Standards and Technology) initiated a program for data protection and wanted, as part of it, an encryption algorithm that could be standardized. They put out a request for such an algorithm. In 1974, IBM responded with a design based on their "Lucifer" algorithm. This design would eventually evolve into DES. DES has a key length of k = 56 bits and a block length of n = 64 bits. It consists of 16 rounds of what is called a "Feistel network," described in more detail shortly. After NBS, several other bodies adopted DES as a standard, including ANSI (the American National Standards Institute) and the American Bankers Association.


The standard was to be reviewed every five years to see whether or not it should be re-adopted. Although there were claims that it would not be re-certified, the algorithm was re-certified again and again. Only recently did the work of finding a replacement begin in earnest, in the form of the AES (Advanced Encryption Standard).

Construction: The DES algorithm takes as input a 56-bit key K and a 64-bit plaintext M. The key schedule KeySchedule produces from the 56-bit key K a sequence of 16 subkeys, one for each of the 16 Feistel rounds.
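Since the figure from the original source is not reproduced here, the sketch below shows a generic Feistel network instead. The round function is a deliberately simple stand-in, not DES's actual F-function (whose expansion, S-boxes, and permutations are omitted), and the subkeys are made up; the point is that the same structure inverts itself when the subkeys are applied in reverse order.

def feistel_encrypt(left, right, subkeys, round_fn):
    """Generic Feistel network (16 rounds in DES's case).

    Each round swaps the halves and mixes the right half into the
    left through a keyed round function; the structure is invertible
    regardless of what round_fn does.
    """
    for k in subkeys:
        left, right = right, left ^ round_fn(right, k)
    return left, right

def feistel_decrypt(left, right, subkeys, round_fn):
    # Decryption runs the same structure with the subkeys reversed.
    for k in reversed(subkeys):
        right, left = left, right ^ round_fn(left, k)
    return left, right

def toy_round(half, key):
    # Deliberately simple stand-in for DES's F-function, on 32-bit halves.
    return (half * 31 + key) & 0xFFFFFFFF

subkeys = [0x0F0F, 0x3C3C, 0xA5A5]            # hypothetical subkeys
L, R = feistel_encrypt(0x01234567, 0x89ABCDEF, subkeys, toy_round)
assert feistel_decrypt(L, R, subkeys, toy_round) == (0x01234567, 0x89ABCDEF)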

The algorithm as a standard: Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS 46-1), 1993 (FIPS 46-2), and again in 1999 (FIPS 46-3), the latter prescribing "Triple DES". On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), following a public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030 for sensitive government information.

The algorithm is also specified in ANSI X3.92, NIST SP 800-67 and ISO/IEC 18033-3 (as a component of TDEA).

Another theoretical attack, linear cryptanalysis, was published in 1994, but it was a brute-force attack in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm.

The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective about DES, DES is the archetypal block cipher: an algorithm that takes a fixed-length string of plaintext bits and transforms it, through a series of complicated operations, into a ciphertext bit string of the same length.


In the case of DES, the block size is 64 bits. DES also uses a key to customize the transformation, so that decryption can supposedly be performed only by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity, and are thereafter discarded. Hence the effective key length is 56 bits, and it is usually quoted as such. Like other block ciphers, DES by itself is not a secure means of encryption; it must instead be used in a mode of operation. FIPS 81 specifies several modes for use with DES. Further comments on the usage of DES are contained in FIPS 74.
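The parity arrangement just described is easy to check in code: each byte of an 8-byte DES key is expected to have odd parity, leaving 7 key bits per byte (56 in total). A minimal sketch, with a made-up key:

def has_odd_parity(byte: int) -> bool:
    # DES keys reserve one bit per byte as an odd-parity check bit.
    return bin(byte).count("1") % 2 == 1

def check_des_key(key: bytes) -> bool:
    """Validate the parity of a 64-bit (8-byte) DES key.

    Only 7 bits of each byte carry key material, so the effective
    key length is 8 * 7 = 56 bits.
    """
    return len(key) == 8 and all(has_odd_parity(b) for b in key)

key = bytes([0x01, 0x02, 0x04, 0x07, 0x08, 0x0B, 0x0D, 0x0E])
print(check_des_key(key))   # True: every byte has an odd number of 1 bits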

Que: 6. What are Digital Signatures? Discuss their merits and drawbacks.

Ans: Digital signatures

In cryptography, a digital signature or digital signature scheme is a type of asymmetric cryptography used to simulate the security properties of a signature in digital, rather than written, form. Digital signature schemes normally provide two algorithms: one for signing, which involves the user's secret or private key, and one for verifying signatures, which involves the user's public key. The output of the signature process is called the "digital signature."

Digital signatures, like written signatures, are used to provide authentication of the associated input, usually called a "message." Messages may be anything from electronic mail to a contract, or even a message sent in a more complicated cryptographic protocol.

Digital signatures are used to create public key infrastructure (PKI) schemes, in which a user's public key (whether for public-key encryption, digital signatures, or any other purpose) is tied to a user by a digital identity certificate issued by a certificate authority. PKI schemes attempt to unbreakably bind user information (name, address, phone number, etc.) to a public key, so that public keys can be used as a form of identification. Digital signatures are often used to implement electronic signatures, a broader term that refers to any electronic data that carries the intent of a signature; not all electronic signatures, however, use digital signatures.
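As an illustration of the sign/verify pair of algorithms described above, here is a minimal sketch using the third-party Python "cryptography" package; the message, key size, and RSA-PSS padding choices are illustrative assumptions, not the only options.

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Signing uses the private key; verification uses the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"transfer 100 to account 42"
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = private_key.sign(message, pss, hashes.SHA256())

try:
    public_key.verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")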

Benefits of digital signatures

These are common reasons for applying a digital signature to communications:

Authentication


Although messages may often include information about the entity sending a message, that information may not be accurate. Digital signatures can be used to authenticate the source of messages. When ownership of a digital signature's secret key is bound to a specific user, a valid signature shows that the message was sent by that user. The importance of high confidence in sender authenticity is especially obvious in a financial context. For example, suppose a bank's branch office sends instructions to the central office requesting a change in the balance of an account. If the central office is not convinced that such a message was truly sent from an authorized source, acting on the request could be a grave mistake.

Integrity

In many scenarios, the sender and receiver of a message may need confidence that the message has not been altered during transmission. Although encryption hides the contents of a message, it may be possible to change an encrypted message without understanding it. (Some encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if a message is digitally signed, any change in the message will invalidate the signature (see the hash sketch after this answer). Furthermore, there is no efficient way to modify a message and its signature so as to produce a new message with a valid signature, because this is still considered computationally infeasible for cryptographic hash functions.

Drawbacks of digital signatures

Despite their usefulness, digital signatures alone do not solve all the problems we might wish them to.

Non-repudiation: In a cryptographic context, the word repudiation refers to the act of disclaiming responsibility for a message. A message's recipient may insist the sender attach a signature in order to make later repudiation more difficult, since the recipient can show the signed message to a third party (e.g., a court) to reinforce a claim as to its signatories and integrity. However, loss of control over a user's private key means that all digital signatures made with that key, and so ostensibly 'from' that user, become suspect. Nonetheless, a user cannot repudiate a signed message without repudiating their signature key.
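Returning to the integrity property above: a digital signature is computed over a digest of the message, so its security rests on the digest changing completely under any modification. A quick standard-library illustration (the messages are made up):

import hashlib

original = b"pay 100 to Alice"
tampered = b"pay 900 to Alice"

# Even a one-character change yields a completely different digest,
# so a signature over the original digest no longer verifies.
print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
assert hashlib.sha256(original).digest() != hashlib.sha256(tampered).digest()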

