Theory: Bit stuffing is the mechanism of inserting one or more non-information bits into a
message to be transmitted, to break up the message sequence for synchronization purposes.
It is widely used in network and communication protocols, in which bit stuffing is a required part
of the transmission process. Bit stuffing is commonly used to bring bit streams up to a common
transmission rate or to fill frames, and it is also used for run-length limited coding. For example,
if a 0 is stuffed after every five consecutive 1s, the frame 1111110 is transmitted as 11111010.
Program
#include<stdio.h>
#include<conio.h>
void main()
{
    int a[20], b[30], i, j=0, n=0, count=0, k=0;
    clrscr();
    printf("Enter the frame size:");
    scanf("%d",&n);
    printf("Enter the frame (1s and 0s):");
    for(i=0;i<n;i++)
        scanf("%d",&a[i]);
    for(i=0;i<n;i++)
    {
        if(a[i]==1)
        {
            b[j]=a[i];
            count++;
            j++;
            if(count==5)        /* five consecutive 1s: stuff a 0 */
            {
                b[j]=0;
                j++;
                count=0;
            }
        }
        else if(a[i]==0)
        {
            b[j]=a[i];
            j++;
            count=0;
        }
        else
        {
            printf("Error, please input 1 or 0 only");
        }
    }
    k=j;                        /* length of the stuffed frame */
    printf("Stuffed frame:");
    for(j=0;j<k;j++)
        printf("%d",b[j]);
    getch();
}
Theory: Bit de-stuffing is the process of removing the stuffed non-information bits from received
data so that it is converted back into its original form.
Program:
#include<stdio.h>
#include<conio.h>
void main()
{
    int a[30], b[30], i, j=0, n=0, count=0, k=0;
    clrscr();
    printf("Enter the frame size:");
    scanf("%d",&n);
    printf("Enter the stuffed frame:");
    for(i=0;i<n;i++)
        scanf("%d",&a[i]);
    for(i=0;i<n;i++)
    {
        if(a[i]==1)
        {
            b[j]=a[i];
            count++;
            j++;
            if(count==5)        /* skip the 0 stuffed after five 1s */
            {
                i++;
                count=0;
            }
        }
        else if(a[i]==0)
        {
            b[j]=a[i];
            j++;
            count=0;
        }
        else
        {
            printf("Error, please input 1 or 0 only");
        }
    }
    k=j;                        /* length of the de-stuffed frame */
    printf("De-stuffed frame:");
    for(j=0;j<k;j++)
        printf("%d",b[j]);
    getche();
}
Output
Theory: An Internet Protocol address (IP address) is a numerical label assigned to each device
connected to a computer network that uses the Internet protocol for communication.
With an IPv4 IP address, there are five classes of available IP ranges: Class A, Class B, Class C,
Class D and Class E, while only A, B, and C are commonly used. Each class allows for a range
of valid IP addresses, shown in the following table.
Class A: 1.0.0.1 to 126.255.255.254    Supports 16 million hosts on each of 127 networks.
Class B: 128.1.0.1 to 191.255.255.254  Supports 65,000 hosts on each of 16,000 networks.
Class C: 192.0.1.1 to 223.255.254.254  Supports 254 hosts on each of 2 million networks.
Class D: 224.0.0.0 to 239.255.255.255  Reserved for multicast groups.
Class E: 240.0.0.0 to 254.255.255.254  Reserved for future use, or research and development purposes.
Program
#include<stdio.h>
#include<conio.h>
void main()
{
    int ip;
    clrscr();
    printf("Enter the first octet of the IP address:");
    scanf("%d",&ip);
    if(ip<127 && ip>0)
        printf("IP is of CLASS A");
    else
    {
        if(ip<192 && ip>127)
            printf("IP is of CLASS B");
        else
        {
            if(ip<224 && ip>191)
                printf("IP is of CLASS C");
            else
            {
                if(ip<240 && ip>223)
                    printf("IP is of CLASS D");
                else
                {
                    if(ip<255 && ip>239)
                        printf("IP is of CLASS E");
                }
            }
        }
    }
    getch();
}
Output
Program
#include<stdio.h>
#include<conio.h>
void DECTOBIN(int dec)
{
    int i;
    for(i=7;i>=0;i--)               /* print 8 bits, MSB first */
        printf("%d",(dec>>i)&1);
}
void main()
{
    int oct[4], i;
    clrscr();
    printf("Enter the four octets of the IP address:");
    for(i=0;i<4;i++)
        scanf("%d",&oct[i]);
    for(i=0;i<4;i++)
    {
        DECTOBIN(oct[i]);
        if(i!=3)
            printf(".");
    }
    getch();
}
Output
Theory: A computer network is a set of computers connected together for the purpose of
sharing resources. The most common resource shared today is connection to the Internet. Other
shared resources can include a printer or a file server. The Internet itself can be considered a
computer network.
Routing is a process which is performed by layer 3 (or network layer) devices in order to deliver
the packet by choosing an optimal path from one network to another.
There are three types of routing:
1) Static routing
2) Default routing
3) Dynamic routing
Procedure/Implementation
Design the Network First. Connect all the devices with each other.
Assign IP Addresses to all the PC’S. Double Click on the PC and click on the Desktop menu
item and click IP configuration. Assign the IP Addresses. Do the same for all the connected
PC’S.
Assign IP Addresses to interfaces of routers. Double click on the router and access the Command
prompt of router. Assign IP addresses to the interfaces being used. Turn up the administratively
down interfaces.
Enable EIGRP which is a two step process:
1. Enable EIGRP routing protocol from global configuration mode
2. Tell EIGRP which interfaces we have to include
Perform PING test to verify that all the devices can communicate with each other.
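The two EIGRP steps above might be sketched as the following IOS commands; the autonomous-system number (10) and the network statements are assumptions for illustration, not the exact values used in this scenario:

```
Router>enable
Router#configure terminal
Router(config)#router eigrp 10
Router(config-router)#network 10.0.0.0
Router(config-router)#network 192.168.1.0
Router(config-router)#no auto-summary
```

The same commands are repeated on every router, each time listing the networks directly connected to that router.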
Network Scenario
Result: The Network is created and using EIGRP routing protocol the communication is
established between all the devices.
EXPERIMENT-6
Aim: Configure VLAN in Cisco Packet Tracer
Theory:
Virtual LAN (VLAN) is a concept in which devices are divided logically at layer 2 (the data
link layer). Generally, layer 3 devices divide broadcast domains, but a broadcast domain can also
be divided by switches using the concept of VLANs.
A broadcast domain is a network segment in which, if a device broadcasts a packet, all the
devices in the same broadcast domain will receive it. The devices in the same broadcast domain
receive all broadcast packets, but this is limited to switches only, as routers do not forward
broadcast packets. To forward packets from one VLAN (or broadcast domain) to another,
inter-VLAN routing is needed. Through VLANs, several small sub-networks are created which
are comparatively easy to manage.
Procedure/Implementation:
Design the Network first. Connect all the End devices with the Switch
Configure VLAN on the Switch. We create two logical groups of users (Accounts and Finance)
by function and use a static method to assign VLAN membership; the switchport access vlan
command is used to assign a VLAN to an interface. In the next step we must configure the IP
address and subnet mask for each end device. To verify that end devices in the same VLAN can
communicate with each other, we perform a ping test. Similarly, we verify that end devices in
different VLANs cannot communicate with each other.
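As a sketch, the switch-side commands for the static VLAN assignment described above might look like the following; the VLAN IDs (10 and 20) and the interface number are assumptions for illustration:

```
Switch>enable
Switch#configure terminal
Switch(config)#vlan 10
Switch(config-vlan)#name Accounts
Switch(config-vlan)#exit
Switch(config)#vlan 20
Switch(config-vlan)#name Finance
Switch(config-vlan)#exit
Switch(config)#interface FastEthernet0/1
Switch(config-if)#switchport mode access
Switch(config-if)#switchport access vlan 10
```

The last three commands are repeated for each end-device port, choosing VLAN 10 or 20 as appropriate.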
Network Scenario:
Result: The VLAN is successfully configured.
EXPERIMENT-7
Aim: Configure DHCP DNS and Email Server in Cisco Packet Tracer.
Theory: A DHCP Server is a network server that automatically provides and assigns IP
addresses, default gateways and other network parameters to client devices. It relies on the
standard protocol known as Dynamic Host Configuration Protocol or DHCP to respond to
broadcast queries by clients.
A DNS server is a computer server that contains a database of public IP addresses and their
associated hostnames, and in most cases serves to resolve, or translate, those names to IP
addresses as requested. DNS servers run special software and communicate with each other
using special protocols.
A mail server (also known as a mail transfer agent or MTA, a mail transport agent, a mail router
or an Internet mailer) is an application that receives incoming e-mail from local users (people
within the same domain) and remote senders and forwards outgoing e-mail for delivery. A
computer dedicated to running such applications is also called a mail server.
Procedure/Implementation:
Design the network first and assign the IP addresses to the router and the servers only. Configure
the first server as the DHCP and EMAIL Server and the second server as the DNS server. In the
configuration tab, turn on the DHCP, E-mail and DNS services. Modify the pool on both the
servers to meet your requirements.
The gateway will be the router IP address, i.e. 192.168.1.1. Assign the DHCP server address, i.e.
192.168.1.2, and the DNS server address, i.e. 192.168.1.3. Assign the start IP address as
192.168.1.4, as the first three addresses have already been used for the router and the servers.
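Put together, the DHCP service settings on the server might read as follows; the pool name, subnet mask and maximum number of users are assumptions, the addresses come from the text above:

```
Pool Name        : serverPool
Default Gateway  : 192.168.1.1
DNS Server       : 192.168.1.3
Start IP Address : 192.168.1.4
Subnet Mask      : 255.255.255.0
Maximum Users    : 50
```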
Network Scenario
Result: The DHCP DNS and E-mail Server are successfully configured.
EXPERIMENT-8
Aim: Configure Wireless network in Cisco Packet Tracer
Theory:
A wireless router is a device that enables wireless network packet forwarding and routing, and
serves as an access point in a local area network. It works much like a wired router but replaces
wires with wireless radio signals to communicate within and to external network environments.
It can function as a switch and as an Internet router and access point
A wireless router is the router found in a wireless local area network (WLAN) for home and
small office networks. It enables Internet and local network access. Typically, the wireless router
is directly connected to a wired or wireless WAN. Users connected to the wireless router are able
to access the LAN as well as the external WAN, such as the Internet.
Theory: A virtual LAN (VLAN) is any broadcast domain that is partitioned and isolated in a
computer network at the data link layer (OSI layer 2). LAN is an abbreviation of local area
network.
To subdivide a network into virtual LANs, one configures a network switch or router. Simpler
network devices can only partition per physical port (if at all), in which case each VLAN is
connected with a dedicated network cable (and VLAN connectivity is limited by the number of
hardware ports available). More sophisticated devices can mark packets through tagging, so that
a single interconnect (trunk) may be used to transport data for multiple VLANs. Since VLANs
share bandwidth, a VLAN trunk might use link aggregation and/or quality of service
prioritization to route data efficiently.
VLANs allow network administrators to group hosts together even if the hosts are not on the
same network switch. This can greatly simplify network design and deployment, because VLAN
membership can be configured through software. Without VLANs, grouping hosts according to
their resource needs necessitates the labor of relocating nodes or rewiring data links
VLANs can be used to partition a local network into several distinctive segments, for example:
1. Production
2. Voice over IP
3. Network management
4. Storage area network (SAN)
5. Guest network
6. Demilitarized zone (DMZ)
7. Client separation (ISP, in a large facility, or in a datacenter)
A common infrastructure shared across VLAN trunks can provide a very high level of security
with great flexibility for a comparatively low cost. Quality of service schemes can optimize
traffic on trunk links for real-time (e.g. VoIP) or low-latency requirements (e.g. SAN).
In cloud computing VLANs, IP addresses, and MAC addresses on them are resources which end
users can manage. Placing cloud-based virtual machines on VLANs may be preferable to placing
them directly on the Internet to avoid security issues.
VLANs can logically group networks to decouple the users' network location from their physical
location. Technologies that can implement VLANs are:
• Asynchronous Transfer Mode (ATM)
• Fiber Distributed Data Interface (FDDI)
• Ethernet
• HiperSockets
• InfiniBand
Procedure:
1. Connect the switch with the computers.
2. Login to the PC.
3. Set the IP address of the switch and PC.
a. PC 1- 192.168.1.14
b. PC2- 192.168.1.15
c. PC3- 192.168.1.16
d. PC4- 192.168.1.17
4. Connect PCs to VLAN ports.
5. Ping to other ports.
6. Change the group of matrix and ping again to other ports.
7. Observe the pinging statistics and port status for all three configurations.
Host ID: Port A 192.168.1.3
[Blank port-connection matrices (Ports 1-9 and CPU), one per configuration, for recording the
port status in each of the three VLAN configurations.]
Theory: Telnet is an application protocol used on the Internet or local area network to provide a
bidirectional interactive text-oriented communication facility using a virtual terminal connection.
User data is interspersed in-band with Telnet control information in an 8-bit byte oriented data
connection over the Transmission Control Protocol (TCP).
Telnet was developed in 1969 beginning with RFC 15, extended in RFC 855, and standardized
as Internet Engineering Task Force (IETF) Internet Standard STD 8, one of the first Internet
standards. The name stands for "teletype network".
The term telnet is also used to refer to the software that implements the client part of the
protocol. Telnet client applications are available for virtually all computer platforms. Telnet is
used to establish a connection using the Telnet protocol, either with a command line client or
with a graphical interface.
Procedure/Implementation:
Design the Network First. Connect all the devices with each other.
Assign IP Addresses to all the PC’S. Double Click on the PC and click on the Desktop menu
item and click IP configuration. Assign the IP Addresses. Do the same for all the connected
PC’S.
Assign IP Addresses to interfaces of routers. Double click on the router and access the Command
prompt of router. Assign IP addresses to the interfaces being used.
If you want to manage the routers via a telnet session, you need to configure the enable password
or enable secret.
You can shift between your console and the remote console once the connection is set up.
Network Scenario
Configuration Telnet
Router>en
Router#conf t
Router(config)#line vty 0 4
Router(config-line)#password telnet
Router(config-line)#login
Configuring Enable Password
Router> en
Router#conf t
Router(config)#enable password cisco
Configuring Enable Secret
Router>en
Router#conf t
Router(config)#enable secret class
Testing Telnet Session
Router#telnet 192.1.12.x
Where x is your partner router
Shifting between your Console and Remote Console
Press CTRL-SHIFT-6-X
Theory: Network Address Translation (NAT) is the process where a network device, usually a
firewall, assigns a public address to a computer (or group of computers) inside a private network.
The main use of NAT is to limit the number of public IP addresses an organization or company
must use, for both economy and security purposes.
The most common form of network translation involves a large private network using addresses
in a private range (10.0.0.0 to 10.255.255.255, 172.16.0.0 to 172.31.255.255, or 192.168.0.0 to
192.168.255.255). The private addressing scheme works well for computers that only have to
access resources inside the network, like workstations needing access to file servers and printers.
Routers inside the private network can route traffic between private addresses with no trouble.
However, to access resources outside the network, like the Internet, these computers have to
have a public address in order for responses to their requests to return to them. This is where
NAT comes into play.
Internet requests that require Network Address Translation (NAT) are quite complex but happen
so rapidly that the end user rarely knows it has occurred. A workstation inside a network makes a
request to a computer on the Internet. Routers within the network recognize that the request is
not for a resource inside the network, so they send the request to the firewall. The firewall sees
the request from the computer with the internal IP. It then makes the same request to the Internet
using its own public address, and returns the response from the Internet resource to the computer
inside the private network. From the perspective of the resource on the Internet, it is sending
information to the address of the firewall. From the perspective of the workstation, it appears that
communication is directly with the site on the Internet. When NAT is used in this way, all users
inside the private network share the same public IP address when they access the Internet. That
means only one public address is needed for hundreds or even thousands of users.
Network Scenario
Initial IP Configuration in R1
Router>enable
Router# configure terminal
Router(config)#hostname R1
R1(config)#interface FastEthernet0/0
R1(config-if)#ip address 10.0.0.1 255.0.0.0
R1(config-if)#no shutdown
R1(config-if)#exit
R1#configure terminal
R1(config)#interface Serial0/0/0
R1(config-if)#ip address 100.0.0.1 255.0.0.0
R1(config-if)#clock rate 64000
R1(config-if)#no shutdown
R1(config-if)#exit
Initial IP Configuration in R2
Router>enable
Router#configure terminal
Router(config)#hostname R2
R2(config)#interface FastEthernet0/0
R2(config-if)#ip address 192.168.1.1 255.255.255.0
R2(config-if)#no shutdown
R2(config-if)#exit
R2(config)#interface Serial0/0/0
R2(config-if)#ip address 100.0.0.2 255.0.0.0
R2(config-if)#no shutdown
R2(config-if)#exit
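The listings above only set up addressing; the translation itself still has to be enabled on R1. A sketch of NAT overload (PAT) on the serial interface follows; the access-list number and the use of overload are assumptions, since the document does not show the exact NAT commands used:

```
R1(config)#access-list 1 permit 10.0.0.0 0.255.255.255
R1(config)#ip nat inside source list 1 interface Serial0/0/0 overload
R1(config)#interface FastEthernet0/0
R1(config-if)#ip nat inside
R1(config-if)#exit
R1(config)#interface Serial0/0/0
R1(config-if)#ip nat outside
R1(config-if)#exit
```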
Theory: The version of the protocol (now called "Pure ALOHA", and the one implemented in
ALOHAnet) was quite simple:
• If, while you are transmitting data, you receive any data from another station, there has
been a message collision. All transmitting stations will need to try resending "later".
To assess Pure ALOHA, there is a need to predict its throughput, the rate of (successful)
transmission of frames. Let G denote the average number of transmission-attempts per
frame-time T.
For any frame-time, the probability of there being k transmission-attempts during that frame-time
is given by the Poisson distribution:
P(k) = (G^k * e^(-G)) / k!
The average number of transmission-attempts for 2 consecutive frame-times is 2G. Hence, for
any pair of consecutive frame-times, the probability of there being k transmission-attempts
during those two frame-times is:
P(k) = ((2G)^k * e^(-2G)) / k!
Therefore, the probability P(0) of there being zero transmission-attempts between t-T and t+T
(and thus of a successful transmission for us) is:
P(0) = e^(-2G)
An improvement to the original ALOHA protocol was "Slotted ALOHA", which introduced
discrete timeslots and increased the maximum throughput. A station can start a transmission only
at the beginning of a timeslot, and thus collisions are reduced. In this case, only
transmission-attempts within 1 frame-time and not 2 consecutive frame-times need to be
considered, since collisions can only occur during each timeslot. Thus, the probability of there
being zero transmission-attempts by other stations in a single timeslot is:
P(0) = e^(-G)
The throughput is:
S = G * e^(-2G) for Pure ALOHA (maximum 1/(2e) ≈ 0.184 at G = 0.5), and
S = G * e^(-G) for Slotted ALOHA (maximum 1/e ≈ 0.368 at G = 1).
Procedure:
• Click on the MAC experiment icon twice from the desktop on both PC’s.
• Click on the configuration button in both the PC’s.
• After setting the configuration menu, click the OK button and download the driver to the
NIU using the BOOT button command. Booting any one of the applications is enough.
• Run the experiment by clicking the ! button or by choosing RUN->Start from each application.
• View the statistics window for results.
• Note down the readings once the experiment is completed.
• Repeat the above steps for various values.
• Calculate the practical offered load from the formula given below and plot the graph between
the practical offered load and throughput.
• Repeat the experiment for various values of packet length, Node, data rate.
Tabulation 1 (rows 1-10 to be filled during the experiment):
S.no  IPD  TX1  CX1  TX2  CX2  TX3  CX3  TX4  CX4
Tabulation 2 (rows 1-10 to be filled during the experiment):
S.NO  X  G
Result: The ALOHA protocol for packet communication between a number of nodes connected to a
common bus has been implemented.
Inference: As we can see from the graph, as the offered load (G) decreases the throughput
values (X) increase, since a lower G corresponds to a larger inter-packet delay (IPD); but once
the IPD reaches a very high value, the number of transmitted packets also falls, and hence the
throughput (X) decreases again.
EXPERIMENT-13
Aim: Implement the CSMA protocol for packet communication between a number
of nodes connected to a communication bus.
Theory: Carrier-sense multiple access (CSMA) is a media access control (MAC) protocol in
which a node verifies the absence of other traffic before transmitting on a shared transmission
medium, such as an electrical bus or a band of the electromagnetic spectrum. A transmitter
attempts to determine whether another transmission is in progress before initiating a transmission
using a carrier-sense mechanism. That is, it tries to detect the presence of a carrier signal from
another node before attempting to transmit. If a carrier is sensed, the node waits for the
transmission in progress to end before initiating its own transmission. Using CSMA, multiple
nodes may, in turn, send and receive on the same medium. Transmissions by one node are
generally received by all other nodes connected to the medium.
Procedure:
• Click on MAC experiment icon twice from the desktop on both PC’s.
• Click the configuration button in the window in both the PC’s.
G = (N * P) / (C * ta)
(where N is the number of nodes, P the packet length in bits, C the channel data rate, and ta
the inter-packet delay)
• Click the OK button and download the driver to the NIU using the BOOT button command.
• Run the experiment by clicking the ! button.
• View the statistics window for results.
• Only Tx packets and the collision count are taken into account for MAC calculations. Note
down the readings.
• Repeat the above steps for various values of ta.
• Calculate the practical offered load from the formula and plot the graph between the
practical offered load and throughput.
Result: The CSMA protocol for packet communication between a number of nodes connected to
a common bus has been implemented.
Inference: We can see from the graph that as the offered load increases, the throughput first
increases and then starts decreasing. Likewise, as the inter-packet delay increases, the
throughput also increases, but after a certain value of IPD the throughput starts decreasing.
We can say that CSMA is better than ALOHA, as it senses whether the channel is free before
data transmission and so avoids collisions.
Experiment-14
Aim: To study reliable data transfer between two nodes over an unreliable network
using stop and wait protocol.
Theory: Typically the transmitter adds a redundancy check number to the end of each frame. The
receiver uses the redundancy check number to check for possible damage. If the receiver sees
that the frame is good, it sends an ACK. If the receiver sees that the frame is damaged, the
receiver discards it and does not send an ACK, pretending that the frame was completely lost,
not merely damaged.
One problem is when the ACK sent by the receiver is damaged or lost. In this case, the sender
doesn't receive the ACK, times out, and sends the frame again. Now the receiver has two copies
of the same frame, and doesn't know if the second one is a duplicate frame or the next frame of
the sequence carrying identical DATA.
Another problem is when the transmission medium has such a long latency that the sender's
timeout runs out before the frame reaches the receiver. In this case the sender resends the same
packet. Eventually the receiver gets two copies of the same frame, and sends an ACK for each
one. The sender, waiting for a single ACK, receives two ACKs, which may cause problems if it
assumes that the second ACK is for the next frame in the sequence.
To avoid these problems, the most common solution is to define a 1 bit sequence number in the
header of the frame. This sequence number alternates (from 0 to 1) in subsequent frames. When
the receiver sends an ACK, it includes the sequence number of the next packet it expects. This
way, the receiver can detect duplicated frames by checking if the frame sequence numbers
alternate. If two subsequent frames have the same sequence number, they are duplicates, and the
second frame is discarded. Similarly, if two subsequent ACKs reference the same sequence
number, they are acknowledging the same frame.
Stop-and-wait ARQ is inefficient compared to other ARQs, because the time between packets, if
the ACK and the data are received successfully, is twice the transit time (assuming the
turnaround time can be zero). The throughput on the channel is a fraction of what it could be. To
solve this problem, one can send more than one packet at a time with a larger sequence number
and use one ACK for a set. This is what is done in Go-Back-N ARQ and the Selective Repeat
ARQ.
Procedure:
• Click on the stop and wait icon from the desktop on both PC's.
• Click the configuration button in the window in both PC's.
• Set the IPD to 400 ms.
• Click the OK button and download the driver to the NIU using the BOOT button command.
Booting from any one of the applications is enough.
• Run the experiment by clicking the ! button.
• Set the timeout value to 500 ms.
• Note down the number of successfully transmitted packets.
• Repeat the above steps for various timeout values and plot the graph between timeout value
and throughput. Find the optimum value from the plot.
Result: Stop and wait protocol for transferring data between two nodes over an unreliable
network has been studied and implemented.
Inference: It is observed from the graph that as the timeout value increases, the throughput also
increases, so the two have a direct relation: a larger timeout allows more packets to be
successfully transmitted (Tx). After a certain timeout value, however, the throughput becomes
constant, and that value of timeout is the most preferable one. In this experiment, the most
preferable timeout value is 1500 ms.
Experiment-15
Aim: Provide reliable data transfer between two nodes over an unreliable network
using the sliding window selective repeat protocol.
Apparatus Required: Benchmark LAN TRAINER KIT, Connecting wires, PC interface.
Theory: Selective Repeat is part of the automatic repeat-request (ARQ). With selective repeat,
the sender sends a number of frames specified by a window size even without the need to wait
for individual ACK from the receiver as in Go-Back-N ARQ. The receiver may selectively reject
a single frame, which may be retransmitted alone; this contrasts with other forms of ARQ, which
must send every frame from that point again. The receiver accepts out-of-order frames and
buffers them. The sender individually retransmits frames that have timed out.
It may be used as a protocol for the delivery and acknowledgement of message units, or it may
be used as a protocol for the delivery of subdivided message sub-units.
When used as the protocol for the delivery of messages, the sending process continues to send a
number of frames specified by a window size even after a frame loss. Unlike Go-Back-N ARQ,
the receiving process will continue to accept and acknowledge frames sent after an initial error;
this is the general case of the sliding window protocol with both transmit and receive window
sizes greater than 1.
The receiver process keeps track of the sequence number of the earliest frame it has not received,
and sends that number with every acknowledgement (ACK) it sends. If a frame from the sender
does not reach the receiver, the sender continues to send subsequent frames until it has emptied
its window. The receiver continues to fill its receiving window with the subsequent frames,
replying each time with an ACK containing the sequence number of the earliest missing frame.
Once the sender has sent all the frames in its window, it re-sends the frame number given by the
ACKs, and then continues where it left off.
The size of the sending and receiving windows must be equal, and at most half the maximum
sequence number (assuming that sequence numbers are numbered from 0 to n−1), to avoid
miscommunication in all cases of packets being dropped. For example, with 3-bit sequence
numbers (0 to 7), the window size must be at most 4. To understand this, consider the case
when all ACKs are destroyed. If the receiving window is larger than half the maximum sequence
number, some, possibly even all, of the packets that are present after timeouts are duplicates that
are not recognized as such. The sender moves its window for every packet that is
acknowledged.[1]
When used as the protocol for the delivery of subdivided messages it works somewhat
differently. In non-continuous channels where messages may be variable in length, standard
ARQ or Hybrid ARQ protocols may treat the message as a single unit. Alternately selective
retransmission may be employed in conjunction with the basic ARQ mechanism where the
message is first subdivided into sub-blocks (typically of fixed length) in a process called packet
segmentation. The original variable length message is thus represented as a concatenation of a
variable number of sub-blocks. While in standard ARQ the message as a whole is either
acknowledged (ACKed) or negatively acknowledged (NAKed), in ARQ with selective
transmission the ACK response would additionally carry a bit flag indicating the identity of each
sub-block successfully received. In ARQ with selective retransmission of sub-divided messages
each retransmission diminishes in length, needing to only contain the sub-blocks that were
lost.
In most channel models with variable length messages, the probability of error-free reception
diminishes in inverse proportion with increasing message length. In other words, it's easier to
receive a short message than a longer message. Therefore, standard ARQ techniques involving
variable length messages have increased difficulty delivering longer messages, as each repeat is
the full length. Selective re-transmission applied to variable length messages completely
eliminates the difficulty in delivering longer messages, as successfully delivered sub-blocks are
retained after each transmission, and the number of outstanding sub-blocks in following
transmissions diminishes. A form of selective repeat is also used by TCP through its selective
acknowledgement (SACK) option.
Procedure:
• Click the OK button and download the driver to the NIU using the Boot button command.
Booting from any one of the applications is enough.
• Run the experiment by clicking the ! button or by choosing Run->Start from each application.
• Repeat the above steps for various timeout values and plot a graph between timeout value
and throughput. Find the optimum timeout value.
Tabulations:
Inference: This protocol is better than Go-Back-N, as the throughput and efficiency improve. As
the timeout value increases, the throughput also increases, so the two have a direct relation.
After some time, as the timeout value increases further, the number of transmitted packets
saturates and hence the throughput becomes constant.