SVI

Traditionally, switches forwarded traffic only between hosts within the same broadcast domain (a single VLAN), while routers handled traffic between different broadcast domains (different VLANs).
This meant that network devices in different broadcast domains could not communicate
without a router.

With SVIs, the switch uses virtual Layer 3 interfaces to route traffic between VLANs and to other
Layer 3 interfaces, eliminating the need for a separate physical router.

VLANs reduce the load on a network by dividing a LAN into smaller segments and keeping
local traffic within a VLAN. However, because each VLAN is its own broadcast domain, a mechanism
is needed for VLANs to pass data to other VLANs without sending the data through an external router.

The solution is to use a switched virtual interface (SVI). SVIs are found on both Layer 3 and
Layer 2 switches. With SVIs, the switch recognizes packet destinations that are local
to the sending VLAN and switches those packets, while packets destined for other VLANs
are routed.

There is a one-to-one mapping between a VLAN and an SVI, so only a single SVI can be
mapped to each VLAN. By default, an SVI is created for the default VLAN (VLAN 1) to
permit remote switch administration.

In most typical designs, the hosts' default gateway points to the switch's SVI, and the switch
then routes the packets to the rest of the Layer 3 domain.

Note: An SVI cannot be activated unless the VLAN itself is created and at least one physical
port is associated with, and active in, that VLAN. If the VLAN is not created, there is no
spanning-tree instance running for it, so the line protocol of the SVI remains down.
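
As a minimal sketch (assuming a Cisco IOS-style Layer 3 switch, with a hypothetical VLAN 10 and addressing), the SVI only comes up once the VLAN exists and has an active member port:

  ! Create the VLAN and put at least one physical port in it
  vlan 10
   name USERS
  interface GigabitEthernet0/1
   switchport mode access
   switchport access vlan 10
  ! Create the SVI that acts as the hosts' default gateway
  interface Vlan10
   ip address 192.168.10.1 255.255.255.0
   no shutdown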

SVIs are generally configured for a VLAN for the following reasons:

 Allow traffic to be routed between VLANs by providing a default gateway for the VLAN.
 Provide fallback bridging (if required for non-routable protocols).
 Provide Layer 3 IP connectivity to the switch.
 Support bridging configurations and routing protocols.

VRRP
The Virtual Router Redundancy Protocol (VRRP) is a computer networking protocol that provides
for automatic assignment of available Internet Protocol (IP) routers to participating hosts. This
increases the availability and reliability of routing paths via automatic default gateway selections on
an IP subnetwork.
The protocol achieves this by creation of virtual routers, which are an abstract representation of
multiple routers, i.e. master and backup routers, acting as a group. The default gateway of a
participating host is assigned to the virtual router instead of a physical router. If the physical router
that is routing packets on behalf of the virtual router fails, another physical router is selected to
automatically replace it. The physical router that is forwarding packets at any given time is called the
master router.
VRRP provides information on the state of a router, not the routes processed and exchanged by that
router. Each VRRP instance is limited, in scope, to a single subnet. It does not advertise IP routes
beyond that subnet or affect the routing table in any way. VRRP can be used
in Ethernet, MPLS and token ring networks with Internet Protocol Version 4 (IPv4), as well as IPv6.
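
A minimal sketch of what this looks like on two Cisco IOS-style routers sharing a hypothetical virtual address 192.168.1.254 (group number, addressing, and priorities are illustrative):

  ! Router A - higher priority, normally the master
  interface GigabitEthernet0/0
   ip address 192.168.1.2 255.255.255.0
   vrrp 10 ip 192.168.1.254
   vrrp 10 priority 120
  ! Router B - default priority (100), takes over if Router A fails
  interface GigabitEthernet0/0
   ip address 192.168.1.3 255.255.255.0
   vrrp 10 ip 192.168.1.254

Hosts on the subnet simply use 192.168.1.254 as their default gateway; they never need to know which physical router currently owns it.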

HSRP
What is preemption in HSRP?
The standby preempt command enables the Hot Standby Router Protocol (HSRP) router with
the highest priority to immediately become the active router. Priority is determined first by the
configured priority value, and then by the IP address. In each case, a higher value means greater
priority.

What is a valid HSRP MAC address?


Explanation: With HSRP, two or more devices support a virtual router with a fictitious MAC address and
a unique IP address. There are two versions of HSRP. With HSRP version 1, the virtual router's MAC
address is 0000.0C07.ACxx, in which xx is the HSRP group number in hexadecimal; with HSRP version 2
it is 0000.0C9F.Fxxx, where xxx is the group number.
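
A minimal Cisco IOS-style sketch (group number and addressing are hypothetical); for group 10, the version 1 virtual MAC would be 0000.0C07.AC0A, since 0x0A = 10:

  interface GigabitEthernet0/0
   ip address 192.168.1.2 255.255.255.0
   standby 10 ip 192.168.1.254
   standby 10 priority 120
   ! Reclaim the active role whenever this router has the highest priority
   standby 10 preempt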

VRF
Virtual Routing and Forwarding (VRF) is an IP technology that allows multiple instances of a routing table
to coexist on the same router at the same time. Because the routing instances are independent, the
same or overlapping IP addresses can be used without conflict. "VRF" is also used to refer to a routing
table instance, of which one or more can exist per VPN on a Provider Edge (PE) router.

Put simply, a VRF is a separate routing table within a router. VRFs are to a router what VLANs are to a
switch. Using VRFs, it is possible to virtualize a single router into several instances, each of them
(relatively) independent of the others, allowing for overlapping subnets, separate instances of routing
protocols, a separate set of interfaces assigned to each VRF, and so on.
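
A minimal VRF-lite sketch on a Cisco IOS-style router, with two hypothetical customer VRFs that are free to use overlapping address space:

  ip vrf CUSTOMER_A
   rd 65000:1
  ip vrf CUSTOMER_B
   rd 65000:2
  ! The same subnet can be reused because each interface is bound to its own VRF
  interface GigabitEthernet0/0
   ip vrf forwarding CUSTOMER_A
   ip address 10.0.0.1 255.255.255.0
  interface GigabitEthernet0/1
   ip vrf forwarding CUSTOMER_B
   ip address 10.0.0.1 255.255.255.0

Each VRF then has its own routing table, inspected with show ip route vrf CUSTOMER_A rather than the global show ip route.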

Super VLAN
Inter-VLAN communication is achieved by configuring an IP address on the VLANIF interfaces. If a network
has a large number of VLANs, this requires an excessive number of IP addresses, since each VLAN needs its
own gateway address and subnet.

The concept of Super-VLANs was introduced to save IP address space. A Super-VLAN is a group of
sub-VLANs. It has a VLANIF interface, but no physical ports can be added to it. A sub-VLAN has physical
ports but no IP address assigned to its VLANIF interface. Packets cannot be forwarded between sub-VLANs
at Layer 2; if Layer 3 communication is needed from a sub-VLAN, it uses the IP address of the Super-VLAN
as the gateway address.

The configuration of the Super-VLAN can be broken into the following steps (a configuration sketch follows the list):
1. Create the SUPER VLAN as a regular VLAN
2. Enable the SUPER VLAN function with the aggregate-vlan command
3. Tie the VLANs (also called sub-VLANs) with the SUPER VLAN using the access-vlan command
4. Create the SVI for the SUPER VLAN ID
5. Last but not least, enable ARP Proxy for inter-vlan communication
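
A sketch of those steps in Huawei-style VRP syntax (the platform that uses VLANIF interfaces and the aggregate-vlan/access-vlan commands); VLAN numbers and addressing are illustrative, and exact syntax varies by model and software version:

  vlan batch 2 3
  vlan 10
   aggregate-vlan
   access-vlan 2 to 3
  interface Vlanif 10
   ip address 10.1.1.1 255.255.255.0
   arp-proxy inter-sub-vlan-proxy enable

Here VLAN 10 is the Super-VLAN and VLANs 2 and 3 are its sub-VLANs; hosts in the sub-VLANs use 10.1.1.1 as their gateway, and the last command enables the proxy ARP needed for hosts in different sub-VLANs to reach each other at Layer 3.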

Service VLAN / Customer VLAN


QinQ involves using multiple VLAN tags in an Ethernet header, so that one outer VLAN ID can carry up to
4096 inner VLAN IDs in a second tag. This makes for a simple and useful tunnelling strategy.

The first/inner tag is the one set by the customer, and the second/outer tag is set by the provider
network. It is common in the service provider industry to refer to these as the Customer VLAN (C-VLAN)
and the Service VLAN (S-VLAN).
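
A minimal sketch of a QinQ edge port in Cisco IOS-style syntax, where a hypothetical S-VLAN 100 carries all of one customer's C-VLANs across the provider network:

  ! Provider edge port facing the customer; frames received here
  ! (tagged or untagged) get an outer tag for S-VLAN 100
  interface GigabitEthernet0/1
   switchport access vlan 100
   switchport mode dot1q-tunnel
  ! Provider trunk carrying the S-VLAN across the core
  interface GigabitEthernet0/24
   switchport mode trunk
   switchport trunk allowed vlan 100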

How VLAN Translation Works


VLAN translation replaces an incoming C-VLAN tag with an S-VLAN tag instead of adding an additional
tag. The C-VLAN tag is therefore lost, so a single-tagged packet is normally untagged when it leaves the
S-VLAN (at the other end of the link). If an incoming packet has had Q-in-Q tunneling applied in advance,
VLAN translation replaces the outer tag and the inner tag is retained when the packet leaves the S-VLAN
at the other end of the link.

To configure VLAN translation, use the mapping swap statement at the [edit vlans interface] hierarchy
level.
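
A sketch of what that might look like in Junos-style set commands (the VLAN name, interface, and IDs are hypothetical, and the exact hierarchy varies by platform and release):

  set vlans SP-VLAN-200 vlan-id 200
  set vlans SP-VLAN-200 interface ge-0/0/1.0 mapping 100 swap

With this, a frame arriving on ge-0/0/1.0 tagged with C-VLAN 100 has that tag swapped for S-VLAN 200, and the swap is reversed on egress.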

GRE
However, there are a few things that can be a little rough to get the hang of at first. First, unlike
physical interfaces, tunnel interfaces are declared from configuration mode, and you can start
numbering them at any value. Like any other interface, you will want to assign the tunnel its own
IP address; since GRE tunnels operate at Layer 3, the two tunnel interfaces need IP addresses in
the same subnet so they can communicate. The only other requirements for configuring a GRE
tunnel are specifying the tunnel source and the tunnel destination. The tunnel source can be a
physical interface or an IP address; just keep in mind that the tunnel source needs to be local to
the router. The tunnel destination is the address of the remote router you are terminating the
tunnel to. Make sure the configured tunnel source and tunnel destination have connectivity to
each other, since it is between these addresses that the GRE tunnel will run.
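
A minimal two-router sketch in Cisco IOS-style syntax (interface names, public addresses, and the tunnel subnet are all hypothetical):

  ! Router A (public address 203.0.113.1 on Gi0/0)
  interface Tunnel0
   ip address 172.16.0.1 255.255.255.252
   tunnel source GigabitEthernet0/0
   tunnel destination 198.51.100.2
  ! Router B (public address 198.51.100.2 on Gi0/0)
  interface Tunnel0
   ip address 172.16.0.2 255.255.255.252
   tunnel source GigabitEthernet0/0
   tunnel destination 203.0.113.1

GRE over IP is the default tunnel mode, so nothing more is needed for the tunnel to come up once the two public addresses can reach each other.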

TUNNEL PROTECTION
One thing to keep in mind about IPsec tunnels is that they do not scale very well. After all, a
plain IPsec tunnel will not pass multicast traffic, so routing updates will not traverse the tunnel,
requiring you to rely on either RRI (reverse route injection) or static routes. So how do we get
over this little obstacle? We run a GRE tunnel and protect it with IPsec.
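
A hedged sketch of protecting the GRE tunnel above with an IPsec profile on a Cisco IOS-style router (the key, peer address, and crypto parameters are illustrative only):

  crypto isakmp policy 10
   encryption aes
   authentication pre-share
   group 14
  crypto isakmp key ExampleKey address 198.51.100.2
  crypto ipsec transform-set GRE-TS esp-aes esp-sha-hmac
   mode transport
  crypto ipsec profile GRE-PROT
   set transform-set GRE-TS
  ! Apply the profile to the existing GRE tunnel
  interface Tunnel0
   tunnel protection ipsec profile GRE-PROT

Routing-protocol multicast now travels inside the GRE encapsulation, while IPsec encrypts the GRE packets between the two tunnel endpoints.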

Use GRE where IP tunneling without privacy is required -- it's simpler and thus faster. But, use IPsec
ESP where IP tunneling and data privacy are required -- it provides security features that are not even
attempted by GRE.

So,

 IPsec stands for Internet Protocol Security, while GRE stands for Generic Routing Encapsulation.
 IPsec is a core security standard of the Internet protocol suite, while GRE is not.
 GRE can carry other routed protocols as well as IP packets in an IP network, while IPsec cannot.
 IPsec offers more security than GRE does because of its authentication feature.

 Simplicity – GRE tunnels lack mechanisms related to flow control and security by default. This lack of features can ease the configuration process. However, you probably don’t want to transfer data in an unencrypted form across a public network; therefore, GRE tunnels can be supplemented by the IPSec suite of protocols for security purposes. In addition, GRE tunnels can forward data from discontiguous networks through a single tunnel, which is something VPNs cannot do.
 Multicast traffic forwarding – GRE tunnels can be used to forward multicast traffic,
whereas a VPN cannot. Because of this, multicast traffic such as advertisements sent by
routing protocols can be easily transferred between remote sites when using a GRE
tunnel.

Class of Service
Class of Service (CoS) or Quality of Service (QoS) is a way to manage multiple traffic profiles
over a network by giving certain types of traffic priority over others. For example, you can give
voice traffic priority over email or HTTP traffic. CoS is normally offered by service providers within
an MPLS (Multiprotocol Label Switching) offering.

CoS is the classification of specific traffic at Layer 2 by manipulating the class of service bits in
the frame header. It effectively 'marks' the traffic so that QoS can use this
identification/classification as a means to actually manipulate the traffic according to your policy.
It is one way to identify traffic (along with ToS, ACLs, etc.) so that QoS knows what to manipulate
and how to manipulate it.
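
A minimal Cisco IOS MQC sketch of this idea (class, policy, and interface names are hypothetical): traffic arriving with CoS 5 is classified and re-marked with a DSCP value for Layer 3 QoS to act on.

  class-map match-all VOICE
   match cos 5
  policy-map MARK-IN
   class VOICE
    set dscp ef
  interface GigabitEthernet0/1
   service-policy input MARK-IN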

Unlike QoS (Quality of Service), CoS does not offer guarantees for bandwidth or delivery time;
it operates on a best-effort basis.

There are three main CoS technologies:

1. 802.1p Layer 2 Tagging
2. Type of Service (ToS)
3. Differentiated Services

Priority   Queue Index   802.1p Name
0          1 (lowest)    Best Effort
1          1 (lowest)    Background
2          1 (lowest)    Reserved
3          2             Excellent Effort
4          2             Controlled Load
5          3             Video
6          4 (highest)   Voice
7          4 (highest)   Network

Diffserv
Differentiated services or DiffServ is a computer networking architecture that specifies a simple
and scalable mechanism for classifying and managing network traffic and providing quality of
service (QoS) on modern IP networks. DiffServ can, for example, be used to provide low-latency to
critical network traffic such as voice or streaming media while providing simple best-effort service to
non-critical services such as web traffic or file transfers.
DiffServ uses a 6-bit differentiated services code point (DSCP) in the 8-bit differentiated services
field (DS field) in the IP header for packet classification purposes. The DS field replaces the
outdated IPv4 TOS field.

The committed information rate (CIR) is the bandwidth for a virtual circuit that an internet
service provider guarantees under normal conditions. The committed data rate (CDR) is
the payload portion of the CIR.
Above the CIR, an allowance of burstable bandwidth is often given, whose value can be expressed
as an additional rate, known as the excess information rate (EIR), or as an absolute
value, the peak information rate (PIR). The provider guarantees that the connection will always support
the CIR rate, and sometimes the EIR rate, provided that there is adequate bandwidth. The PIR, i.e.
the CIR plus the EIR, is either equal to or less than the speed of the access port into the network.
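
A two-rate policer expresses exactly this contract. As a hedged Cisco IOS MQC sketch (rates and marking values are illustrative): traffic up to the CIR conforms, traffic between the CIR and the PIR (i.e. within the EIR) is allowed but re-marked, and traffic above the PIR is dropped.

  policy-map WAN-CONTRACT
   class class-default
    police cir 10000000 pir 15000000 conform-action transmit exceed-action set-dscp-transmit af13 violate-action drop
  interface GigabitEthernet0/0
   service-policy output WAN-CONTRACT

Here the CIR is 10 Mbit/s and the PIR is 15 Mbit/s, so the EIR is the 5 Mbit/s between them.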

EtherType
EtherType is a two-octet field in an Ethernet frame. It is used to indicate
which protocol is encapsulated in the payload of the frame. The same field is also used to indicate
the size of some Ethernet frames. EtherType was first defined by the Ethernet II framing standard,
and later adapted for the IEEE 802.3 standard. This field is used by the data link layer to determine
which protocol to hand over the payload to on the receiving end.
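For example, an EtherType of 0x0800 indicates IPv4, 0x0806 ARP, 0x86DD IPv6, 0x8100 an 802.1Q VLAN
tag (C-tag), and 0x88A8 an 802.1ad service tag (S-tag); values of 1500 (0x05DC) or below are instead
interpreted as the length of an IEEE 802.3 frame.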

OSPF Designated Router
Rather than flooding LSAs to all of their OSPF neighbors, the routing devices on a multiaccess
network send their LSAs to the designated router. Each multiaccess network has a designated router, which
performs two main functions: it originates network link advertisements on behalf of the network, and it
forms adjacencies with all other routers on the network to keep their link-state databases synchronized.

VDC
The Nexus 7000 NX-OS software supports Virtual Device Contexts (VDCs), which allow the
partitioning of a single physical Nexus 7000 device into multiple logical devices.
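
A hedged NX-OS-style sketch (the VDC name and interface range are hypothetical, and interface-allocation rules depend on the line card and software version):

  vdc PROD
   allocate interface Ethernet1/1-8
  ! From the default VDC, attach to the new logical device with:
  ! switchto vdc PROD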

VPC
A virtual PortChannel (vPC) allows links that are physically connected to two
different Cisco Nexus™ 5000 Series devices to appear as a single PortChannel to a third
device. The third device can be a Cisco Nexus 2000 Series Fabric Extender or a switch, server,
or any other networking device.
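
A hedged sketch of the vPC pieces on one of the two Nexus peers (the domain ID, keepalive addressing, and port-channel numbers are hypothetical; the second peer mirrors this configuration):

  feature vpc
  vpc domain 10
   peer-keepalive destination 10.0.0.2 source 10.0.0.1
  ! Peer link between the two Nexus switches
  interface port-channel 1
   switchport mode trunk
   vpc peer-link
  ! Member port-channel toward the downstream device, which sees one logical PortChannel
  interface port-channel 20
   switchport mode trunk
   vpc 20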

FEX
A Fabric Extender (FEX for short) is a companion to a Nexus 5000 or Nexus 7000 switch. The
FEX, unlike a traditional switch, has no capability to store a forwarding table or run any control
plane protocols. It relies on its parent 5000/7000 to perform those functions.
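
A hedged NX-OS-style sketch of attaching a Nexus 2000 Fabric Extender to its parent switch (the FEX number and interfaces are hypothetical):

  feature fex
  ! Fabric uplink toward the Nexus 2000
  interface Ethernet1/10
   switchport mode fex-fabric
   fex associate 100
  ! Host-facing ports on the FEX then show up on the parent as
  ! interfaces such as Ethernet100/1/1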

Fabric Path
TRILL ("TRansparent Interconnection of Lots of Links") is an IETF Standard implemented by
devices called RBridges (routing bridges) or TRILL Switches.TRILL combines techniques from
bridging and routing and is the application of link state routing to the VLAN-aware customer-
bridging problem.
