
IP Precedence and DSCP Values


IP packets have a field called the Type of Service field (also known as the TOS
byte). The original idea behind the TOS byte was that we could specify a priority
and request a route for high throughput, low delay and high reliability service.
The TOS byte was defined back in 1981 in RFC 791, but the way we use it has
changed throughout the years. This makes it confusing to understand, since there is
a lot of terminology and some of it is not used anymore nowadays. In this tutorial I'll
explain everything there is to know about the TOS byte, IP precedence and DSCP
values.
Let's take a look at the TOS byte:

Above you see the IP header with all its fields, including the TOS byte.
Don't mix up TOS (Type of Service) and COS (Class of Service). The first one is found in the header
of an IP packet (layer 3) and the second one is found in the 802.1Q header (layer 2), where it's used
for Quality of Service on trunk links.

So what does this byte look like? We'll have to take some history lessons here...

IP Precedence
In the beginning, the 8 bits of the TOS byte were defined like this:

The first 3 bits are used to define a precedence. The higher the value, the more
important the IP packet is; in case of congestion, the router will drop the low
priority packets first. The type of service bits are used to specify what kind of delay,
throughput and reliability we want.
It's somewhat confusing that we have a type of service byte and that bits 3-7 are called the type of
service bits. Don't mix them up, these are two different things.
Here's a list of the bits and the possible combinations:
Precedence:

000  Routine
001  Priority
010  Immediate
011  Flash
100  Flash Override
101  Critic/Critical
110  Internetwork Control
111  Network Control

Type of Service:

Bit 3:   0 = normal delay, 1 = low delay
Bit 4:   0 = normal throughput, 1 = high throughput
Bit 5:   0 = normal reliability, 1 = high reliability
Bit 6-7: reserved for future use

This is what they came up with in 1981, but the funny thing is that the type of
service bits that specify delay, throughput and reliability have never really been
used. Only the precedence bits are used to assign a priority to IP packets.
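The bit layout above is easy to check in code. Here's a small Python sketch (a hypothetical helper of my own, not part of any standard library) that splits an RFC 791 TOS byte into its fields:

```python
# Hypothetical helper: split an RFC 791 TOS byte into precedence (bits 0-2)
# and the delay/throughput/reliability flags (bits 3-5).
def parse_tos(tos: int):
    precedence = (tos >> 5) & 0b111            # first three bits
    low_delay = bool(tos & 0b00010000)         # bit 3
    high_throughput = bool(tos & 0b00001000)   # bit 4
    high_reliability = bool(tos & 0b00000100)  # bit 5
    return precedence, low_delay, high_throughput, high_reliability

# Precedence 5 (Critic/Critical) with the low-delay flag set: 101 1 0 0 00
print(parse_tos(0b10110000))  # (5, True, False, False)
```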
About 10 years later, in 1992, RFC 1349 changed the definition of the TOS byte to
look like this:

The first 3 precedence bits remain unchanged, but the type of service bits have
changed. Instead of 5 bits, we now only use 4 bits for the type of service, and
the final bit is called MBZ (Must Be Zero). This bit isn't used; the RFC says it has only
been used for experiments and routers will ignore it. The type of service bits
now look like this:
1000  minimize delay
0100  maximize throughput
0010  maximize reliability
0001  minimize monetary cost
0000  normal service

With the old 5-bit type of service field you could flip some switches and have an IP
packet that requested low delay and high throughput. With the newer 4-bit type
of service field you have to choose one of the 5 options. Good thinking, but these type
of service bits have never really been used either...
So what do we actually use nowadays?

Differentiated Services
The year is 1998 and 6 years have passed since the last changes to the TOS
byte. RFC 2474 is created, which describes a different TOS byte. The TOS byte gets a
new name and is now called the DS field (Differentiated Services), and the 8 bits
have changed as well. Here's what it looks like now:

The first 6 bits of the DS field are used to set a codepoint that will affect the PHB
(Per Hop Behavior) at each node. The codepoint is also what we call the DSCP value.

Let me rephrase this in plain English...


The codepoint is similar to the precedence that we used in the TOS byte; it's used to set
a certain priority.
PHB is another fancy term that we haven't seen before, so it requires some more
explanation. Imagine we have a network with 3 routers in a row, something like
this:

Above we have two phones and 3 routers. When we configure QoS to prioritize the
VoIP packets, we have to do it on all devices. When R1 and R3 are configured to
prioritize VoIP packets while R2 treats them like any other IP packets, we can still
experience issues with the quality of our phone call when there is congestion on R2.
To make QoS work, it has to be configured end-to-end. All devices in the path
should prioritize the VoIP packets to make it work. There are two methods to do
this:

Use reservations, each device in the network will reserve bandwidth for the phone call that
we are about to make.

Configure each device separately to prioritize the VoIP packets.


Making a reservation sounds like a good idea since you can guarantee that we can
make the phone call. It's not a very scalable solution, however, since you have to
make reservations for each phone call that you want to make. What if one of the
routers loses its reservation information? The idea of using reservations to enforce
end-to-end QoS is called IntServ (Integrated Services).
The opposite of IntServ is DiffServ (Differentiated Services), where we configure
each device separately to prioritize certain traffic. This is a scalable solution since
the network devices don't have to exchange and remember any reservation
information. Just make sure that you configure each device correctly and that's it!
With 6 bits for codepoints we can create a lot of different priorities; in theory, there
are 64 possible values that we can choose from.
The idea behind PHB (Per Hop Behavior) is that packets that are marked with a
certain codepoint will receive a certain QoS treatment (for example
queuing, policing or shaping). Throughout the years, there have been some changes
to the PHBs and how we use the codepoints. Let's walk through all of them...

Default PHB
The default PHB means that we have a packet that is marked with a DSCP value of
000000. This packet should be treated as best effort.

Class-Selector PHB
There was a time when some older network devices would only support IP
precedence while newer network devices would use differentiated services. To make
sure the two are compatible, we have the class-selector codepoints. Here's what it
looks like:

We only use the first three bits, just like we did with IP precedence. Here is a list of
the possible class-selector codepoints that we can use:
Class selector name   DSCP value   IP Precedence value   IP Precedence name
Default / CS0         000000       000                   Routine
CS1                   001000       001                   Priority
CS2                   010000       010                   Immediate
CS3                   011000       011                   Flash
CS4                   100000       100                   Flash Override
CS5                   101000       101                   Critic/Critical
CS6                   110000       110                   Internetwork Control
CS7                   111000       111                   Network Control

As you can see, CS1 is the same as "priority" and CS4 is the same as "flash
override". We can use this for compatibility between the "old" TOS byte and the
"new" DS field.
The default PHB and these class-selector PHBs are both described in RFC 2474 from
1998.
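Since a class-selector codepoint only uses the first three bits, the mapping between IP precedence and the CS values in the table above is just a 3-bit left shift. A quick Python sketch (my own illustration, not part of any standard tooling):

```python
# A class-selector codepoint is the IP precedence value shifted into the
# top three bits of the six-bit DSCP field; the lower three bits stay 0.
def cs_dscp(precedence: int) -> int:
    return precedence << 3

for p in range(8):
    print(f"CS{p} = {cs_dscp(p):06b} (decimal {cs_dscp(p)})")
# CS5, for example, comes out as 101000, decimal 40.
```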

Assured Forwarding PHB


About a year later, RFC 2597 arrived, which describes assured forwarding. The AF
(Assured Forwarding) PHB has two functions:

1. Queueing
2. Congestion avoidance

There are 4 different classes and each class will be placed in a different queue;
within each class there is also a drop probability. When the queue is full, packets
with a "high drop" probability will be deleted from the queue before the other
packets. In total there are 3 levels of drop precedence. Here's what the DS field
looks like:

The first 3 bits are used to define the class and the next 3 bits are used to define
the drop probability. Here are all the possible values that we can use:

Drop     Class 1          Class 2          Class 3          Class 4
Low      AF11 (001010)    AF21 (010010)    AF31 (011010)    AF41 (100010)
Medium   AF12 (001100)    AF22 (010100)    AF32 (011100)    AF42 (100100)
High     AF13 (001110)    AF23 (010110)    AF33 (011110)    AF43 (100110)

Class 4 has the highest priority. For example, any packet from class 4 will always get
better treatment than a packet from class 3.
Some vendors prefer to use decimal values instead of AF11, AF32, etc. A quick way to convert the
AF value to a decimal value is by using the 8x + 2y formula, where x = class and y = drop
probability. For example, AF31 in decimal is 8 x 3 + 2 x 1 = 26.

Expedited Forwarding
The EF (Expedited Forwarding) PHB also has two functions:

1. Queueing
2. Policing

The goal of expedited forwarding is to put packets in a queue where they
experience minimal delay and packet loss. This is where you want the packets of
your real-time applications (like VoIP) to be. To enforce this we use something
called a priority queue. Whenever there are packets in the priority queue, they will
be sent before all other queues. This is also a risk: there's a chance that the other
queues won't get a chance to send their packets, so we need to set a "rate limit" for
this queue. This is done with policing.
The DSCP value is normally called "EF"; in binary it is 101110 and the decimal value
is 46.

The real world


You should now have a good understanding of the difference between IP precedence
and DSCP values. It's quite a long story, right?
There's one thing that I should mention. We talked a lot about PHB (Per Hop Behavior), and the word
"behavior" makes it sound like when you use a certain DSCP value, the router will automatically
queue, police or drop the packets. The funny thing is that your router won't do anything! We have to
configure the "actions" that the router will perform ourselves...

We have a lot of different values that we can use for the TOS byte... IP precedence,
CS, AF and EF. So what do we really use on our networks?
The short answer is that it really depends on the networking vendor. IP precedence
value 5 or DSCP EF is normally used for voice traffic, while IP precedence value 3,
DSCP CS3 or AF31 is used for call signaling.
See if your networking vendor has a Quality of Service design guide; they usually do,
and it will give you some examples of what values you should use.
I hope this tutorial has been helpful to understand the TOS byte, IP Precedence and
DSCP. If you have any questions, feel free to leave a comment.


QoS Classification on Cisco IOS Router

On most networks you will see a wide range of applications. Each application is
unique and has its own requirements when it comes to bandwidth, delay, jitter, etc.
For example, an FTP application used for backups of large files might require a lot
of bandwidth, but delay and jitter won't matter since it's not an interactive
application.
Voice over IP, on the other hand, doesn't require much bandwidth, but delay and
jitter are very important. When your delay is too high your calls will become
walkie-talkie conversations, and jitter screws up the sound quality.
To make sure each application gets the treatment that it requires, we have to
implement QoS (Quality of Service).
The first step when implementing QoS is classification; that's what this tutorial is all
about.
By default your router doesn't care what kind of IP packets it is forwarding... the only
important thing is looking at the destination IP address, doing a routing table
lookup and whoosh... the IP packet has been forwarded.

Before we can configure any QoS methods like queuing, policing or shaping, we
have to look at the traffic that is running through our router and identify (classify)
it so we know which application it belongs to. That's what classification is about.
Once the traffic has been classified, we will mark it and apply a QoS policy to it.
Marking and configuring QoS policies are a whole different story, so in this tutorial
we'll just stick to classification.
On IOS routers there are a couple of methods we can use for classification:

Header inspection
Payload inspection

There are quite a few fields in our headers that we can use to classify applications.
For example, telnet uses TCP port 23 and HTTP uses TCP port 80. Using header
inspection you can look for:

Layer 2: MAC addresses

Layer 3: source and destination IP addresses

Layer 4: source and destination port numbers and protocol

This is a really simple method of classification that works well but has some
downsides. For example, you can configure your router so that everything that uses
TCP and destination port 80 is HTTP, but it's possible that some other
applications (instant messaging, for example) are also using TCP port 80. Your
router will then perform the same action for IM and HTTP traffic.
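To make the downside concrete, here's a toy Python sketch of header-based classification (the names are my own, and this is of course not how IOS implements it). It matches only on protocol and destination port, so anything riding on TCP port 80 lands in the same class as HTTP:

```python
# Toy header-based classifier: it looks only at layer 4 information,
# so it cannot tell real HTTP apart from other traffic on TCP port 80.
def classify(protocol: str, dst_port: int) -> str:
    if protocol == "tcp" and dst_port == 23:
        return "TELNET"
    if protocol == "tcp" and dst_port == 80:
        return "HTTP"          # or an IM client that happens to use port 80!
    return "class-default"

print(classify("tcp", 80))    # HTTP
print(classify("tcp", 2300))  # class-default
```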
Payload inspection is more reliable as it does deep packet inspection. Instead of
just looking at layer 2/3/4 information, the router will look at the contents of the
payload and recognize the application. On Cisco IOS routers this is done
with NBAR (Network-Based Application Recognition).
When you enable NBAR on an interface, the router will inspect all incoming IP
packets and try to match them with signatures and attributes in the PDLM (Packet
Description Language Module). For example, NBAR can detect HTTP traffic no
matter what ports you are using, and it can also match on things like:

URL
MIME type (zip file, image, etc)
User-agent (Mozilla, Opera, etc)

Since NBAR can see the URL, it is also commonly used to block websites and is a
popular choice for classification.
You should now have an idea of what classification is about. Let's look at some routers
and configure classification.

Configuration
We'll start with a simple example where I use an access-list to classify some telnet
traffic. Here's the topology that I will use:

R1 will be our telnet client and R2 the telnet server. We will classify the packets
when they arrive at R2. Let's look at the configuration!

Classification with access-list


First I have to create an access-list that matches on telnet traffic:
R2(config)#ip access-list extended TELNET
R2(config-ext-nacl)#permit tcp any any eq 23

This will match on all IP packets that use TCP as the transport protocol and
destination port 23. Normally when you configure an access-list for filtering, you
apply it to the interface. When configuring QoS we have to use the MQC (Modular
Quality of Service Command-Line Interface). The name is pretty spectacular but it's
a really simple method to configure QoS.

We use something called a policy-map where we configure the QoS actions we
want to perform: marking, queueing, policing, shaping, etc. These actions are
performed on a class-map, and that's where we specify the traffic. Let me show you
how this is done:
R2(config)#class-map TELNET
R2(config-cmap)#match ?
  access-group         Access group
  any                  Any packets
  class-map            Class map
  cos                  IEEE 802.1Q/ISL class of service/user priority values
  destination-address  Destination address
  discard-class        Discard behavior identifier
  dscp                 Match DSCP in IP(v4) and IPv6 packets
  flow                 Flow based QoS parameters
  fr-de                Match on Frame-relay DE bit
  fr-dlci              Match on fr-dlci
  input-interface      Select an input interface to match
  ip                   IP specific values
  mpls                 Multi Protocol Label Switching specific values
  not                  Negate this match result
  packet               Layer 3 Packet length
  precedence           Match Precedence in IP(v4) and IPv6 packets
  protocol             Protocol
  qos-group            Qos-group
  source-address       Source address
  vlan                 VLANs to match

I created a class-map called TELNET, and when you create a class-map you have a
lot of options. On top you see access-group, which uses an access-list to classify the
traffic; that's what I will use. Some other nice methods are the input interface,
frame-relay DLCI values, packet length, etc. The simplest option is probably the
access-list:
R2(config-cmap)#match access-group name TELNET

My class-map called TELNET now matches traffic that is specified in the access-list
called TELNET.

Now we can create a policy-map and refer to our class-map:


R2(config)#policy-map CLASSIFY
R2(config-pmap)#class TELNET

The policy-map is called CLASSIFY and the class-map called TELNET belongs to it.
Normally this is where I would also specify a QoS action like marking, queueing, etc. I'm
not configuring any action right now since this tutorial is only about classification.
Before the policy-map does anything, we have to attach it to an interface:
R2(config)#interface FastEthernet 0/0
R2(config-if)#service-policy input CLASSIFY

That's it, our router can now classify telnet traffic. Let's try it by telnetting from R1 to
R2:
R1#telnet 192.168.12.2
Trying 192.168.12.2 ... Open

Let's see what R2 thinks of this:


R2#show policy-map interface FastEthernet 0/0
 FastEthernet0/0

  Service-policy input: CLASSIFY

    Class-map: TELNET (match-all)
      11 packets, 669 bytes
      5 minute offered rate 0 bps
      Match: access-group name TELNET

    Class-map: class-default (match-any)
      3 packets, 206 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

Great! Our router sees the telnet traffic that arrives on the FastEthernet 0/0
interface. You can see the name of the policy-map, the class-map and the access-list
that we used. Something that you should remember is that all traffic that is not
matched by a class-map will hit the class-default class-map. Not too bad, right? Let's
see if we can also make this work with NBAR...

Classification with NBAR


The configuration of NBAR is quite easy. First let me show you a simple example of
NBAR where it shows us all traffic that is flowing through an interface:
R2(config)#interface FastEthernet 0/0
R2(config-if)#ip nbar protocol-discovery

Now you can view all traffic that is flowing through the interface:
R2#show ip nbar protocol-discovery
 FastEthernet0/0

  Last clearing of "show ip nbar protocol-discovery" counters 00:00:20

                            Input                    Output
                            -----                    ------
  Protocol                  Packet Count             Packet Count
                            Byte Count               Byte Count
                            5min Bit Rate (bps)      5min Bit Rate (bps)
                            5min Max Bit Rate (bps)  5min Max Bit Rate (bps)
  ------------------------  -----------------------  -----------------------
  telnet                    8                        7
                            489                      457
                            0                        0
                            0                        0
  unknown                   3                        2
                            180                      120
                            0                        0
                            0                        0
  Total                     11                       9
                            669                      577
                            0                        0
                            0                        0

I don't have a lot going on on this router but telnet is there. This is a nice way to see
the different traffic types on your interface but if we want to use this information
for QoS we have to put NBAR in a class-map. Here's how:
R2(config)#class-map NBAR-TELNET
R2(config-cmap)#match protocol ?
  3com-amp3      3Com AMP3
  3com-tsmux     3Com TSMUX
  3pc            Third Party Connect Protocol
  914c/g         Texas Instruments 914 Terminal
  9pfs           Plan 9 file service
  CAIlic         Computer Associates Intl License Server
  Konspire2b     konspire2b p2p network
  acap           ACAP
  acas           ACA Services
  accessbuilder  Access Builder
  accessnetwork  Access Network
  acp            Aeolon Core Protocol
  acr-nema       ACR-NEMA Digital Img
  aed-512        AED 512 Emulation service
  agentx         AgentX
  alpes          Alpes
  aminet         AMInet
  an             Active Networks
  anet           ATEXSSTR
  ansanotify     ANSA REX Notify
  ansatrader     ansatrader
  aodv           AODV
[output omitted]

I created a class-map called "NBAR-TELNET" and when I use match protocol you
can see there's a long list of supported applications. I'm not going to show all of it
but telnet is in there somewhere:
R2(config-cmap)#match protocol telnet

That's how we use NBAR in a class-map. Now we need to add this class-map to the
policy-map:
R2(config)#policy-map CLASSIFY
R2(config-pmap)#no class TELNET
R2(config-pmap)#class NBAR-TELNET

I'll remove the old class-map with the access-list and add the new class-map to our
policy-map.
I showed you how you can use the ip nbar protocol-discovery command. It's a great way to see the
traffic on the interface, but it's not a requirement for NBAR to work in a class-map. Using "match
protocol" in the class-map is enough for NBAR to work.

Now take a look at the policy-map in action:


R2#show policy-map interface FastEthernet 0/0
 FastEthernet0/0

  Service-policy input: CLASSIFY

    Class-map: NBAR-TELNET (match-all)
      9 packets, 549 bytes
      5 minute offered rate 0 bps
      Match: protocol telnet

    Class-map: class-default (match-any)
      3 packets, 180 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

The output is pretty much the same as when I used the access-list but the "match:
protocol telnet" reveals that we are using NBAR for classification this time.
That's all I have for now! I hope this tutorial helps you understand classification.
In other tutorials I will show you how to let your policy-map do something... things
like queueing, marking, shaping or policing. If you have any questions, feel free to
leave a comment.


QoS Marking on Cisco IOS Router

In this tutorial we'll take a look at marking packets. Marking means that we set the
TOS (Type of Service) byte with an IP Precedence value or DSCP value. If you have
no idea what precedence or DSCP is about, then you should read my IP Precedence
and DSCP values tutorial first. I'm also going to assume that you understand
what classification is; if you don't, read my classification tutorial first.
Marking on a Cisco Catalyst switch is a bit different than on a router; if you want to
know how to configure marking on your Cisco switch, then take a look at this tutorial.
Having said that, let's take a look at the configuration!

Configuration
I will use three routers to demonstrate marking, connected like this:

I will send some traffic from R1 to R3 and we will use R2 to mark our traffic. We'll
keep it simple and start by marking telnet traffic.
Let's create an access-list for classification:
R2(config)#ip access-list extended TELNET-TRAFFIC
R2(config-ext-nacl)#permit tcp any any eq telnet

Now we need to add the access-list to a class-map:


R2(config)#class-map TELNET-TRAFFIC
R2(config-cmap)#match access-group name TELNET-TRAFFIC

And we'll add the class-map to a policy-map:


R2(config)#policy-map MARKING
R2(config-pmap)#class TELNET-TRAFFIC
R2(config-pmap-c)#set ?
  atm-clp        Set ATM CLP bit to 1
  cos            Set IEEE 802.1Q/ISL class of service/user priority
  cos-inner      Set Inner CoS
  discard-class  Discard behavior identifier
  dscp           Set DSCP in IP(v4) and IPv6 packets
  fr-de          Set FR DE bit to 1
  ip             Set IP specific values
  mpls           Set MPLS specific values
  precedence     Set precedence in IP(v4) and IPv6 packets
  qos-group      Set QoS Group
  vlan-inner     Set Inner Vlan

There are quite a few options for the set command. When it comes to IP packets
we'll use the precedence or DSCP values. Let's start with precedence:
R2(config-pmap-c)#set precedence ?
  <0-7>           Precedence value
  cos             Set packet precedence from L2 COS
  critical        Set packets with critical precedence (5)
  flash           Set packets with flash precedence (3)
  flash-override  Set packets with flash override precedence (4)
  immediate       Set packets with immediate precedence (2)
  internet        Set packets with internetwork control precedence (6)
  network         Set packets with network control precedence (7)
  priority        Set packets with priority precedence (1)
  qos-group       Set packet precedence from QoS Group.
  routine         Set packets with routine precedence (0)

For this example it doesn't matter much what we pick. Let's go for IP precedence 7
(network):
R2(config-pmap-c)#set precedence network

Last but not least, we have to activate the policy-map:


R2(config)#interface FastEthernet 0/0
R2(config-if)#service-policy input MARKING

That's all there is to it. Let's see if it works. I'll telnet from R1 to R3:
R1#telnet 192.168.23.3
Trying 192.168.23.3 ... Open

Now look at R2:


R2#show policy-map interface FastEthernet 0/0
 FastEthernet0/0

  Service-policy input: MARKING

    Class-map: TELNET-TRAFFIC (match-all)
      10 packets, 609 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name TELNET-TRAFFIC
      QoS Set
        precedence 7
          Packets marked 10

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

That's looking good! 10 packets have been marked with precedence 7. That's not
too bad, right?
Let's see if we can also mark some packets with a DSCP value. Let's mark some HTTP
traffic:
R2(config)#ip access-list extended HTTP-TRAFFIC
R2(config-ext-nacl)#permit tcp any any eq 80

Create a class-map:
R2(config)#class-map HTTP-TRAFFIC
R2(config-cmap)#match access-group name HTTP-TRAFFIC

And we'll add it to the policy-map:


R2(config)#policy-map MARKING
R2(config-pmap)#class HTTP-TRAFFIC
R2(config-pmap-c)#set dscp ?
  <0-63>     Differentiated services codepoint value
  af11       Match packets with AF11 dscp (001010)
  af12       Match packets with AF12 dscp (001100)
  af13       Match packets with AF13 dscp (001110)
  af21       Match packets with AF21 dscp (010010)
  af22       Match packets with AF22 dscp (010100)
  af23       Match packets with AF23 dscp (010110)
  af31       Match packets with AF31 dscp (011010)
  af32       Match packets with AF32 dscp (011100)
  af33       Match packets with AF33 dscp (011110)
  af41       Match packets with AF41 dscp (100010)
  af42       Match packets with AF42 dscp (100100)
  af43       Match packets with AF43 dscp (100110)
  cos        Set packet DSCP from L2 COS
  cs1        Match packets with CS1(precedence 1) dscp (001000)
  cs2        Match packets with CS2(precedence 2) dscp (010000)
  cs3        Match packets with CS3(precedence 3) dscp (011000)
  cs4        Match packets with CS4(precedence 4) dscp (100000)
  cs5        Match packets with CS5(precedence 5) dscp (101000)
  cs6        Match packets with CS6(precedence 6) dscp (110000)
  cs7        Match packets with CS7(precedence 7) dscp (111000)
  default    Match packets with default dscp (000000)
  ef         Match packets with EF dscp (101110)
  qos-group  Set packet dscp from QoS Group.

Let's pick something... AF12 will do:


R2(config-pmap-c)#set dscp af12

Let's generate some traffic:


R3(config)#ip http server
R1#telnet 192.168.23.3 80
Trying 192.168.23.3, 80 ... Open

And check out the policy-map:


R2#show policy-map interface FastEthernet 0/0
 FastEthernet0/0

  Service-policy input: MARKING

    Class-map: TELNET-TRAFFIC (match-all)
      10 packets, 609 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name TELNET-TRAFFIC
      QoS Set
        precedence 7
          Packets marked 10

    Class-map: HTTP-TRAFFIC (match-all)
      3 packets, 180 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name HTTP-TRAFFIC
      QoS Set
        dscp af12
          Packets marked 3

    Class-map: class-default (match-any)
      99 packets, 5940 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

That's all there is to it...


There is one thing left I'd like to share with you. Some network devices like switches
or wireless controllers sometimes re-mark traffic. This can be a pain and it's
something you might want to check. On a Cisco IOS router it's simple to do
this... just create a policy-map and some class-maps that match on your precedence
or DSCP values. This allows you to quickly check whether you are receiving (correctly)
marked packets or not. Here's what I usually do:
R3(config)#class-map AF12
R3(config-cmap)#match dscp af12
R3(config)#class-map PREC7
R3(config-cmap)#match precedence 7
R3(config)#policy-map COUNTER
R3(config-pmap)#class AF12
R3(config-pmap-c)#exit
R3(config-pmap)#class PREC7
R3(config-pmap-c)#exit
R3(config)#interface FastEthernet 0/0
R3(config-if)#service-policy input COUNTER

I created two class-maps that match on DSCP AF12 or precedence 7 marked
packets. Take a look below:
R3#show policy-map interface FastEthernet 0/0
 FastEthernet0/0

  Service-policy input: COUNTER

    Class-map: AF12 (match-all)
      4 packets, 240 bytes
      5 minute offered rate 0 bps
      Match: dscp af12 (12)

    Class-map: PREC7 (match-all)
      12 packets, 729 bytes
      5 minute offered rate 0 bps
      Match: precedence 7

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

This proves that R3 is receiving our marked packets. In this scenario it's not a
surprise but when you do have network devices that mess with your markings, this
can be a relief to see.
Hopefully you enjoyed this tutorial... if you did, please use any of the share
buttons below.


QoS Pre-Classify on Cisco IOS

In this lesson you will learn about the QoS pre-classify feature. When you use
tunnelling, your Cisco IOS router will do classification based on the outer (post)
header, not the inner (pre) header. This can cause issues with QoS policies that are
applied to the physical interfaces. I will explain the issue and we will take a look at how
we can fix it. Here's the topology that we will use:

Below is the tunnel configuration. I'm using a static route so that R1 and R3 can
reach each other's loopback interfaces through the tunnel:
R1(config)#interface Tunnel 0
R1(config-if)#tunnel source 192.168.12.1
R1(config-if)#tunnel destination 192.168.23.3

R1(config-if)#ip address 172.16.13.1 255.255.255.0


R1(config)#ip route 3.3.3.3 255.255.255.255 172.16.13.3

The configuration on R3 is similar:


R3(config)#interface Tunnel 0
R3(config-if)#tunnel source 192.168.23.3
R3(config-if)#tunnel destination 192.168.12.1
R3(config-if)#ip address 172.16.13.3 255.255.255.0
R3(config)#ip route 1.1.1.1 255.255.255.255 172.16.13.1

The tunnel is up and running. Before we play with classification and service policies,
let's take a look at the default classification behaviour of Cisco IOS when it comes to
tunnelling...

Default Classification Behaviour


Cisco IOS will copy the information in the TOS (Type of Service) byte from the inner IP
header to the outer IP header by default. We can demonstrate this with a simple
ping. Here's how:
R1#ping
Protocol [ip]:
Target IP address: 3.3.3.3
Repeat count [5]:
Datagram size [100]:
Timeout in seconds [2]:
Extended commands [n]: y
Source address or interface: 1.1.1.1
Type of service [0]: 160
Set DF bit in IP header? [no]:
Validate reply data? [no]:
Data pattern [0xABCD]:
Loose, Strict, Record, Timestamp, Verbose[none]:
Sweep range of sizes [n]:
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 3.3.3.3, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4
ms

This ping between 1.1.1.1 and 3.3.3.3 will go through the tunnel, and I marked the
TOS byte of this IP packet with 160 (decimal). 160 in binary is 10100000; remove the
last two bits and you have our 6 DSCP bits. 101000 in binary is 40 in decimal, which
is the same as CS5.
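The conversion is just a two-bit right shift; here's a quick sketch to double-check the numbers:

```python
tos = 160      # the value we typed into the extended ping
dscp = tos >> 2  # drop the two low-order (non-DSCP) bits
print(f"{tos:08b} -> {dscp:06b} = decimal {dscp}")
# 10100000 -> 101000 = decimal 40, which is CS5
```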
Below you can see a Wireshark capture of this ping:

As you can see, Cisco IOS automatically copied the TOS byte from the inner IP
header to the outer IP header. This is a good thing. I'm using GRE in my example so
we can see both headers, but if this was an encrypted IPsec tunnel then we (and
any device in between) could only see the outer header.

When you have QoS policies based on the TOS byte you will have no
problems at all, because the TOS byte is copied from the inner to the outer header.

You will run into issues when you have policies based on access-lists that match on
source/destination addresses and/or port numbers. Let me give you an
example...

Post Header Classification


I'm going to create two class-maps, one for telnet traffic and another one for GRE
traffic. Both class-maps will use an access-list to classify traffic:
R1(config)#ip access-list extended TELNET
R1(config-ext-nacl)#permit tcp any any eq telnet
R1(config)#class-map TELNET
R1(config-cmap)#match access-group name TELNET
R1(config)#ip access-list extended GRE
R1(config-ext-nacl)#permit gre any any
R1(config)#class-map GRE
R1(config-cmap)#match access-group name GRE

The two class-maps will be used in a policy-map:


R1(config)#policy-map POLICE
R1(config-pmap)#class TELNET
R1(config-pmap-c)#police 128000
R1(config-pmap-c-police)#exit
R1(config-pmap-c)#exit
R1(config-pmap)#class GRE
R1(config-pmap-c)#exit
R1(config-pmap)#exit

I've added policing for telnet traffic and nothing for GRE. It doesn't matter what
actions we configure here; even without an action the traffic will still be classified
and it will show up in the policy-map. Let's activate it on the physical interface:
R1(config)#interface FastEthernet 0/0
R1(config-if)#service-policy output POLICE

Something to keep in mind is that when you enable a policy on the physical
interface, it will be applied to all tunnel interfaces. Let's generate some telnet
traffic between R1 and R3 so it goes through the tunnel:
R1#telnet 3.3.3.3 /source-interface loopback 0
Trying 3.3.3.3 ... Open

Now take a look at the policy-map:


R1#show policy-map interface FastEthernet 0/0
 FastEthernet0/0

  Service-policy output: POLICE

    Class-map: TELNET (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name TELNET
      police:
          cir 128000 bps, bc 4000 bytes
        conformed 0 packets, 0 bytes; actions:
          transmit
        exceeded 0 packets, 0 bytes; actions:
          drop
        conformed 0 bps, exceed 0 bps

    Class-map: GRE (match-all)
      11 packets, 735 bytes
      5 minute offered rate 0 bps
      Match: access-group name GRE

    Class-map: class-default (match-any)
      2 packets, 120 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

See how it only matches the GRE traffic? We don't have any matches for the telnet
traffic. If this was a real network, it means that telnet traffic will never get policed
(or any other action you configured). The reason that we don't see any matches is
that Cisco IOS first encapsulates the IP packet and then applies the policy to the
GRE traffic. Let me illustrate this:

The blue IP header on top is our original IP packet with telnet traffic. This is
encapsulated and the router adds a GRE header and a new IP header (the red one).
The policy-map is then applied to this outer IP header.
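The classification order can be sketched in a few lines of Python. This is purely illustrative (not Cisco code); the function names and the addresses in the GRE wrapper are made up for the example:

```python
# Minimal sketch of why post-encapsulation classification misses the
# telnet ACL: the policy only ever sees the outer (GRE) header.
def classify(packet):
    """Return the class-map name a packet would match (illustrative)."""
    if packet.get("protocol") == "tcp" and packet.get("dst_port") == 23:
        return "TELNET"
    if packet.get("protocol") == "gre":
        return "GRE"
    return "class-default"

def gre_encapsulate(inner):
    # The router wraps the original packet in a new IP/GRE header; the
    # inner addresses and ports are no longer visible to the classifier.
    # The tunnel endpoint addresses here are hypothetical.
    return {"protocol": "gre", "src": "192.168.12.1",
            "dst": "192.168.23.3", "payload": inner}

telnet = {"protocol": "tcp", "dst_port": 23,
          "src": "1.1.1.1", "dst": "3.3.3.3"}

print(classify(telnet))                   # before encapsulation: TELNET
print(classify(gre_encapsulate(telnet)))  # after encapsulation: GRE
```

Because the policy runs after encapsulation, every tunneled packet lands in the GRE class, which is exactly what the counters above show.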
How do we fix this? There are a couple of options... let's look at the first one!

Pre Header Classification (Physical Interface)


The first method to solve this issue is to enable pre-classification on the tunnel
interface. This tells the router to create a copy of the original IP header and to use
that for the policy. Here's how to do this:
R1(config)#interface Tunnel 0
R1(config-if)#qos pre-classify

You can use the qos pre-classify command to do this. Let's do another test and
we'll see the difference:
R1#clear counters
Clear "show interface" counters on all interfaces [confirm]
R1#telnet 3.3.3.3 /source-interface loopback 0
Trying 3.3.3.3 ... Open

Now take a look at the policy-map:


R1#show policy-map interface FastEthernet 0/0
 FastEthernet0/0

  Service-policy output: POLICE

    Class-map: TELNET (match-all)
      11 packets, 735 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name TELNET
      police:
          cir 128000 bps, bc 4000 bytes
        conformed 11 packets, 889 bytes; actions:
          transmit
        exceeded 0 packets, 0 bytes; actions:
          drop
        conformed 0 bps, exceed 0 bps

    Class-map: GRE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps
      Match: access-group name GRE

    Class-map: class-default (match-any)
      1 packets, 60 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

Great! Now we see matches on our telnet traffic so it can be policed if needed. We
don't see any matches on our GRE traffic anymore. Let me visualize what just
happened for you:

When the router encapsulates a packet, it will make a temporary copy of the
header. This temporary copy is then used for the policy instead of the outer header.
When this is done, the temporary copy is destroyed.
We accomplished this with the qos pre-classify command but there is another
method to get the same result, here's how...

Pre Header Classification (Tunnel Interface)


Instead of activating the policy on the physical interface we can also enable it on the
tunnel interface:
R1(config)#interface FastEthernet 0/0
R1(config-if)#no service-policy output POLICE
R1(config)#interface Tunnel 0
R1(config-if)#no qos pre-classify
R1(config-if)#service-policy output POLICE

Note that I also removed the qos pre-classify command on the tunnel interface.
Let's give it another try:
R1#clear counters
Clear "show interface" counters on all interfaces [confirm]
R1#telnet 3.3.3.3 /source-interface loopback 0
Trying 3.3.3.3 ... Open

Here's what you will see:


R1#show policy-map interface Tunnel 0
 Tunnel0

  Service-policy output: POLICE

    Class-map: TELNET (match-all)
      11 packets, 737 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name TELNET
      police:
          cir 128000 bps, bc 4000 bytes
        conformed 11 packets, 737 bytes; actions:
          transmit
        exceeded 0 packets, 0 bytes; actions:
          drop
        conformed 0 bps, exceed 0 bps

    Class-map: GRE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps
      Match: access-group name GRE

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any

If you enable the policy on the tunnel interface then the router will use the inner
header for classification, just like we saw when we used the qos pre-classify
command on the tunnel interface.
That's all there is to explain. I hope this lesson has been useful to understand the
difference between "outer" and "inner" header classification and how to deal with
this issue.


Why do we need QoS on LAN Switches?

Quality of Service (QoS) on our LAN switches is often misunderstood. Every now
and then people ask me why we need it since we have more than enough
bandwidth, and if we don't have enough bandwidth it's easier to add than on
our WAN links. If you use any real-time applications like Voice over IP on your
network then you should think about implementing QoS on your switches. Let me
show you what could go wrong with our switches. Here's an example:

Above you see a computer connected to Switch A with a Gigabit interface. Between
Switch A and Switch B there's also a Gigabit interface. Between Switch B and the
server there's only a FastEthernet link. In the picture above the computer is sending
400 Mbps of traffic towards the server. Of course the FastEthernet link only has a
bandwidth of 100 Mbps so traffic will be dropped. Another example of traffic drops
on our switches is something that might occur on Monday morning when all your
users are logging in at the same time. Let me show you a picture:

In the example above we have 3 computers connected to Switch A, B and C. These
switches are connected to Switch D. It's Monday morning and all users are
connecting to the server to log in. The traffic rate for each computer is about 70
Mbps. 3x 70 Mbps gives an aggregated traffic rate of 210 Mbps, which is more than
the Fa0/0 interface of Switch D can handle. As a result the buffer will fill up and
traffic is dropped. Is it a big problem that our traffic is dropped?
If this is a pure data network it wouldn't be much of a problem because most
traffic is TCP based. TCP will retransmit and, besides things being a bit slower,
everything will keep working. If we use real-time applications like Voice over IP or a
video conference stream we want to avoid this, as it will directly impact the quality
of our voice conversation or video stream.
In the Voice over IP world we use a DSP (Digital Signal Processor) to convert analog
audio to digital and vice versa. These DSPs are able to rebuild about 30 ms of audio
without you noticing it. Normally we put about 20 ms of audio in a single packet,
which means that only a single packet can be dropped before our voice quality is
degraded.
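A quick back-of-the-envelope calculation shows why a single lost packet is the limit here:

```python
# With ~30 ms of audio that a DSP can typically reconstruct and ~20 ms of
# audio carried per VoIP packet, how many consecutive lost packets can be
# concealed?  (Both numbers come from the text above; real codecs vary.)
conceal_ms = 30   # audio a DSP can rebuild without audible artifacts
packet_ms = 20    # audio carried in a single packet

max_lost = conceal_ms // packet_ms
print(max_lost)  # 1 -> losing two packets in a row (40 ms) exceeds the window
```

Two back-to-back drops mean 40 ms of missing audio, more than the DSP can conceal, so the listener hears the gap.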
It's impossible to fix these problems just by adding more bandwidth. By adding
more bandwidth you can reduce how often congestion happens but you can't
prevent it completely. A lot of data applications will try to consume as much
bandwidth as possible, so if the aggregated traffic rate exceeds one of your uplink
ports you will see congestion.
By configuring QoS we can tell our switches what traffic to prioritize in case of
congestion. When congestion occurs the switch will keep forwarding Voice over IP
traffic (up to a certain level that we configure) while our data traffic will be dropped.
In short, bandwidth is not a replacement for QoS. Using QoS we can ensure that
real-time applications keep working despite (temporary) congestion.


How to configure QoS trust boundary on Cisco Switches

When we configure QoS on our Cisco switches we need to think about our trust
boundary. Simply said, this means deciding on which device we are going to trust
the marking of the packets and Ethernet frames entering our network. If you are
using IP phones you can use those for marking and configure the switch to trust the
traffic from the IP phone. If you don't have any IP phones, or you don't trust them,
we can configure the switch to do the marking as well. In this article I'll show you
how to do both! First let me show you the different QoS trust boundaries:

In the picture above the trust boundary is at the Cisco IP phone. This means that we
won't remark any packets or Ethernet frames at the access layer switch; the IP
phone will mark all traffic. Note that the computer is outside of the QoS trust
boundary. This means that we don't trust the marking of the computer. We can
remark all its traffic on the IP phone if we want. Let's take a look at another picture:

In the picture above we don't trust whatever marking the IP phone sends to the
access layer switch. This means we'll do classification and marking on the access
layer switches. I have one more example for you...

Above you can see that we don't trust anything before the distribution layer
switches. This is something you won't see very often but it's possible if you don't
trust your access layer switches. Maybe someone else does management for the
access layer switches and you want to prevent them from sending marked packets
or Ethernet frames towards your distribution layer switches.
Let's take a look at a switch to see how we can configure this trust boundary. I have
a Cisco Catalyst 3560 that I will use for these examples. Before you do anything with
QoS, don't forget to enable it globally on your switch first:
3560Switch(config)#mls qos

Something you need to be aware of is that as soon as you enable QoS on your
switch, it will erase the marking of all packets that are received! If you don't want
this to happen you can use the following command:

3560Switch(config)#no mls qos rewrite ip dscp

Let's continue by looking at the first command. We can take a look at the QoS
settings for the interface with the show mls qos interface command. This will show
you if you trust the marking of your packets or frames:
3560Switch#show mls qos interface fastEthernet 0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none

Above you can see that we don't trust anything at the moment. This is the default
on Cisco switches. We can trust packets based on the DSCP value, frames based on
the CoS value, or we can trust the IP phone. Here are some examples:
3560Switch(config-if)#mls qos trust cos

Just type mls qos trust cos to ensure the interface trusts the CoS value of all frames
entering this interface. Let's verify our configuration:
3560Switch#show mls qos interface fastEthernet 0/1
FastEthernet0/1
trust state: trust cos
trust mode: trust cos
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none

By default your switch will overwrite the DSCP value of the packet inside your frame
according to the cos-to-dscp map. If you don't want this you can use the following
command:
3560Switch(config-if)#mls qos trust cos pass-through

The keyword pass-through will ensure that your switch won't overwrite the DSCP
value. Besides the CoS value we can also trust the DSCP value:
3560Switch(config-if)#mls qos trust dscp

Using the command above, the interface will not trust the CoS value but the DSCP
value of the packets arriving at the interface. Here's what it will look like:
3560Switch#show mls qos interface fastEthernet 0/1
FastEthernet0/1
trust state: trust dscp
trust mode: trust dscp
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none

Trusting the CoS or DSCP value on the interface will set your trust boundary at the
switch level. What if we want to set our trust boundary at the Cisco IP phone? We
need another command for that!
3560Switch(config-if)#mls qos trust device cisco-phone

Use the mls qos trust device cisco-phone command to tell your switch to trust all
CoS values that it receives from the Cisco IP phone:
3560Switch#show mls qos interface FastEthernet0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: dis
default COS: 0
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: cisco-phone

Maybe you are wondering how the switch knows the difference between a Cisco IP
phone and a phone from another vendor? CDP (Cisco Discovery Protocol) is used for
this. Now we trust the CoS value of the Cisco IP phone, but what about the computer
behind it? We have to do something about it... here's one way to deal with it:

3560Switch(config-if)#switchport priority extend cos

The command above will overwrite the CoS value of all Ethernet frames received
from the computer that is behind the IP phone. You'll have to set a CoS value
yourself. Of course we can also trust the computer; there's another command for
that:
3560Switch(config-if)#switchport priority extend trust

This will trust all the CoS values on the Ethernet frames that we receive from the
computer.
The commands above will let you trust traffic, but if we don't trust anything we can
also decide to mark or remark packets and Ethernet frames on the switch. This is
quite easy to do with the following command:
3560Switch(config-if)#mls qos cos 4

Just type mls qos cos to set a CoS value yourself. In the example above I set a
CoS value of 4 on all untagged frames. Any frame that is already tagged will not
be remarked with this command.
3560Switch#show mls qos interface FastEthernet0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: dis
default COS: 4
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none

Above you can see that the default CoS will be 4 but override (remarking)
is disabled. Marking Ethernet frames with this command is useful when you have a
computer or server that is unable to mark its own traffic. In case the Ethernet frame
already has a CoS value but we want to remark it, we'll have to do this:
3560Switch(config-if)#mls qos cos override

Use the keyword override to tell the switch to remark all traffic. If you receive
Ethernet frames that already have a CoS value then they will be remarked with
whatever CoS value you configured. Let's verify it:
3560Switch#show mls qos interface FastEthernet 0/1
FastEthernet0/1
trust state: not trusted
trust mode: not trusted
COS override: ena
default COS: 4
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none

Override (remarking) has been enabled. As a result all tagged and untagged
Ethernet frames will have a CoS value of 4. That's all there is to trusting the CoS,
DSCP or Cisco IP phone and (re)marking your traffic. If this article was useful to
you please leave a comment!


Classification and Marking on a Cisco Switch

When you are configuring QoS on your Cisco switches you are probably familiar
with the concept of trust boundaries. If not, take a look at this article that I wrote
earlier; it explains the concept and teaches you how to trust markings or (re)mark
packets or Ethernet frames.
Using the mls qos trust command we can trust the CoS or DSCP value or an IP
phone. With the mls qos cos command we can set a new CoS value if we like. The
downside of these two commands is that they apply to all packets or Ethernet
frames that arrive on the FastEthernet 0/1 interface. What if we wanted to be a bit
more specific? Let me show you an example:

Above you see a small network with a server, switch and a router connected to a
WAN. Let's imagine the server is running a couple of applications:

1. SSH server
2. Mail server
3. MySQL server

What if the server is unable to mark its own IP packets with a DSCP value but we
want to prioritize SSH traffic on the router when it leaves the serial 0/0 interface? In
that case we'll have to do classification and marking ourselves. I will show you how
to do this on a Cisco Catalyst switch. You can use a standard, extended or MAC
access-list in combination with MQC (Modular QoS Configuration) to get the job
done.
Let's start with the standard access-list to classify traffic from the server. Since a
standard access-list can only match on source IP addresses, I will be unable to
differentiate between the different applications...
Switch(config)#class-map match-all SERVER
Switch(config-cmap)#match access-group 1

We'll use a class-map to select our traffic. I will refer to access-list 1 with
the match command.
Switch(config)#access-list 1 permit 192.168.1.1

Access-list 1 will match IP address 192.168.1.1. This is the classification part but we
still have to mark our traffic. This is done with a policy-map:
Switch(config)#policy-map SET-DSCP-SERVER
Switch(config-pmap)#class SERVER
Switch(config-pmap-c)#set ip dscp 40

Above I created a policy-map called SET-DSCP-SERVER and I'm referring to the
class-map SERVER that I created before. Using the set command I will set the
DSCP value to 40. Now I am almost done; I still need to activate this policy-map on
the interface:
Switch(config)#interface FastEthernet 0/1
Switch(config-if)#service-policy input SET-DSCP-SERVER

This is how you activate it on the interface. Use the service-policy command with
the input or output keyword to apply it to inbound or outbound traffic.
If you want to verify your configuration and see if traffic is being marked you can
use the following command:
Switch#show policy-map interface FastEthernet 0/1
 FastEthernet0/1

  Service-policy input: SET-DSCP-SERVER

    Class-map: SERVER (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group 1

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
        0 packets, 0 bytes
        5 minute rate 0 bps

Above you can see that the policy-map has been applied to the FastEthernet0/1
interface and, even better, you can see the number of packets that have matched
this policy-map and class-map. At the moment there are 0 packets (nothing is
connected to my switch at the moment). You can also see the class-default class.
All traffic that doesn't belong to a class-map will belong to the class-default class.
The example above is nice to demonstrate the class-map and policy-map but I was
only able to match on the source IP address because of the standard access-list. Let
me show you another example that will only match on SSH traffic using an
extended access-list:
Switch(config)#class-map SSH
Switch(config-cmap)#match access-group 100

First I'll create a class-map called SSH that matches access-list 100. Don't forget to
create the access-list:
Switch(config)#access-list 100 permit tcp host 192.168.1.1 eq 22 any
Access-list 100 will match source IP address 192.168.1.1 and source port 22 (SSH).
Now we'll pull it all together with the policy-map:
Switch(config)#policy-map SET-DSCP-SSH
Switch(config-pmap)#class SSH
Switch(config-pmap-c)#set ip dscp cs6

Whenever it matches class-map SSH we will set the DSCP value to CS6. Don't forget
to activate it:

Switch(config)#interface FastEthernet 0/1


Switch(config-if)#no service-policy input SET-DSCP-SERVER
Switch(config-if)#service-policy input SET-DSCP-SSH

You can only have one active policy-map per direction on an interface so first we'll
remove the old one. Let's take a look if it is active:
Switch#show policy-map interface fastEthernet 0/1
 FastEthernet0/1

  Service-policy input: SET-DSCP-SSH

    Class-map: SSH (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group 100

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
        0 packets, 0 bytes
        5 minute rate 0 bps

You can see that it's active. I still don't have any traffic so we are stuck at 0
packets... Using an extended access-list is a nice and clean method to classify
traffic. Last but not least, let me show you the MAC address access-list. I
don't think it's very useful but it's an option:
Switch(config)#class-map SERVER-MAC
Switch(config-cmap)#match access-group name MAC

We'll create a class-map called SERVER-MAC and refer to an access-list called MAC.
Let's create that MAC access-list:
Switch(config)#mac access-list extended MAC
Switch(config-ext-macl)#permit host 1234.1234.1234 any

In my example the server has MAC address 1234.1234.1234. Now we'll create a
policy-map and activate it:
Switch(config)#policy-map SET-DSCP-FOR-MAC
Switch(config-pmap)#class SERVER-MAC
Switch(config-pmap-c)#set ip dscp cs1
Switch(config)#interface FastEthernet 0/1
Switch(config-if)#no service-policy input SET-DSCP-SSH
Switch(config-if)#service-policy input SET-DSCP-FOR-MAC

That's all there is to it. This is what it looks like:


Switch#show policy-map interface fastEthernet 0/1
 FastEthernet0/1

  Service-policy input: SET-DSCP-FOR-MAC

    Class-map: SERVER-MAC (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: access-group name MAC

    Class-map: class-default (match-any)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
        0 packets, 0 bytes
        5 minute rate 0 bps

That's all there is to it. You have now learned how to configure classification
and marking using MQC on Cisco Catalyst switches. Before I forget: MQC is similar
on routers, so you can configure the same thing on your router. If you enjoyed this
article please leave a comment!


How to configure Queuing on Cisco 3560 and 3750 Switches

QoS (Quality of Service) on Cisco Catalyst switches is not as easy to configure as it is
on routers. The big difference is that on routers QoS runs in software, while on our
switches it's done in hardware. Since switching is done in hardware (ASICs) we also
have to do our congestion management (queuing) in hardware. A consequence of
this is that QoS takes effect immediately. In this article I will give you an overview of
all the different commands and an explanation of how QoS works. If you are totally
new to LAN QoS, do yourself a favor and watch this video:
The first 54 minutes are about classification, marking and policing, so if you only
care about congestion management and queuing you can skip the first part. Having
said that, let's walk through the different commands.

Priority Queue

If your switch supports ingress queuing then on most switches (Cisco Catalyst 3560
and 3750) queue 2 will be the priority queue by default. Keep in mind that there are
only 2 ingress queues. If we want, we can make queue 1 the priority queue and we
can also change the bandwidth. Here's how to do it:

Switch(config)#mls qos srr-queue input priority-queue 1 bandwidth 20

The command makes queue 1 the priority queue and limits it to 20% of the total
internal ring bandwidth.
For our egress queuing we have to enable the priority queue ourselves! It's not
enabled by default. Here's how you can do it:
Switch(config)#interface fa0/1
Switch(config-if)#priority-queue out

The command above will enable the outbound priority queue for interface fa0/1. By
default queue 1 is the priority queue!

Queue-set
The queue-set is like a template for QoS configurations on our switches. There are 2
queue-sets that we can use and by default all interfaces are assigned to queue-set
1. If you plan to make changes to buffers etc. it's better to use queue-set 2 for this;
if you change queue-set 1 you will apply your new changes to all interfaces.
This is how you can assign an interface to a different queue-set:
Switch(config)#interface fa0/2
Switch(config-if)#queue-set 2

Above we put interface fa0/2 in queue-set 2. Keep in mind that we only have
queue-sets for egress queuing, not for ingress.

Buffer Allocation
For each queue we need to configure the assigned buffers. The buffer is like the
storage space for the interface and we have to divide it among the different
queues. This is how to do it:
mls qos queue-set output <queue set> buffers Q1 Q2 Q3 Q4

Above you see the mls qos command. First we select the queue-set and then we
can divide the buffers between queues 1, 2, 3 and 4. For queues 1, 3 and 4 you can
select a value between 0 and 99; if you type 0 you will disable the queue. You can't
do this for queue 2 because it is used for the CPU buffer. Let's take a look at an
actual example:
Switch(config)#mls qos queue-set output 2 buffers 33 17 25 25

This will divide the buffer space like this:

33% for queue 1.
17% for queue 2.
25% for queue 3.
25% for queue 4.

Besides dividing the buffer space between the queues we also have to configure the
following values per queue:

Threshold 1 value
Threshold 2 value
Reserved value
Maximum value

The command to configure these values looks like this:


mls qos queue-set output <queue-set> threshold <queue number> T1 T2 RESERVED MAXIMUM

First you need to select a queue-set, select the queue number and finally configure
a threshold 1 and 2 value, reserved value and the maximum value.
Here's an example:
Switch(config)#mls qos queue-set output 2 threshold 3 33 66 100 300

In the example above we configure queue-set 2. We select queue 3 and set the
following values:

Threshold 1 = 33%
Threshold 2 = 66%
Reserved = 100%
Maximum = 300%

This means that threshold 1 can go up to 33% of the queue and threshold 2 can go
up to 66% of the queue. We reserve 100% of the buffer space for this queue, and in
case the queue is full we can borrow more buffer space from the common pool: a
maximum of 300% means we can get twice our queue size extra from the common
pool.
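As a rough sketch of this math (the absolute buffer size per port is platform specific, so the 100 units below are purely hypothetical):

```python
# Queue-set arithmetic: thresholds, reserved and maximum are percentages
# of a queue's allocated buffer share.  The per-port buffer of 100 units
# is an assumption for illustration only.
port_buffer = 100          # hypothetical buffer units for one port
queue3_share = 25 / 100    # queue 3 was given 25% of the buffers

allocated  = port_buffer * queue3_share  # 25 units allocated to queue 3
threshold1 = allocated * 33 / 100        # drop threshold 1: 8.25 units
threshold2 = allocated * 66 / 100        # drop threshold 2: 16.5 units
reserved   = allocated * 100 / 100       # 25 units guaranteed to this queue
maximum    = allocated * 300 / 100       # may grow to 75 units via common pool

print(allocated, threshold1, threshold2, reserved, maximum)
```

So with a 300% maximum, the queue can grow to three times its allocated size: its own allocation plus twice that amount borrowed from the common pool.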

Assign marked packets/frames to correct queue


You now know how to configure the buffers and thresholds, but we still have to tell
the switch which CoS and DSCP values go to which queue. Here's the command
for it:
mls qos srr-queue <direction> <marking> <queue> <threshold> <values>
This is what it means:

Direction: input or output.
Marking: CoS or DSCP.
Queue: The queue number.
Threshold: This can be threshold 1, 2 or 3.
Values: The CoS or DSCP values you want to put here.

Let's take a look at an actual example:

Switch(config)#mls qos srr-queue output cos-map queue 1 threshold 1 0 1

The command assigns CoS values 0 and 1 to queue 1 up to threshold 1.


Switch(config)#mls qos srr-queue output cos-map queue 1 threshold 2 2 3

This example assigns CoS values 2 and 3 to queue 1 up to threshold 2.


Switch(config)#mls qos srr-queue output cos-map queue 4 threshold 2 6 7

And this one assigns CoS values 6 and 7 to queue 4 up to threshold 2.
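The three cos-map commands above can be pictured as filling in a lookup table from CoS value to an (egress queue, threshold) pair. A small Python sketch of that idea (illustrative, not how the ASIC stores it):

```python
# Build the CoS -> (queue, threshold) mapping configured by the three
# "mls qos srr-queue output cos-map" commands above.
cos_map = {}

def srr_queue_cos_map(queue, threshold, values):
    """Mimic one cos-map command: send these CoS values to a queue/threshold."""
    for cos in values:
        cos_map[cos] = (queue, threshold)

srr_queue_cos_map(queue=1, threshold=1, values=[0, 1])
srr_queue_cos_map(queue=1, threshold=2, values=[2, 3])
srr_queue_cos_map(queue=4, threshold=2, values=[6, 7])

print(cos_map[0])  # (1, 1): CoS 0 goes to queue 1, dropped above threshold 1
print(cos_map[7])  # (4, 2): CoS 7 goes to queue 4, dropped above threshold 2
```

CoS values 4 and 5 were not mapped by these three commands, so on a real switch they would still follow the default map.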

Bandwidth Allocation
The buffers determine how large the queues are; in other words, how big our
storage is. The bandwidth is basically how often we visit our queues. We can change
the bandwidth allocation for each interface. Here's what it looks like for our ingress
queuing:

mls qos srr-queue input bandwidth Q1 Q2

Ingress queuing only has two queues. We can divide a weight between the two
queues. Here's an example:

Switch(config)#mls qos srr-queue input bandwidth 30 70

With the command above queue 1 will receive 30% of the bandwidth and queue 2
will receive 70%. These two values are weights and don't have to add up to 100. If I
had typed something like 70 60 then queue 1 would receive 70/130 = about 54% of
the bandwidth and queue 2 would receive 60/130 = about 46%. Of course it's easier
to calculate if you make these values add up to 100.
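A small Python sketch of this weighted calculation (illustrative only):

```python
# Ingress bandwidth values are relative weights, not percentages: each
# queue gets weight / sum(weights) of the internal ring bandwidth.
def share(weights):
    """Convert a list of SRR weights into percentages of the bandwidth."""
    total = sum(weights)
    return [round(100 * w / total, 1) for w in weights]

print(share([30, 70]))  # [30.0, 70.0] -> weights summing to 100 read as %
print(share([70, 60]))  # [53.8, 46.2] -> queue 1 ~54%, queue 2 ~46%
```

This is why picking weights that add up to 100 makes the configuration easiest to read.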

For our egress queues we have to do the same thing, but it will be on the interface
level. We can also choose between shaping and sharing. Sharing means the queues
will divide the available bandwidth between each other. Shaping means you set a
fixed limit; it's like policing. Here's an example:
Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth share 30 20 25 25

This will divide the bandwidth as follows:

Queue 1: 30%
Queue 2: 20%
Queue 3: 25%
Queue 4: 25%

In this case we have a 100 Mbit interface, which means queue 1 will receive 30 Mbit,
queue 2 20 Mbit, queue 3 25 Mbit and queue 4 25 Mbit. If there is no congestion
then our queues can go above their bandwidth limit. This is why it's called sharing.
If I want I can enable shaping for 1 or more queues. This is how you do it:
Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth shape 20 0 0 0

This value is a weighted value. The other queues are not shaped because there's a
0. When you configure shaping for a queue it will be removed from the sharing
mechanism. So how much bandwidth does queue 1 really get? We can calculate it
like this:

1/20 = 0.05 and 0.05 x 100 Mbit = 5 Mbit.


So traffic in queue 1 will be shaped to 5 Mbit. Since queue 1 is now removed from
the sharing mechanism, how much bandwidth will queues 2, 3 and 4 get?
Let's take a look again at the sharing configuration that I just showed you:

Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth share 30 20 25 25

I just explained that queue 1 would receive 30 Mbit, queue 2 20 Mbit, queue 3
25 Mbit and queue 4 also 25 Mbit. Since I enabled shaping for queue 1 it doesn't join
the sharing mechanism anymore. This means there is more bandwidth for queues
2, 3 and 4. Here's what the calculation looks like now:

Interface fa0/1 is 100 Mbit.
We configured shaping to 5 Mbit for queue 1 so there is 95 Mbit left.
We configured weight values of 20, 25 and 25 for queues 2, 3 and 4.
20 + 25 + 25 = 70 total.
Queue 2 will receive 20/70 x (100 Mbit - 5 Mbit) = 27.1 Mbit.
Queue 3 will receive 25/70 x (100 Mbit - 5 Mbit) = 33.9 Mbit.
Queue 4 will receive 25/70 x (100 Mbit - 5 Mbit) = 33.9 Mbit.

If we add all these values together:

Queue 1 is shaped to 5 Mbit.
Queue 2 is shared to 27.1 Mbit.
Queue 3 is shared to 33.9 Mbit.
Queue 4 is shared to 33.9 Mbit.
5 + 27.1 + 33.9 + 33.9 = Total bandwidth of 100 Mbit.

That's how you do it!
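The whole shape + share calculation can be reproduced in a few lines of Python (illustrative only; note the switch numbers queues from 1 while the list index below starts at 0):

```python
# Recompute the shape + share example: a shaped queue gets 1/weight of
# line rate, and the remaining queues share what is left by their weights.
line_rate = 100.0                 # Mbit (FastEthernet)
shape_weights = [20, 0, 0, 0]     # srr-queue bandwidth shape 20 0 0 0
share_weights = [30, 20, 25, 25]  # srr-queue bandwidth share 30 20 25 25

# Shaped queues: inverse weight times line rate (0 means "not shaped").
shaped = {i: line_rate / w for i, w in enumerate(shape_weights) if w}

# The rest of the bandwidth is shared by the unshaped queues' weights.
leftover = line_rate - sum(shaped.values())
sharing_total = sum(w for i, w in enumerate(share_weights) if i not in shaped)

result = dict(shaped)
for i, w in enumerate(share_weights):
    if i not in shaped:
        result[i] = leftover * w / sharing_total

for q in sorted(result):
    print(f"queue {q + 1}: {result[q]:.1f} Mbit")
```

Running this prints 5.0, 27.1, 33.9 and 33.9 Mbit for queues 1 through 4, matching the list above and summing back to the full 100 Mbit.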


It's also possible to rate-limit the entire interface for egress traffic if you want to
save yourself the hassle of configuring shaping. This is how you do it:

Switch(config)#interface fa0/1
Switch(config-if)#srr-queue bandwidth limit 85

This will limit our 100 Mbit interface to 85%, so you'll end up with 85 Mbit.

Verification and troubleshooting


Now you know how to configure everything. Let's take a look at the different
commands you can use to verify everything!

First you should check the capabilities of a switch. You can do this on the interface
level as follows:
Switch#show interfaces fa0/23 capabilities
FastEthernet0/23
  Model:                 WS-C3560-24PS
  Type:                  10/100BaseTX
  Speed:                 10,100,auto
  Duplex:                half,full,auto
  Trunk encap. type:     802.1Q,ISL
  Trunk mode:            on,off,desirable,nonegotiate
  Channel:               yes
  Broadcast suppression: percentage(0-100)
  Flowcontrol:           rx-(off,on,desired),tx-(none)
  Fast Start:            yes
  QoS scheduling:        rx-(not configurable on per port basis),
                         tx-(4q3t) (3t: Two configurable values and one fixed.)
  CoS rewrite:           yes
  ToS rewrite:           yes
  UDLD:                  yes
  Inline power:          yes
  SPAN:                  source/destination
  PortSecure:            yes
  Dot1x:                 yes

Above you can see that this Cisco Catalyst 3560 switch has 4 queues with 3
threshold levels.
If you are configuring QoS you need to make sure you enabled it globally first with
the mls qos command. You can verify if QoS is active or not with the following
command:
Switch#show mls qos
QoS is enabled
QoS ip packet dscp rewrite is enabled

It tells us that QoS is enabled globally. We can also check the QoS parameters for
each interface as follows:
Switch#show mls qos interface fa0/1

FastEthernet0/1
trust state: trust cos
trust mode: trust cos
trust enabled flag: ena
COS override: dis
default COS: 1
DSCP Mutation Map: Default DSCP Mutation Map
Trust device: none
qos mode: port-based

Above you can see the trust state for this interface. We can also verify the
queue-sets for this switch. If you didn't configure them you will find the default
values:
Switch#show mls qos queue-set
Queueset: 1
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      25      25      25      25
threshold1:     100     200     100     100
threshold2:     100     200     100     100
reserved  :      50      50      50      50
maximum   :     400     400     400     400
Queueset: 2
Queue     :       1       2       3       4
----------------------------------------------
buffers   :      33      17      25      25
threshold1:     100      33      33     100
threshold2:     100      66      66     100
reserved  :      50     100     100      50
maximum   :     400     200     200     400

Above you will find queue-set 1 and 2. You can see how the buffers are divided per
queue and the threshold, reserved and maximum values.
We can also check how queuing is configured per interface. This is how you do it:
Switch#show mls qos interface fastEthernet 0/24 queueing
FastEthernet0/24
Egress Priority Queue : disabled
Shaped queue weights (absolute) : 25 0 0 0
Shared queue weights : 25 25 25 25
The port bandwidth limit : 100 (Operational Bandwidth:100.0)
The port is mapped to qset : 1

Above you see that the priority queue is disabled. Also you see the shaped and
shared values and that this interface belongs to queue-set 1.
If you are troubleshooting you should check if you see any drops within the queues.
You can do it like this:
Switch#show platform port-asic stats drop FastEthernet 0/1

  Interface Fa0/1 TxQueue Drop Statistics
    Queue 0
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 1
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 2
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0
    Queue 3
      Weight 0 Frames 0
      Weight 1 Frames 0
      Weight 2 Frames 0

Here you can see the drops for each queue. We can also verify if we are receiving
traffic that is marked:
Switch#show mls qos interface fastEthernet 0/1 statistics
FastEthernet0/1 (All statistics are in packets)

  dscp: incoming
-------------------------------
  0 -  4 :     0     0     0     0     0
  5 -  9 :     0     0     0     0     0
 10 - 14 :     0     0     0     0     0
 15 - 19 :     0     0     0     0     0
 20 - 24 :     0     0     0     0     0
 25 - 29 :     0     0     0     0     0
 30 - 34 :     0     0     0     0     0
 35 - 39 :     0     0     0     0     0
 40 - 44 :     0     0     0     0     0
 45 - 49 :     0     0     0     0     0
 50 - 54 :     0     0     0     0     0
 55 - 59 :     0     0     0     0     0
 60 - 64 :     0     0     0     0
  dscp: outgoing
-------------------------------
  0 -  4 :     0     0     0     0     0
  5 -  9 :     0     0     0     0     0
 10 - 14 :     0     0     0     0     0
 15 - 19 :     0     0     0     0     0
 20 - 24 :     0     0     0     0     0
 25 - 29 :     0     0     0     0     0
 30 - 34 :     0     0     0     0     0
 35 - 39 :     0     0     0     0     0
 40 - 44 :     0     0     0     0     0
 45 - 49 :     0     0     0     0     0
 50 - 54 :     0     0     0     0     0
 55 - 59 :     0     0     0     0     0
 60 - 64 :     0     0     0     0
  cos: incoming
-------------------------------
  0 -  4 :     2     0     0     0     0
  5 -  7 :     0     0     0
  cos: outgoing
-------------------------------
  0 -  4 :     0     0     0     0     0
  5 -  7 :     0     0     0
  Policer: Inprofile:     0 OutofProfile:     0

That's all I have for you for now! I suggest you check out these commands on
your own switches to become familiar with them. If you enjoyed this article,
please leave a comment!


CBWFQ Not Supported on Sub-Interfaces



If you are playing around with CBWFQ you might have discovered that it's
impossible to attach a policy-map to a sub-interface directly. There is a good reason
for this and I'd like to show you why this occurs and how to fix it. This is the topology I
will use to demonstrate this:

Just two routers connected to each other using Frame-Relay. We will try to
configure CBWFQ on the Serial 0/0.1 sub-interface of R1.

Configuration
First I'll create a simple CBWFQ configuration:
R1(config)#class-map TELNET
R1(config-cmap)#match protocol telnet
R1(config)#class-map HTTP
R1(config-cmap)#match protocol http
R1(config)#policy-map CBWFQ
R1(config-pmap)#class TELNET
R1(config-pmap-c)#bandwidth percent 10
R1(config-pmap-c)#exit
R1(config-pmap)#class HTTP
R1(config-pmap-c)#bandwidth percent 20
R1(config-pmap-c)#exit

Nothing special here, just a simple CBWFQ configuration that gives 10% of the
bandwidth to telnet and 20% to HTTP traffic. Let's try to apply it to the sub-interface:
R1(config)#interface serial 0/0.1
R1(config-subif)#service-policy output CBWFQ
CBWFQ : Not supported on subinterfaces

Too bad, it's not gonna happen; IOS has a day off. There is a workaround
however: we can't apply it directly, but if we use a hierarchical policy-map it will
work. Let me show you what I mean:
R1(config)#policy-map PARENT
R1(config-pmap)#class class-default
R1(config-pmap-c)#service-policy CBWFQ

I'll create a policy-map called PARENT that has our service-policy attached to the
class-default class. Now let's try to attach this to the sub-interface:
R1(config)#interface serial 0/0.1
R1(config-subif)#service-policy output PARENT
CBWFQ : Hierarchy supported only if shaping is configured in this
class

IOS is still complaining; it only allows a hierarchical policy-map when shaping is
configured. Let's give it what it wants:
R1(config)#policy-map PARENT
R1(config-pmap)#class class-default
R1(config-pmap-c)#shape average percent 100

I don't want to shape, but if I have to configure something we'll just set the shaper
to 100% of the interface bandwidth so that it doesn't limit our traffic. Let's attach it
to the sub-interface:
R1(config)#interface serial 0/0.1
R1(config-subif)#service-policy output PARENT

Bingo! It has been attached.

Verification
We'll try to telnet from R1 to R2 to see if it matches the policy-map:
R1#telnet 192.168.12.2
Trying 192.168.12.2 ... Open
Password required, but none set
[Connection to 192.168.12.2 closed by foreign host]

Let's check if it hit something:


R1#show policy-map interface serial 0/0.1

 Serial0/0.1

  Service-policy output: PARENT

    Class-map: class-default (match-any)
      39 packets, 4086 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: any
      Traffic Shaping
           Target/Average   Byte   Sustain   Excess    Interval  Increment
             Rate           Limit  bits/int  bits/int  (ms)      (bytes)
            100 (%)                0 (ms)    0 (ms)
           1544000/1544000  9650   38600     38600     25        4825

        Adapt  Queue     Packets   Bytes     Packets   Bytes     Shaping
        Active Depth                         Delayed   Delayed   Active
        -      0         39        4086      0         0         no

      Service-policy : CBWFQ

        Class-map: TELNET (match-all)
          11 packets, 514 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: protocol telnet
          Queueing
            Output Queue: Conversation 73
            Bandwidth 10 (%)
            Bandwidth 154 (kbps)Max Threshold 64 (packets)
            (pkts matched/bytes matched) 0/0
            (depth/total drops/no-buffer drops) 0/0/0

        Class-map: HTTP (match-all)
          0 packets, 0 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: protocol http
          Queueing
            Output Queue: Conversation 74
            Bandwidth 20 (%)
            Bandwidth 308 (kbps)Max Threshold 64 (packets)
            (pkts matched/bytes matched) 0/0
            (depth/total drops/no-buffer drops) 0/0/0

        Class-map: class-default (match-any)
          28 packets, 3572 bytes
          5 minute offered rate 0 bps, drop rate 0 bps
          Match: any

Above you can see that my telnet traffic matches the policy-map. The shaper is
configured but since it's configured to shape to the entire interface bandwidth it
won't bother us.
So why do we have to use a shaper? Logical interfaces like sub-interfaces can't
experience congestion like a physical interface, so IOS doesn't support policy-maps
that implement queuing on them. By using a shaper we enforce a "hard limit" on the
sub-interface, and with that artificial congestion point in place IOS will allow queuing.
I hope this has been helpful to you! If you have any questions feel free to ask.


QoS Traffic Policing Explained



Introduction to Policing
When you get a subscription from an ISP (for example a fibre connection) you will
pay for the bitrate that you desire, for example 5, 10 or 20 Mbit. The fibre
connection however is capable of sending traffic at a much higher bitrate (for
example 100 Mbit). In this case the ISP will limit your traffic to whatever you are
paying for. The contract that you have with the ISP is often called the traffic
contract. The bitrate that you pay for at the ISP is often called the CIR (Committed
Information Rate). Limiting the bitrate of a connection is done with policing or
shaping. The difference between the two is that policing will drop the exceeding
traffic and shaping will buffer it.

If you are interested in how shaping works, you should read my Traffic Shaping
Explained article. The logic behind policing is completely different from shaping. To
check if traffic matches the traffic contract, the policer measures the cumulative
byte-rate of arriving packets and can take one of the following actions:

Allow the packet to pass.

Drop the packet.

Remark the packet with a different DSCP or IP precedence value.

When working with policing there are three categories that we can use to see if a
packet conforms to the traffic contract or not:

Conforming
Exceeding
Violating

Conforming means that the packet falls within the traffic contract, exceeding means
that the packet is using up the excess burst capability and violating means that it's
totally outside the traffic contract rate. Don't worry if you don't know what excess
burst is, we'll talk about it in a bit. We don't have to work with all 3 categories; we
can also use just 2 of them (conforming and exceeding for example). It's up to us to
configure what will happen when a packet conforms, exceeds or violates.
When we use 2 categories (conforming and exceeding) we'll probably want the
packet to be forwarded when it's conforming and dropped when it's exceeding.
When we use 3 categories we can forward the packet when it's conforming, re-mark
it when exceeding and drop it when violating. There are 3 different policing
techniques:

Single rate, two-color (one token bucket).


Single rate, three-color (two token buckets).
Dual rate, three-color (two token buckets).

The single/dual rate, two/three color and token bucket thing might sound
confusing, so let's walk through all 3 policer techniques so I can explain them to you:

Single rate, two color (one token bucket):

The basic idea behind a token bucket is pretty simple. Every now and then we put
tokens in the bucket (we call this replenishing) and each time a packet arrives the
policer will check if it has enough tokens in the bucket. If there are enough tokens,
the packet is allowed; if not, we will drop it. Just like a real bucket, it can only hold
a limited number of tokens. In reality it's a bit more complicated, so let's continue
and dive into the details!
When we use token buckets for policing there are two important things that
happen:
1. Tokens are replenished into the token bucket.
2. When a packet arrives, the policer checks if there are enough tokens in the bucket to
   allow the packet to get through.

When a packet arrives the policer will check if it has enough tokens in the token
bucket; if so, the packet is forwarded and the policer takes the tokens out of the
token bucket. So what is a token anyway? When it comes to policing, each token
represents a single byte.
The second question is: how does the policer replenish the tokens in the token
bucket?
Each time a packet is policed, the policer will put some tokens into the token
bucket. The number of tokens that it replenishes can be calculated with the
following formula:

(Packet arrival time - Previous packet arrival time) * Police Rate / 8

So the number of tokens that we put in the bucket depends on the time between
two arriving packets. This time is in seconds. We multiply the time by the police
rate and divide it by 8; dividing by 8 is done so that we get a number in bytes
instead of bits.
Let's look at an example so that this makes more sense:

Imagine we have a policer that is configured for 128,000 bps (bits per second). A
packet has been policed and it takes exactly 1 second until the next packet arrives.
The policer will now calculate how many tokens it should put in the bucket:

1 second * 128,000 bps / 8 = 16,000 bytes

So it will put 16,000 tokens into the token bucket. Now imagine a third packet
arrives, half a second later than the second packet. This is how we calculate it:

0.5 seconds * 128,000 bps / 8 = 8,000 bytes

That means we'll put 8,000 tokens into the bucket. Basically, the more often we
replenish the token bucket, the fewer tokens we get each time. When the bucket is
full, extra tokens are spilled and discarded.
Now when a packet arrives at the policer, this is what will happen:

If the number of bytes in the packet is less than or equal to the number of tokens in the
bucket, the packet is conforming. The policer takes the tokens out of the bucket and performs
the action that we configured for conforming.
If the number of bytes in the packet is larger than the number of tokens in the bucket, the
packet is exceeding. The policer leaves the tokens in the bucket and performs the action for
exceeding packets.

With this single rate two-color policer, conforming probably means to forward the
packet and exceeding means to drop it. You can also choose to re-mark exceeding
packets.
As silly as it might sound, it's possible to drop packets that are conforming or to forward
packets that are exceeding; it's up to us to configure an action. That kinda sounds like giving
a speeding ticket to people that are not driving fast enough and rewarding the speed devils.
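The replenish-and-spend logic described above can be sketched in a few lines of Python. This is a minimal illustration, not Cisco's implementation; the class name, the bucket starting full, and the example values (a CIR of 128,000 bps with a Bc of 4,000 bytes) are my own assumptions:

```python
# Minimal sketch of a single rate, two-color policer with one token bucket.
# One token = one byte.

class SingleRateTwoColor:
    def __init__(self, cir_bps: int, bc_bytes: int):
        self.cir = cir_bps          # police rate in bits per second
        self.bc = bc_bytes          # bucket size (committed burst)
        self.tokens = bc_bytes      # assumption: bucket starts full
        self.last_arrival = 0.0

    def police(self, arrival_time: float, packet_bytes: int) -> str:
        # Replenish: (arrival - previous arrival) * rate / 8 tokens,
        # capped at the bucket size; spilled tokens are discarded.
        elapsed = arrival_time - self.last_arrival
        self.tokens = min(self.bc, self.tokens + elapsed * self.cir / 8)
        self.last_arrival = arrival_time
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes   # conforming: spend the tokens
            return "conform"              # e.g. transmit
        return "exceed"                   # e.g. drop; tokens stay in bucket

policer = SingleRateTwoColor(cir_bps=128000, bc_bytes=4000)
print(policer.police(1.0, 1500))   # conform
print(policer.police(1.0, 16000))  # exceed (the bucket only holds 4000 tokens)
```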
Let's continue with the second type of policer!

Single rate, three-color (two token buckets):


Data traffic is not smooth like VoIP; it's bursty. Sometimes we send a lot of
packets, then it's quiet again, a couple of packets, etc. Because of this it makes sense
to allow the policer to burst. This means we can temporarily send (burst) more
packets than normal. When we want the policer to support this, we will use two
buckets. The first bucket is for Bc (committed burst) tokens and the second one
for Be (excess burst) tokens. I'll explain the two buckets in a second! By using the
two buckets we can work with three categories:

Conforming

Exceeding

Violating
To understand how bursting works we first need to talk about the two token
buckets. Let me show you a picture:

Above we see two buckets: I call them the Bc bucket and the Be bucket. Just like
the single rate policer, the first bucket is replenished using the following formula:

(Packet arrival time - Previous packet arrival time) * Police Rate / 8

When we use a single bucket and the bucket is full, we discard the extra tokens. With
two buckets it works differently. Above you can see that once the Bc bucket is full,
the spillage ends up in the Be bucket. If the Be bucket is also full, the tokens
go where no token has gone before: they are gone forever! Armed with the two
buckets, the policer works as follows when a packet arrives:

When the number of bytes in the packet is less than or equal to the number of tokens in the
Bc bucket, the packet is conforming. The policer takes the required tokens from the Bc bucket
and performs the configured action for conforming.

If the packet is not conforming and the number of bytes in the packet is less than or equal to
the number of tokens in the Be bucket, the packet is exceeding. The policer removes the
required tokens from the Be bucket and performs the corresponding action for exceeding
packets.

If the packet is not conforming or exceeding, it is violating. The policer doesn't take any
tokens from the Bc or Be bucket and performs the action that was configured for violating
packets.
Simply put: if we can use the Bc bucket our packet is conforming, when we have to use
the Be bucket it is exceeding, and when we can't use any bucket it is violating.
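Here is the same kind of sketch for the two-bucket version; the spill from the Bc bucket into the Be bucket is the only new ingredient. Again a minimal illustration with assumed values, not Cisco's actual code:

```python
# Minimal sketch of a single rate, three-color policer: a Bc bucket whose
# overflow spills into a Be bucket. One token = one byte.

class SingleRateThreeColor:
    def __init__(self, cir_bps: int, bc_bytes: int, be_bytes: int):
        self.cir, self.bc, self.be = cir_bps, bc_bytes, be_bytes
        self.bc_tokens, self.be_tokens = bc_bytes, be_bytes  # start full
        self.last_arrival = 0.0

    def police(self, arrival_time: float, packet_bytes: int) -> str:
        new = (arrival_time - self.last_arrival) * self.cir / 8
        self.last_arrival = arrival_time
        # Fill the Bc bucket first; spillage goes to the Be bucket,
        # anything beyond that is gone forever.
        spill = max(0, self.bc_tokens + new - self.bc)
        self.bc_tokens = min(self.bc, self.bc_tokens + new)
        self.be_tokens = min(self.be, self.be_tokens + spill)
        if packet_bytes <= self.bc_tokens:
            self.bc_tokens -= packet_bytes
            return "conform"       # e.g. transmit
        if packet_bytes <= self.be_tokens:
            self.be_tokens -= packet_bytes
            return "exceed"        # e.g. re-mark and transmit
        return "violate"           # e.g. drop

p = SingleRateThreeColor(cir_bps=128000, bc_bytes=4000, be_bytes=4000)
print(p.police(0.0, 3000))   # conform (Bc bucket starts full)
print(p.police(0.0, 3000))   # exceed  (Bc bucket short, Be bucket used)
print(p.police(0.0, 3000))   # violate (both buckets exhausted)
```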
How are you doing so far? We have one more policer type to cover!

Dual rate, three-color (two token buckets):


The dual rate policer with two token buckets also has a bursting feature, but it
works differently compared to the previous (single rate, three-color, two token
buckets) policer that we discussed. Dual rate means that we don't work with a
single rate; we have a CIR and a PIR (Peak Information Rate). The PIR is
above the CIR and allows us to burst.

Packets that fall under the CIR are conforming.

Packets that exceed the CIR but are below the PIR are exceeding.

Packets above the PIR are violating.


A picture says more than a thousand words and this is very true when it comes to
our policers and token buckets. Let me show you how these buckets work:

This time we have two buckets next to each other. The second bucket is called the
PIR bucket and itsnot filled by spilled tokens from the Bc bucket but filled
directly. So how are the buckets filled now? When we configure this dual rate

policer we have to set a CIR and PIR rate. Lets say we have a CIR rate of 128.000
bps and a PIR rate of 256.000 bps. We still have the same formula to replenish
tokens:
Packet arrival time - Previous packet arrival time * Police Rate /
8

Let's say that 0.5 seconds pass between the arrival of the first and the second packet.
This is how the Bc bucket will be filled:

0.5 * 128,000 / 8 = 8,000 tokens

And the PIR bucket will be replenished as follows:

0.5 * 256,000 / 8 = 16,000 tokens
As you can see, the PIR bucket will have more tokens than the Bc bucket. The big
secret is how the policer uses the tokens from the different buckets. This is how it
works:

When the number of bytes in the packet is less than or equal to the number of tokens in the
Bc bucket, the packet is conforming. The policer takes the required tokens from the Bc bucket
and performs the configured action. The policer also takes the same number of tokens from
the PIR bucket!
If the packet does not conform and the number of bytes in the packet is less than or equal to
the number of tokens in the PIR bucket, the packet is exceeding. The policer removes the
required tokens from the PIR bucket and takes the configured action for exceeding packets.
When the packet is not conforming or exceeding, it is violating. The policer doesn't take
any tokens and performs the action for violating packets.

So in short: if there are tokens in the Bc bucket we are conforming; if not, but we
have enough in the PIR bucket, we are exceeding; otherwise we are violating. One
of the key differences is that for conforming traffic the policer takes tokens from
both buckets!
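A sketch of the dual rate variant makes the two differences visible: the buckets are filled independently (at the CIR and the PIR rate respectively) and conforming packets drain both. As before, the class name and the values are illustrative, not Cisco's implementation:

```python
# Minimal sketch of a dual rate, three-color policer. The Bc and PIR
# buckets are replenished independently; one token = one byte.

class DualRateThreeColor:
    def __init__(self, cir_bps: int, pir_bps: int, bc_bytes: int, be_bytes: int):
        self.cir, self.pir = cir_bps, pir_bps
        self.bc, self.be = bc_bytes, be_bytes
        self.bc_tokens, self.pir_tokens = bc_bytes, be_bytes  # start full
        self.last_arrival = 0.0

    def police(self, arrival_time: float, packet_bytes: int) -> str:
        elapsed = arrival_time - self.last_arrival
        self.last_arrival = arrival_time
        # Each bucket is filled directly, at its own rate.
        self.bc_tokens = min(self.bc, self.bc_tokens + elapsed * self.cir / 8)
        self.pir_tokens = min(self.be, self.pir_tokens + elapsed * self.pir / 8)
        if packet_bytes <= self.bc_tokens:
            self.bc_tokens -= packet_bytes
            self.pir_tokens -= packet_bytes   # conforming drains BOTH buckets
            return "conform"
        if packet_bytes <= self.pir_tokens:
            self.pir_tokens -= packet_bytes
            return "exceed"
        return "violate"

p = DualRateThreeColor(cir_bps=128000, pir_bps=256000,
                       bc_bytes=4000, be_bytes=8000)
print(p.police(0.0, 3000))   # conform
print(p.police(0.0, 3000))   # exceed (Bc short, PIR bucket still has tokens)
print(p.police(0.0, 3000))   # violate
```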
You have now seen the 3 policer techniques. Let me give you an overview of them
and their differences:

Single Rate, Two-Color:
- 1st bucket refill: based on the time difference of arrival between 2 packets.
- 2nd bucket refill: no 2nd bucket available.
- Conforming: take tokens from the 1st bucket.
- Exceeding: all packets that are not conforming.
- Violating: not available.

Single Rate, Three-Color:
- 1st bucket refill: based on the time difference of arrival between 2 packets.
- 2nd bucket refill: filled by spilled tokens from the 1st bucket.
- Conforming: take tokens from the 1st bucket.
- Exceeding: packets that are not conforming; take tokens from the 2nd bucket.
- Violating: all packets that are not conforming or exceeding.

Dual-Rate, Three-Color:
- 1st bucket refill: based on the time difference of arrival between 2 packets.
- 2nd bucket refill: same as the 1st bucket, but based on the PIR rate.
- Conforming: take tokens from both buckets.
- Exceeding: packets that are not conforming but with enough tokens in the 2nd bucket.
- Violating: all packets that are not conforming or exceeding.

That's the end of this policer story. I hope this article is useful to you; policing can be
quite a mind-boggling topic to understand! In the next tutorial you will see how
to configure policing on a Cisco IOS router. If you have any questions, just leave a
comment.


QoS Policing Configuration Example

In this lesson you will learn how to configure the different types of policing on Cisco
IOS routers:

Single rate, two-color


Single rate, three-color
Dual rate, three-color

If you have no idea what the difference is between the different policing types, then
you should start with my QoS Traffic Policing Explained lesson. Having said that, let's
configure some routers. I'll use the following topology for this:

We don't need anything fancy to demonstrate policing. I will use two routers: R1
will generate some ICMP traffic and R2 will do the policing.
Let's start with the first policer.

Single Rate Two-Color Policing


Configuration is done using the MQC (Modular QoS Command-Line Interface). First
we need to create a class-map to classify our traffic:
R2(config)#class-map ICMP
R2(config-cmap)#match protocol icmp

To keep it simple, I will use NBAR to match on ICMP traffic. Now we can create a
policy-map:
R2(config)#policy-map SINGLE-RATE-TWO-COLOR
R2(config-pmap)#class ICMP
R2(config-pmap-c)#police 128000
R2(config-pmap-c-police)#conform-action transmit
R2(config-pmap-c-police)#exceed-action drop

The policy-map is called SINGLE-RATE-TWO-COLOR and we configure policing for
128000 bps (128 Kbps) under the class-map. When the traffic rate is below 128
Kbps the conform-action is to transmit the packet, when it exceeds 128 Kbps we
will drop the packet.
Above I first configured the police CIR rate and then I configured the actions in the
policer configuration. You can also configure everything on one single line, then it
will look like this:
R2(config-pmap-c)#police 128000 conform-action transmit exceed-action drop

Both options achieve the same result so it doesn't matter which one you use. For
readability reasons I selected the first option.
Let's activate the policer on the interface and we'll see if it works:
R2(config)#interface FastEthernet 0/0
R2(config-if)#service-policy input SINGLE-RATE-TWO-COLOR

You need to use the service-policy command to activate the policer on the
interface.
Time to generate some traffic on R1:
R1#ping 192.168.12.2 repeat 999999
Type escape sequence to abort.
Sending 999999, 100-byte ICMP Echos to 192.168.12.2, timeout is 2
seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!
!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!
!!!

You can already see that some of the packets don't make it to their destination. Let's
see what R2 thinks about all these pings:
R2#show policy-map interface FastEthernet 0/0
FastEthernet0/0
Service-policy input: SINGLE-RATE-TWO-COLOR
Class-map: ICMP (match-all)
1603 packets, 314382 bytes
5 minute offered rate 18000 bps, drop rate 0 bps
Match: protocol icmp
police:
cir 128000 bps, bc 4000 bytes
conformed 1499 packets, 199686 bytes; actions:
transmit
exceeded 104 packets, 114696 bytes; actions:
drop
conformed 10000 bps, exceed 0 bps
Class-map: class-default (match-any)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

Above you can see that the policer is doing its job. The configured CIR rate is
128000 bps (128 Kbps) and the bc is set to 4000 bytes. If you don't configure the bc
yourself then Cisco IOS will automatically select a value based on the CIR rate. You
can see that most of the packets were transmitted (conformed) while some of them
got dropped (exceeded).
If you understand the theory about policing then the configuration and verification
isn't too bad, right? Let's move on to the next policer.

Single Rate Three-Color Policing


If you understood the previous configuration then this one will be easy. I'll use the
same class-map:

R2(config)#policy-map SINGLE-RATE-THREE-COLOR
R2(config-pmap)#class ICMP
R2(config-pmap-c)#police 128000
R2(config-pmap-c-police)#conform-action transmit
R2(config-pmap-c-police)#exceed-action set-dscp-transmit 0
R2(config-pmap-c-police)#violate-action drop

Our CIR rate is still 128000 bps and the conform-action is still transmit. The
difference is the exceed-action, which I've set to set-dscp-transmit. When the traffic
is exceeding, the policer will reset the DSCP value to 0 but still transmits the packet.
In our example the ICMP traffic wasn't marked at all, but imagine that some marked
traffic hits this policer... if it were "conforming" then it would be transmitted and
keep its DSCP value; if it were exceeding it would also be transmitted but as a
"penalty" the DSCP value is stripped. The last command is also new: when the
traffic is violating we use violate-action to drop it.
Let's activate this policer:
R2(config-if)#no service-policy input SINGLE-RATE-TWO-COLOR
R2(config-if)#service-policy input SINGLE-RATE-THREE-COLOR

I'll remove the old policer and enable the new one. Let's generate some traffic on
R1 again:
R1#ping 192.168.12.2 repeat 999999
Type escape sequence to abort.
Sending 999999, 100-byte ICMP Echos to 192.168.12.2, timeout is 2
seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!
!!!

Some packets are being dropped, let's see what R2 thinks about it:
R2#show policy-map interface FastEthernet 0/0
FastEthernet0/0

Service-policy input: SINGLE-RATE-THREE-COLOR


Class-map: ICMP (match-all)
4170 packets, 475380 bytes
5 minute offered rate 20000 bps, drop rate 0 bps
Match: protocol icmp
police:
cir 128000 bps, bc 4000 bytes, be 4000 bytes
conformed 2658 packets, 303012 bytes; actions:
transmit
exceeded 1470 packets, 167580 bytes; actions:
set-dscp-transmit default
violated 42 packets, 4788 bytes; actions:
drop
conformed 25000 bps, exceed 14000 bps, violate 0 bps
Class-map: class-default (match-any)
9 packets, 576 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

Above you can see the conformed, exceeded and violated packets with the
transmit, set-dscp-transmit and drop actions. Also, if you take a close look you can
see the be (4000 bytes) next to the CIR rate. Just like the bc, if you don't configure it
yourself then Cisco IOS will select a be automatically.
We got one more policer to go...

Dual Rate Three-Color Policing


The configuration is similar but this time we also configure the PIR. Here's what it
looks like:
R2(config)#policy-map DUAL-RATE-THREE-COLOR
R2(config-pmap)#class ICMP
R2(config-pmap-c)#police cir 128000 pir 256000
R2(config-pmap-c-police)#conform-action transmit
R2(config-pmap-c-police)#exceed-action set-dscp-transmit 0
R2(config-pmap-c-police)#violate-action drop

Next to the CIR (128 Kbps) I also configured the PIR (256 Kbps). I've kept the actions
the same as the previous policer. Let's enable it:
R2(config)#interface FastEthernet 0/0
R2(config-if)#no service-policy input SINGLE-RATE-THREE-COLOR
R2(config-if)#service-policy input DUAL-RATE-THREE-COLOR

Let's generate some traffic:


R1#ping 192.168.12.2 repeat 99999
Type escape sequence to abort.
Sending 99999, 100-byte ICMP Echos to 192.168.12.2, timeout is 2
seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!
!!!!!!!!!!!!!!!!!!!!!!!!!.!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!!!

Now take a look at R2:


R2#show policy-map interface FastEthernet 0/0
FastEthernet0/0
Service-policy input: DUAL-RATE-THREE-COLOR
Class-map: ICMP (match-all)
7472 packets, 851808 bytes
5 minute offered rate 29000 bps, drop rate 0 bps
Match: protocol icmp
police:
cir 128000 bps, bc 4000 bytes
pir 256000 bps, be 8000 bytes
conformed 3713 packets, 423282 bytes; actions:
transmit
exceeded 3715 packets, 423510 bytes; actions:
set-dscp-transmit default
violated 44 packets, 5016 bytes; actions:
drop
conformed 32000 bps, exceed 32000 bps, violate 0 bps
Class-map: class-default (match-any)

0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any

The output above is similar but now you see both the CIR and PIR. Some of our packets
are conforming, others are exceeding and violating.
You have now seen how to configure the single-rate two-color, single-rate
three-color and dual-rate three-color policers. I hope these configuration examples
have been useful to you. If you have any questions, feel free to leave a comment!


QoS Traffic Shaping Explained



Shaping is a QoS (Quality of Service) technique that we can use to enforce lower
bitrates than what the physical interface is capable of. Most ISPs use shaping or
policing to enforce traffic contracts with their customers. When we use shaping
we buffer the traffic down to a certain bitrate; policing drops the traffic when it
exceeds a certain bitrate. Let's discuss an example of why you would want to use shaping:
Your ISP sold you a fibre connection with a traffic contract and a guaranteed
bandwidth of 10 Mbit; the fibre interface however is capable of sending 100 Mbit
per second. Most ISPs will configure policing to drop all traffic above 10 Mbit so that
you can't get more bandwidth than what you are paying for. It's also possible that
they shape it down to 10 Mbit, but shaping means they have to buffer data while
policing means they can just throw it away. The 10 Mbit that we pay for is called
the CIR (Committed Information Rate).
There are two reasons why you might want to configure shaping:

Instead of waiting for the policer of the ISP to drop your traffic, you might want to shape
your outgoing traffic towards the ISP so that they don't drop it.
To prevent egress blocking. When you go from a high speed interface to a low speed
interface you might get packet loss (tail drop) in your outgoing queue. We can use shaping to
make sure everything will be sent (until the buffer is full).

In short, we configure shaping when we want to use a lower bitrate than what the
physical interface is capable of.

Routers are only able to send bits at the physical clock rate. As network engineers
we think we can do pretty much anything, but it's impossible to make an electrical
or optical signal crawl more slowly through the cable just because we want to. If we
want to get a lower bitrate we have to send some packets, pause for a moment, send
some packets, pause for a moment... and so on.
For example, let's say we have a serial link with a bandwidth of 128 kbps and we
want to shape it to 64 kbps. To achieve this we need to make sure
that 50% of the time we are sending packets and 50% of the time we are pausing.
50% of 128 kbps = an effective CIR of 64 kbps.
Another example: let's say we have the same 128 kbps link but the CIR rate is 96
kbps. This means we will send 75% of the time and pause 25% of the time (96 / 128
= 0.75).
Now that you have a basic idea of what shaping is, let's take a look at a shaping example
so I can explain some terminology:

Above we see an interface with a physical bitrate of 128 kbps that has been
configured to shape to 64 kbps. On the vertical line you can see the physical bitrate
of 128 kbps. Horizontally you can see the time from 0 to 1000 milliseconds. The
green line indicates when we send traffic and when we are pausing. The first
62.5 ms we are sending traffic at 128 kbps and the second 62.5 ms we are pausing.
This first interval takes 125 ms (62.5 + 62.5 = 125 ms) and we call this interval the Tc
(Time Interval).

In total there are 8 time intervals of 125 ms each: 8 x 125 ms = 1000 ms. Most Cisco
routers have a default Tc value of 125 ms. In the example above we are sending
traffic 50% of the time and pausing 50% of the time; 50% of 128 kbps = a shaping
rate of 64 kbps.

Our Cisco router will calculate how many bits it can send each Tc so that it
reaches the targeted shaping rate. This value is called the Bc (committed burst).
In the example above the Bc is 8,000 bits. Each Tc (125 ms) it will send 8,000 bits
and when it's done it will wait until the Tc expires. In total we have 1,000 ms of time;
when we divide 1,000 ms by 125 ms we get 8 Tcs. 8,000 bits x 8 Tcs = a shaping rate
of 64 kbps.
To sum things up, this is what you have learned so far:

Tc (time interval) is the time in milliseconds over which we can send the Bc (committed
burst).
Bc (committed burst) is the amount of traffic that we can send during the Tc (time interval)
and is measured in bits.
CIR (committed information rate) is the bitrate that is defined in the traffic contract that
we received from the ISP.

There are a number of formulas that we can use to calculate the values above:
Bc value:
Bc = Tc * CIR

In the example above we have a Tc of 125 ms and we are shaping to 64 kbps (that's
the CIR), so the formula will be:
0.125 s * 64,000 bps = 8,000 bits
Tc value:
Tc = Bc / CIR

We just calculated the Bc (8,000 bits) and the CIR rate is 64 kbps, so the formula will
be:
8,000 bits / 64,000 bps = 0.125 s, so that's 125 ms.


Let's look at another example. Imagine we have an interface with a physical bitrate
of 256 kbps and we are shaping to 128 kbps. How many bits will we send each Tc?
CIR = 128 kbps
Tc = 125 ms (the default)
0.125 s x 128,000 bps = 16,000 bits
So the Bc is 16,000 bits; each Tc we will send 16,000 bits.
The shaper will grab 16,000 bits each Tc and send them. Once they are sent it will
wait until the Tc has expired and a new Tc starts.
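The Bc and Tc formulas above are simple enough to check with a few lines of Python. The function names are mine and the values match the examples in the text:

```python
# Shaping parameter calculations: Bc = Tc * CIR and Tc = Bc / CIR.
# Example values: shaping to 64 kbps with the default Tc of 125 ms,
# then shaping a 256 kbps interface down to 128 kbps.

def bc_bits(tc_ms: float, cir_bps: int) -> float:
    """Bc = Tc * CIR, with Tc converted from milliseconds to seconds."""
    return (tc_ms / 1000.0) * cir_bps

def tc_ms(bc: float, cir_bps: int) -> float:
    """Tc = Bc / CIR, returned in milliseconds."""
    return bc / cir_bps * 1000.0

print(bc_bits(125, 64000))    # 8000.0 bits sent per interval
print(tc_ms(8000, 64000))     # 125.0 ms
print(bc_bits(125, 128000))   # 16000.0 bits sent per interval
```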
The cool thing about shaping is that all traffic will be sent since we are buffering it.
The downside of buffering traffic is that it introduces delay and jitter. Let me show
you an example:

Above we have an interface with a physical bitrate of 128 kbps and the Tc is
125 ms. Shaping has been configured for 64 kbps. You can see that each Tc it takes
62.5 ms to send the Bc. How did I come up with this number? Let me walk you
through it:
125 ms * 64 kbps = 8.000 bits.
Now that we know the Bc, we can calculate how long it takes for a 128 kbps interface to
send these 8.000 bits. This is how you do it:
Delay value:
Delay = Bc / physical bitrate

Let's try this formula to find out how long it takes for our 128 kbps interface to send
8.000 bits:
8.000 / 128.000 = 0.0625
So it takes 62.5 ms to send 8.000 bits through a 128 kbps interface. If we have a faster
interface the delay will of course be a lot lower. Let's say we have a T1 interface
(1.544 Mbit):
8.000 / 1.544.000 = 0.0052
It only takes about 5 ms to send 8.000 bits through a T1 interface.
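The delay numbers above follow from the same kind of division; here is a minimal sketch (the function name is mine):

```python
def serialization_time_s(bits: int, line_rate_bps: int) -> float:
    """Seconds needed to clock `bits` onto a link running at `line_rate_bps`."""
    return bits / line_rate_bps

print(serialization_time_s(8_000, 128_000))    # 0.0625  -> 62.5 ms
print(serialization_time_s(8_000, 1_544_000))  # ~0.0052 -> roughly 5 ms on a T1
```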
The default Tc of 125 ms is maybe not a very good idea when you are working with
Voice over IP. Imagine that we are sending a data packet that is exactly 8.000 bits
over this T1 link. It will only take 5 ms but that means that we are waiting 120 ms
(125 ms - 5 ms) before the Tc expires and we can send the next 8.000 bits. If this
next packet is a VoIP packet then it will be delayed by at least 120 ms.

Cisco recommends a one-way delay of 150 to 200 ms for real-time traffic like VoIP, so
wasting 120 ms just waiting isn't a very good idea. When you have real-time traffic
like voice, Cisco recommends setting your Tc to 10 ms to keep the delay to a
minimum.
So if we set our Tc to 10 ms instead of the default 125 ms, what will our Bc be? In
other words, how many bits can we send during the Tc of 10 ms?
Let's get back to our 128 kbps interface that is configured to shape to 64 kbps to
calculate this:
10 ms * 64 kbps = 640 bits.
640 bits is only 80 bytes...not a lot right? Many IP packets are larger than 80 bytes so if you
configure the Tc at 10 ms you will probably also have to use fragmentation. In this example, IP
packets should be fragmented to 80 bytes each.

How are you doing so far? I can imagine all the terminology and formulas make
your head spin. We are almost at the end, we only have to talk about the excess
burst.

When we configure traffic shaping we have the option to send more than the Bc in
some Tcs. There is a very good reason to do this. Data traffic is not smooth but very
bursty...sometimes we don't send anything, then a few packets and suddenly
there's an avalanche of traffic. It would be nice if you can send a little bit more
traffic than the normal Bc after a quiet period. To illustrate this we first need to
talk about the token bucket.

Imagine we have a bucket...this bucket we will fill with tokens and each token
represents 1 bit. When we want to send a packet we will grab the number of tokens
we require to send this packet. If the packet is 120 bits we will grab 120 tokens and
send the packet. The amount of tokens in this bucket is the Bc. Once the bucket is
empty we can't send anything anymore and you'll have to wait for the next Tc. At
the next Tc we will refill our token bucket with the Bc and we can send again.
This means that we can never send more than the Bc...it's impossible to save
tokens so that you can go beyond the Bc. If we don't use all of our tokens, the
leftovers won't fit in the bucket at the next refill and they will be discarded. When
it comes to shaping it's good to be a big spender...use those tokens!

Now let's talk about the excess burst. We still have the same token bucket but now
the bucket is larger and can contain the Bc + Be. At the beginning of the Tc we will
only fill the token bucket with the Bc but because it's larger we can "save" tokens up
to the Be level. The advantage of having a bigger bucket is that we can save tokens
when we have periods of time where we send less bits than the configured
shaping rate. Normally the bucket would spill once the Bc is full but now we can
save up to the Be level.

Let's take a look at an example of a shaper that we configured to use the Bc and the
Be:

Above you see an interface with a physical bitrate of 128 kbps. It has been
configured to shape to 64 kbps with a default Tc of 125 ms. This means the Bc is
8.000 bits and the Be is configured at 8.000 bits. This means we can store up to
16.000 bits. Imagine that the interface didn't send any traffic for quite some time,
this allows the token bucket to fill up to 16.000 bits (8.000 Bc + 8.000 Be) . This
means that in the first 125 ms we can send 16.000 bits.

In the second interval the token bucket is refilled up to the Bc level so we can send
another 8.000 bits. There's quite some traffic so each Tc all 8.000 Bc bits are used.
After a while all traffic has been sent which allows us to save tokens again and fill
the token bucket completely up to the Bc+Be level. The usage of the Be allows us to
effectively "burst" after a period of inactivity.
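To make the mechanics concrete, here is a toy token-bucket model in Python (my own simplification: one refill of Bc tokens per Tc, capacity capped at Bc + Be):

```python
class ShaperBucket:
    """Toy model of an average shaper's token bucket (1 token = 1 bit)."""

    def __init__(self, bc: int, be: int):
        self.bc, self.be = bc, be
        self.tokens = bc + be        # start full, e.g. after a quiet period

    def new_tc(self):
        # Refill with Bc tokens; anything above Bc + Be spills and is lost.
        self.tokens = min(self.tokens + self.bc, self.bc + self.be)

    def send(self, bits: int) -> int:
        # Send as much of `bits` as we have tokens for this interval.
        sent = min(bits, self.tokens)
        self.tokens -= sent
        return sent

bucket = ShaperBucket(bc=8_000, be=8_000)
print(bucket.send(20_000))  # 16000: first Tc after inactivity bursts Bc + Be
bucket.new_tc()
print(bucket.send(20_000))  # 8000: later intervals are limited to Bc
```

The first print shows the burst after a quiet period (Bc + Be = 16.000 bits in one Tc); afterwards each interval is limited to the Bc of 8.000 bits, just like the example above.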
So there you go, you have now learned how traffic shaping works, what the CIR, Tc,
Bc and Be are and how to calculate them. In another lesson I will cover how
to configure traffic shaping on a Cisco IOS router. If you have any questions feel free
to ask!

Traffic Shaping on Cisco IOS



In a previous lesson I explained how we can use shaping to enforce lower bitrates.
In this lesson, I will explain how to configure shaping. This is the topology we will
use:

Above we have two routers connected to each other with a serial and FastEthernet
link. We'll use both interfaces to play with shaping. The computers are used for
iPerf which is a great application to test the maximum achievable bandwidth. The
computer on the left side is our client, on the right side we have the server. Right
now we are using the serial interfaces thanks to the following static routes:
R1#
ip route 192.168.2.0 255.255.255.0 192.168.12.2
R2#
ip route 192.168.1.0 255.255.255.0 192.168.12.1

Let's take a look at some examples!

Configuration

We will start with some low bandwidth settings. Let's set the clock rate of the serial
interface to 128 Kbps:
R2(config)#interface Serial 0/0/0
R2(config-if)#clock rate 128000

Let's start iPerf on the server:


SERVER# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

That's all we have to do on the server side, it will listen on the default port with a
window size of 85.3 KByte. Here's what we will do on the client side:
CLIENT# iperf -c 192.168.2.2 -P 8
------------------------------------------------------------
Client connecting to 192.168.2.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  4] local 192.168.1.1 port 44344 connected with 192.168.2.2 port 5001
[  5] local 192.168.1.1 port 44345 connected with 192.168.2.2 port 5001
[  6] local 192.168.1.1 port 44346 connected with 192.168.2.2 port 5001
[  7] local 192.168.1.1 port 44347 connected with 192.168.2.2 port 5001
[  8] local 192.168.1.1 port 44348 connected with 192.168.2.2 port 5001
[  3] local 192.168.1.1 port 44343 connected with 192.168.2.2 port 5001
[  9] local 192.168.1.1 port 44349 connected with 192.168.2.2 port 5001
[ 10] local 192.168.1.1 port 44350 connected with 192.168.2.2 port 5001

The -P parameter tells the client to establish eight connections. I'm using multiple
connections so we get a nice average bandwidth. Here's what you will see on the
server:
SERVER#
[ ID] Interval        Transfer     Bandwidth
[  4]  0.0-136.2 sec   256 KBytes  15.4 Kbits/sec
[ 10]  0.0-137.0 sec   256 KBytes  15.3 Kbits/sec
[ 11]  0.0-138.0 sec   256 KBytes  15.2 Kbits/sec
[  9]  0.0-138.4 sec   256 KBytes  15.1 Kbits/sec
[  5]  0.0-148.0 sec   384 KBytes  21.3 Kbits/sec
[  6]  0.0-166.7 sec   384 KBytes  18.9 Kbits/sec
[  8]  0.0-171.4 sec   384 KBytes  18.4 Kbits/sec
[  7]  0.0-172.9 sec   384 KBytes  18.2 Kbits/sec
[SUM]  0.0-172.9 sec  2.50 MBytes   121 Kbits/sec

Above you see the individual connections and the [SUM] is the combined
throughput of all connections. 121 Kbps comes pretty close to the clock rate of 128
Kbps which we configured.
Let's configure shaping to limit the throughput of iPerf. This is done with the MQC
(Modular QoS CLI) framework which makes the configuration very simple.
First we need to configure an access-list which matches our traffic:
R1(config)#ip access-list extended IPERF_CLIENT_SERVER
R1(config-ext-nacl)#permit ip host 192.168.1.1 host 192.168.2.2

The access-list above will match all traffic from 192.168.1.1 to 192.168.2.2. Now we
need to create a class-map:
R1(config)#class-map IPERF
R1(config-cmap)#match access-group name IPERF_CLIENT_SERVER

The class map is called IPERF and matches our access-list. Now we can configure a
policy-map:

R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape ?
  adaptive        Enable Traffic Shaping adaptation to BECN
  average         configure token bucket: CIR (bps) [Bc (bits) [Be (bits)]],
                  send out Bc only per interval
  fecn-adapt      Enable Traffic Shaping reflection of FECN as BECN
  fr-voice-adapt  Enable rate adjustment depending on voice presence
  peak            configure token bucket: CIR (bps) [Bc (bits) [Be (bits)]],
                  send out Bc+Be per interval

In the policy-map we select the class-map; above you can see the options for
shaping. We'll start with a simple example:
R1(config-pmap-c)#shape average ?
  <8000-154400000>  Target Bit Rate (bits/sec). (postfix k, m, g optional;
                    decimal point allowed)
  percent           % of interface bandwidth for Committed information rate

We will go for shape average where we have to specify the target bit rate. Let's go
for 64 Kbps (64000 bps):
R1(config-pmap-c)#shape average 64000 ?
  <32-154400000>  bits per interval, sustained. Recommend not to
                  configure, the algorithm will fi

When you configure the target bit rate, there's an option to specify the bits per
interval. Cisco IOS recommends not configuring this manually, so for now we'll
stick to configuring the bit rate. This means Cisco IOS will automatically calculate
the Bc and Tc:
R1(config-pmap-c)#shape average 64000

That's all there is to it. Now we can activate our policy-map on the interface:

R1(config)#interface Serial 0/0/0


R1(config-if)#service-policy output SHAPE_AVERAGE

Everything is now in place, let's try iPerf again:


CLIENT# iperf -c 192.168.2.2 -P 8

Here's the sum on the server:


SERVER#
[SUM] 0.0-300.5 sec  2.12 MBytes  59.3 Kbits/sec

Great, that's close to 64 Kbps. Here's what it looks like on our router:
R1#show policy-map interface Serial 0/0/0
Serial0/0/0
Service-policy output: SHAPE_AVERAGE
Class-map: IPERF (match-all)
1916 packets, 2815928 bytes
5 minute offered rate 41000 bps, drop rate 0 bps
Match: access-group name IPERF_CLIENT_SERVER
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/324/0
(pkts output/bytes output) 1592/2330664
shape (average) cir 64000, bc 256, be 256
target shape rate 64000
Class-map: class-default (match-any)
102 packets, 7456 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 47/3319

Above you can see that we have matched packets on our policy-map. Cisco IOS
decided to use 256 bits for the Bc value.

The example above is of a Cisco 2800 router running IOS 15.1 which only shows you the calculated
Bc value. Older Cisco IOS versions show a lot more detailed information, including the calculated Tc
value.

How did it come up with this value? The Tc can be calculated like this:
Tc = Bc / CIR

This is what the formula looks like:


256 / 64000 = 0.004.
By using a Bc value of 256 bits, our Tc becomes 4 ms.
Let's look at some more examples, I'll also explain how to change the Be and Tc
values.
Let's set the clock rate to 256 Kbps and shape to 128 Kbps:
R2(config)#interface Serial 0/0/0
R2(config-if)#clock rate 256000

Now we only have to change our shaping configuration:


R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 128000

Here's what it looks like on the router:


R1#show policy-map interface Serial 0/0/0
Serial0/0/0
Service-policy output: SHAPE_AVERAGE
Class-map: IPERF (match-all)
1916 packets, 2815928 bytes
5 minute offered rate 0 bps, drop rate 0 bps

Match: access-group name IPERF_CLIENT_SERVER


Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/324/0
(pkts output/bytes output) 1592/2330664
shape (average) cir 128000, bc 512, be 512
target shape rate 128000

This time we have a Bc of 512 bits. Why?


512 / 128000 = 0.004
Once again we have a Tc value of 4 ms. Let's try iPerf again:
CLIENT# iperf -c 192.168.2.2 -P 8

Here's the result:


SERVER#
[SUM] 0.0-153.5 sec  2.25 MBytes  123 Kbits/sec

Seems our shaper is working fine, we get close to 128 Kbps. Let's bump up the clock
rate again:
R2(config)#interface Serial 0/0/0
R2(config-if)#clock rate 512000

We'll shape to 50% of the available bandwidth again:


R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 256000

Here's what it looks like on the router:


R1#show policy-map interface Serial 0/0/0
Serial0/0/0

Service-policy output: SHAPE_AVERAGE


Class-map: IPERF (match-all)
8380 packets, 12339992 bytes
5 minute offered rate 79000 bps, drop rate 0 bps
Match: access-group name IPERF_CLIENT_SERVER
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/1304/0
(pkts output/bytes output) 7076/10387712
shape (average) cir 256000, bc 1024, be 1024
target shape rate 256000

This time we see a Bc value of 1024:


1024 / 256000 = 0.004
Once again, Cisco IOS sets the Bc value so we end up with a Tc value of 4 ms. Let's
try iPerf again:
CLIENT# iperf -c 192.168.2.2 -P 8

SERVER#
[SUM] 0.0-93.8 sec  2.75 MBytes  246 Kbits/sec

This is looking good, our traffic is limited to 246 Kbps.


What about faster interfaces? Let's try something with our FastEthernet interfaces
between R1 and R2. Let's change the static route so that R1 and R2 don't use the
serial links anymore:
R1(config)#no ip route 192.168.2.0 255.255.255.0 192.168.12.2
R1(config)#ip route 192.168.2.0 255.255.255.0 192.168.21.2
R2(config)#no ip route 192.168.1.0 255.255.255.0 192.168.12.1
R2(config)#ip route 192.168.1.0 255.255.255.0 192.168.21.1

Let's see what kind of throughput we get without any shaper configured:
CLIENT# iperf -c 192.168.2.2 -P 8

SERVER#
[SUM] 0.0-10.2 sec  116 MBytes  95.4 Mbits/sec

The output above is what we would expect from a 100 Mbit link. Let's shape this to
1 Mbit:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 1m

Instead of specifying the shape value in bits, you can also use "k" or "m" to specify
Kbps or Mbps. Let's activate it:
R1(config)#interface FastEthernet 0/0
R1(config-if)#service-policy output SHAPE_AVERAGE

What Bc value did the router calculate this time?


R1#show policy-map interface FastEthernet 0/0
FastEthernet0/0
Service-policy output: SHAPE_AVERAGE
Class-map: IPERF (match-all)
0 packets, 0 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group name IPERF_CLIENT_SERVER
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 0/0
shape (average) cir 1000000, bc 4000, be 4000
target shape rate 1000000

We see a Bc value of 4000:


4000 / 1000000 = 0.004, which is 4 ms.
Once again, the router prefers a Tc of 4 ms.

The most recent Cisco IOS versions always prefer a Tc of 4 ms and will calculate the Bc value
accordingly. On older Cisco IOS versions it's possible that you see higher Bc values with a Tc of 125
ms.
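The Bc values that IOS picked throughout these examples all follow the same pattern; here is a quick sketch (assuming, as observed in the outputs above, a target Tc of 4 ms):

```python
TC_TARGET_S = 0.004  # 4 ms, the interval recent IOS versions appear to aim for

for cir_bps in (64_000, 128_000, 256_000, 1_000_000):
    print(cir_bps, int(cir_bps * TC_TARGET_S))
# 64000 -> 256, 128000 -> 512, 256000 -> 1024, 1000000 -> 4000:
# the same Bc values the router calculated in the examples above
```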

Let's test iPerf again:


CLIENT# iperf -c 192.168.2.2 -P 8

SERVER#
[SUM] 0.0-27.4 sec  3.12 MBytes  955 Kbits/sec

Great, our traffic is now shaped to 955 Kbps which is close enough to 1 Mbps.
So far we used the default Bc and Tc values that the router calculated for us. What
if we have a requirement where we have to configure one of these values
manually?
We can't configure the Tc directly but we can change the Bc. Let's say that we have
a requirement where we have to set the Tc to 10 ms. How do we approach this?
Here's the formula to calculate the Bc for the Tc that we want:
Bc = Tc * CIR

So in our case we want 10 ms:


10 ms * 1000 Kbps = 10.000 bits
Let's configure our Bc value to 10.000 bits:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 1m ?
  <32-154400000>  bits per interval, sustained. Recommend not to
                  configure, the algorithm will fi

First we set the target bit rate and then we set the Bc value:

R1(config-pmap-c)#shape average 1m 10000

That should do it, let's check the router:


R1#show policy-map interface FastEthernet 0/0
FastEthernet0/0
Service-policy output: SHAPE_AVERAGE
Class-map: IPERF (match-all)
2496 packets, 3716912 bytes
5 minute offered rate 19000 bps, drop rate 0 bps
Match: access-group name IPERF_CLIENT_SERVER
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/189/0
(pkts output/bytes output) 2307/3430766
shape (average) cir 1000000, bc 10000, be 10000
target shape rate 1000000

That's all there is to it. Let's try one more example, let's say we want a Tc of 125 ms:
125 ms * 1000 Kbps = 125.000 bits
Let's configure this:
R1(config)#policy-map SHAPE_AVERAGE
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape average 1000000 125000

Here's what it looks like on the router:


R1#show policy-map interface FastEthernet 0/0
FastEthernet0/0
Service-policy output: SHAPE_AVERAGE
Class-map: IPERF (match-all)
2496 packets, 3716912 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group name IPERF_CLIENT_SERVER

Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/189/0
(pkts output/bytes output) 2307/3430766
shape (average) cir 1000000, bc 125000, be 125000
target shape rate 1000000

That's it, you have now seen how to configure shaping and how to influence the Tc
by setting different Bc values.

Conclusion
Thanks to the MQC, configuring shaping on Cisco IOS routers is pretty
straightforward. You have now learned how to configure shaping and also how to
influence the Tc by setting the correct Bc value.
In the next lesson, I will explain "peak" shaping, which works a bit differently
compared to "average" shaping.
I hope you enjoyed this lesson, if you have any questions feel free to leave a
comment below.


Peak Traffic Shaping on Cisco IOS



Cisco IOS routers support two types of shaping:

shape average
shape peak

In my first lesson I explained the basics of shaping and I demonstrated how to
configure shape average. This time we will take a look at peak shaping which is often
misunderstood and confusing for many networking students.

Shape Average
Here's a quick recap of how shape average works:

We have a bucket and it can contain Bc and Be tokens. At the beginning of the Tc
we will only fill the token bucket with the Bc but because it's larger we can save
tokens up to the Be level. The advantage of having a bigger bucket is that we can
save tokens when we have periods of time where we send less bits than the
configured shaping rate.

After a period of inactivity, we can send our Bc and Be tokens which allows us to
burst for a short time. When we use a bucket that has Bc and Be, this is what our
traffic pattern will look like:

Above you can see that we start with a period where we are able to spend Bc and
Be tokens, the next interval only the Bc tokens are renewed so we are only able to
spend those. After a while, a period of inactivity allows us to fill our bucket again.

Shape Peak
Peak shaping uses the Be in a completely different way. We still have a token
bucket that stores Bc + Be, but we fill the token bucket with Bc and Be tokens each
Tc and unused tokens are discarded.

Here's what our traffic pattern will look like:

Each Tc our Bc and Be tokens are renewed so we are able to spend them. A period
of inactivity doesn't mean anything.
Now you might be wondering: why do we use this and what's the point of it?
Depending on your traffic contract, an ISP might give you a CIR and PIR (Peak
Information Rate). The CIR is the guaranteed bandwidth that they offer you, the
PIR is the maximum non-guaranteed rate that you could get when there is no
congestion on the network. When there is congestion, this traffic might be dropped.
ISPs typically use policing to enforce these traffic contracts.
The idea behind peak shaping is that we can configure shaping and take the CIR
and PIR of the ISP into account.
When we send a lot of traffic, we will be spending the Bc and Be tokens each Tc and
we are shaping up to the PIR. When there isn't as much traffic to shape, we only
spend Bc tokens and that's when we are shaping up to the CIR.

Let's look at a configuration example which will help to clarify things.

Configuration
I will use the following topology to demonstrate peak shaping:

Above we have two computers and two routers. The computers will be used to
generate traffic with iPerf, I'll configure peak shaping on R1. Let's do a quick test
with iPerf, time to start the server:
SERVER# iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------

Now let's generate some traffic:


CLIENT# iperf -c 192.168.2.2
------------------------------------------------------------
Client connecting to 192.168.2.2, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 192.168.1.1 port 42646 connected with 192.168.2.2 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   113 MBytes  94.9 Mbits/sec

Ok great, that's close to 100 Mbit, which is what we would expect from a FastEthernet
link. Now let's take a look at the peak shaping configuration:
R1(config)#ip access-list extended IPERF_TRAFFIC
R1(config-ext-nacl)#permit tcp any any eq 5001
R1(config-ext-nacl)#exit
R1(config)#class-map IPERF
R1(config-cmap)#match access-group name IPERF_TRAFFIC

First we create an access-list that matches our iPerf traffic and we attach it to a
class-map. Now we can configure the policy-map:
R1(config)#policy-map SHAPE_PEAK
R1(config-pmap)#class IPERF
R1(config-pmap-c)#shape peak ?
  <8000-154400000>  Target Bit Rate (bits/sec). (postfix k, m, g optional;
                    decimal point allowed)
  percent           % of interface bandwidth for Committed information rate

Above you see the shape peak command where we configure the target bit rate.
The value that you specify here is the CIR, not the PIR! Let's try a CIR of 128 Kbps:
R1(config-pmap-c)#shape peak 128000

Only thing left to do is to activate it on the interface:


R1(config)#interface FastEthernet 0/0
R1(config-if)#service-policy output SHAPE_PEAK

Now let's try iPerf again:

CLIENT# iperf -c 192.168.2.2


SERVER#
[  5] 0.0-21.4 sec  640 KBytes  245 Kbits/sec

We can see that the shaper works since we only get a transfer rate of 245 Kbps.
Let's take a closer look at the policy-map on R1:
R1#show policy-map interface FastEthernet 0/0
FastEthernet0/0
Service-policy output: SHAPE_PEAK
Class-map: IPERF (match-all)
466 packets, 699180 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: access-group name IPERF_TRAFFIC
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/9/0
(pkts output/bytes output) 457/685554
shape (peak) cir 128000, bc 512, be 512
target shape rate 256000
Class-map: class-default (match-any)
32 packets, 3447 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: any
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 32/3447

Here's what you see above:


Our CIR is 128000 bits per second (128 Kbps) which is what we configured with the
shape peak command, our Bc and Be are 512 bits. Each Tc our Bc and Be tokens
are renewed; by using both we can shape up to the PIR of 256 Kbps. With the
default settings, our Be and Bc have the same size so the PIR is 2 * CIR.
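Given the behavior described above (Bc + Be sent every Tc), the resulting peak rate can be sketched as follows (the function name is mine):

```python
def peak_rate_bps(cir_bps: int, bc_bits: int, be_bits: int) -> float:
    """shape peak spends Bc + Be every Tc, so PIR = CIR * (1 + Be / Bc)."""
    return cir_bps * (1 + be_bits / bc_bits)

# Defaults from the show output above: Bc = Be = 512 bits
print(peak_rate_bps(128_000, 512, 512))  # 256000.0 -> the PIR is 2 * CIR
```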

Conclusion

You have now seen how peak shaping works and how to configure it. Most students
however are still confused after learning about peak shaping so let's get a couple of
things out of the way.
First of all, you need to keep in mind that the talk about CIR and PIR is only
"cosmetic" when it comes to peak shaping. In the output of the router you can see a
CIR and PIR but that's it. It's not like the router will automatically adapt its shaping
rate up to the CIR or PIR or something. When we configure peak shaping, we set a
maximum rate and the router will shape up to that rate...that's it!
When there is a lot of traffic we will be shaping up to that maximum rate so we can
say we are shaping up to the PIR of the ISP. When there isn't as much traffic, maybe
we are only using our Bc tokens so we can say that we are shaping up to the CIR
but that's it, it's just talk.
One question I see all the time is that students ask if the following two commands
will achieve the same thing:

shape peak 128000


shape average 256000

We have seen that shape peak 128000 gets us a PIR of 256 Kbps and shape average
256000 should get us a CIR of 256 kbps. Do we get the same result with both
commands?
The answer is no, although if you measure it the difference will be insignificant. Let
me explain this:
When we use shape average, each Tc we renew the Bc tokens which allows us to
shape up to 256 Kbps. However, after a period of inactivity when our bucket is full
with Bc and Be tokens, we can spend both during one Tc which means we will shape
up to 512 Kbps for a short time.

With peak shaping, we renew Bc and Be tokens each Tc, unused tokens
are discarded so there is no way to get above 256 Kbps. Shape average would give a
slightly better result but only after a period of inactivity.
The following commands however should give you the exact same result:

shape peak 128000


shape average 256000 1024 0

When you disable the Be tokens for shape average, it will be unable to burst so it
can no longer get above 256 Kbps.


I hope this lesson has been useful, if you have any questions feel free to leave a
comment!


Shaping with Burst up to Interface Speed


One of the QoS topics that CCIE R&S students have to master is shaping and how to
calculate the burst size. In this short article I want to explain how to calculate the
burst size so that you can allow bursting up to the physical interface rate after a
period of inactivity. Let's take a look at an example:

Above we have a router with two PVCs. The physical AR (Access Rate) of this
interface is 1536 Kbps. The PVC on top has a CIR of 512 Kbps and the one at
the bottom has a CIR of 64 Kbps. Let's say we have the following requirements:

Each PVC has to be shaped to the CIR rate.
After a period of inactivity, both PVCs should be able to burst up to the physical access rate.
Tc should be 50 ms.

So how do we calculate this? Let's first calculate the Bc for the first PVC that has a
CIR of 512 Kbps:

512.000 bits in 1000 ms
51.200 bits in 100 ms
25.600 bits in 50 ms

With a CIR rate of 512 Kbps it means we can send 512.000 bits in 1000 ms. In 50 ms
we will be able to send 25.600 bits. Now we have to calculate the number of Be bits
so that we can burst up to the AR rate. The physical access rate is 1536 Kbps:

1.536.000 bits in 1000 ms
153.600 bits in 100 ms
76.800 bits in 50 ms

So with a Tc of 50 milliseconds we have to send 76.800 bits to get up to the physical
access rate. So what value should we configure for our Be?
The Bc and Be combined should be 76.800 bits to get to the physical access rate:
76.800 bits - 25.600 bits (Bc) = 51.200 bits
Set your Bc to 25.600 bits and your Be to 51.200 bits and you'll be able to burst up
to the physical access rate.
Now let's calculate this for the 64 Kbps link, first the Bc:

64.000 bits in 1000 ms
6.400 bits in 100 ms
3.200 bits in 50 ms

So the Bc is 3.200 bits. Now we can calculate the Be:


76.800 bits - 3.200 bits = 73.600 bits
Set your Bc to 3.200 bits and your Be to 73.600 bits and you will be able to burst up
to the physical access rate.
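The calculation for both PVCs can be generalized in a short sketch (Python; the function name is mine):

```python
def burst_to_line_rate(cir_bps: int, ar_bps: int, tc_s: float) -> tuple[int, int]:
    """Return (Bc, Be) in bits so a shaper at cir_bps can burst up to the
    physical access rate ar_bps during one interval of tc_s seconds."""
    bc = round(cir_bps * tc_s)      # bits per Tc at the CIR
    be = round(ar_bps * tc_s) - bc  # extra bits needed to reach the AR
    return bc, be

print(burst_to_line_rate(512_000, 1_536_000, 0.050))  # (25600, 51200)
print(burst_to_line_rate(64_000, 1_536_000, 0.050))   # (3200, 73600)
```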
I hope this has been helpful to you, if you have any questions feel free to ask!


PPP Multilink Link Fragmentation and Interleaving

PPP Multilink lets us bundle multiple physical interfaces into a single logical
interface. We can use this to load balance on layer 2 instead of layer 3. Take a look
at the following picture so I can give you an example:

Above we have two routers connected to each other with two serial links. If we want
to use load balancing we could do this on layer 3, just configure a subnet on each
serial link and activate both links in a routing protocol like EIGRP or OSPF.
When we use PPP multilink we can bundle the two serial links into one logical layer
3 interface and we'll do load balancing on layer 2. PPP multilink will break the
outgoing packets into smaller pieces, put a sequence number on them and send
them out the serial interfaces. Another feature of PPP multilink is fragmentation.
This could be useful when you are sending VoIP between the two routers.
Most voice codecs require a maximum delay of 10 ms between the different VoIP
packets. Let's say the serial link offers 128 Kbit of bandwidth, how long would it
take to send a voice packet that is about 60 bytes?
60 bytes * 8 = 480 bits / 128.000 = 0.00375
So it takes 3.75 ms to send the voice packet, which is far below the required
10 ms. We can run into issues however when we also send data packets over this
link. Let's say we have a 1500-byte data packet that we want to send over this link:
1500 bytes * 8 = 12.000 bits / 128.000 = 0.09375

So it will take about 94 ms to send this packet over this 128 Kbit link. Imagine we
are sending this data packet and a voice packet arrives at the router; it will have to
wait for up to 94 ms before the data packet is out of the way, exceeding our 10
ms maximum delay.
Multilink PPP offers a solution by fragmenting the data packets
and interleaving the voice packets between the data fragments. This way a large
data packet will not delay a voice packet for too long.
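Under these assumptions, the largest fragment that still fits the delay budget can be computed as follows (a sketch; the helper name is mine):

```python
def max_fragment_bytes(rate_bps: int, max_delay_s: float) -> int:
    """Largest fragment (bytes) whose serialization stays within the delay budget."""
    return int(rate_bps * max_delay_s / 8)

# 10 ms budget on a 128 Kbit link
print(max_fragment_bytes(128_000, 0.010))  # 160 bytes
```

This 160-byte figure appears to line up with the "160 weight, 152 frag size" shown in the `show ppp multilink` output further down, presumably with the difference left for multilink headers.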
Anyway, now you have an idea what multilink PPP is about, let me show you how to
configure it. I will use the following topology:

I am using two routers with only a single serial link between them. Even though it's
called multilink PPP you can still configure it on only one link. This is how we
configure it:
R1(config)#interface virtual-template 1
R1(config-if)#bandwidth 128
R1(config-if)#ip address 192.168.12.1 255.255.255.0
R1(config-if)#fair-queue
R1(config-if)#ppp multilink fragment delay 10
R1(config-if)#ppp multilink interleave
R2(config)#interface virtual-template 1
R2(config-if)#bandwidth 128
R2(config-if)#ip address 192.168.12.2 255.255.255.0
R2(config-if)#fair-queue
R2(config-if)#ppp multilink fragment delay 10
R2(config-if)#ppp multilink interleave

We will use a virtual-template to configure the IP addresses and to configure PPP
multilink. The ppp multilink fragment delay command lets us configure the
maximum delay. In my example I've set it to 10 ms. Don't forget to use ppp
multilink interleave or interleaving won't work. I'm using WFQ to prioritize voice
traffic before data traffic using the fair-queue command. Interleaving occurs
between WFQ and the FIFO queue and uses 2 queues, a normal and a priority queue.
Non-fragmented traffic goes to the priority queue and fragmented traffic will use
the normal queue. Now let's link the virtual template to PPP multilink:
R1(config)#multilink virtual-template 1
R2(config)#multilink virtual-template 1

And last but not least, configure the interfaces to use PPP multilink:
R1(config)#interface serial 0/0
R1(config-if)#bandwidth 128
R1(config-if)#encapsulation ppp
R1(config-if)#ppp multilink
R2(config)#interface serial 0/0
R2(config-if)#bandwidth 128
R2(config-if)#encapsulation ppp
R2(config-if)#ppp multilink

Just make sure you enable PPP encapsulation and PPP multilink on the interfaces
and you are done. Now let's see if it's working or not:
R1#show ppp multilink
Virtual-Access2
Bundle name: R2
Remote Endpoint Discriminator: [1] R2
Local Endpoint Discriminator: [1] R1
Bundle up for 00:00:25, total bandwidth 128, load 1/255
Receive buffer limit 12192 bytes, frag timeout 1000 ms
Interleaving enabled
0/0 fragments/bytes in reassembly list
0 lost fragments, 0 reordered
0/0 discarded fragments/bytes, 0 lost received
0x2 received sequence, 0x2 sent sequence
Member links: 1 (max not set, min not set)
Se0/0, since 00:00:25, 160 weight, 152 frag size
No inactive multilink interfaces
R1#show interfaces virtual-access 2
Virtual-Access2 is up, line protocol is up
Hardware is Virtual Access interface
Internet address is 192.168.12.1/24
MTU 1500 bytes, BW 128 Kbit/sec, DLY 100000 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation PPP, LCP Open, multilink Open
Open: IPCP
MLP Bundle vaccess, cloned from Virtual-Template1
Vaccess status 0x40, loopback not set
Keepalive set (10 sec)
DTR is pulsed for 5 seconds on reset
Last input 00:01:05, output never, output hang never
Last clearing of "show interface" counters 00:01:05
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output
drops: 0
Queueing strategy: weighted fair
Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Conversations 0/1/32 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
Available Bandwidth 96 kilobits/sec
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 0 bits/sec, 0 packets/sec
2 packets input, 28 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 0 giants, 0 throttles
0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored, 0 abort
2 packets output, 40 bytes, 0 underruns
0 output errors, 0 collisions, 0 interface resets
0 output buffer failures, 0 output buffers swapped out
0 carrier transitions

Above you can see that PPP multilink is enabled and that we are using interleaving.
If you have any questions or comments let me know!

Introduction to RSVP

IntServ and RSVP


When it comes to QoS we have two models that we can use:

DiffServ (Differentiated Services)


IntServ (Integrated Services)

In short, when using DiffServ we implement QoS on a hop-by-hop basis where we
use the ToS byte of IP packets for classification. IntServ is completely different: it's a
signaling process where network flows can request a certain bandwidth and delay
that is required for the flow. IntServ is described in RFC 1633 and there are two
components:

Resource reservation

Admission control

Resource reservation signals the network and requests a certain bandwidth and
delay that is required for a flow. When the reservation is successful, each network
component (mostly routers) will reserve the bandwidth and delay that is
required. Admission control is used to permit or deny a certain reservation. If we
allowed all flows to make a reservation, we couldn't guarantee any service
anymore.
When a host wants to make a reservation, it sends an RSVP reservation request
using an RSVP path message. This message is passed along the route towards the
destination. When a router can guarantee the required bandwidth/delay, it will
forward the message. Once it reaches the destination, the destination replies with an RSVP resv
message. The same process occurs in the opposite direction: each router
checks whether it has enough bandwidth/delay for the flow and, if so, forwards
the message towards the source of the reservation. Once the host receives the
reservation message we are done.
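The hop-by-hop admission described above can be sketched in a few lines of Python (an illustrative toy model, not the real RSVP state machine): a Resv message travels from the destination back towards the source, and every router on the path must admit the flow against its remaining budget.

```python
# Toy model of RSVP-style hop-by-hop admission control.

class Router:
    def __init__(self, name: str, reservable_kbps: int):
        self.name = name
        self.reservable_kbps = reservable_kbps  # admission-control budget
        self.reserved_kbps = 0

    def admit(self, kbps: int) -> bool:
        """Admission control: accept the flow only if budget remains."""
        if self.reserved_kbps + kbps <= self.reservable_kbps:
            self.reserved_kbps += kbps
            return True
        return False

def reserve(path: list, kbps: int) -> bool:
    """Process a Resv message hop by hop (destination -> source)."""
    return all(router.admit(kbps) for router in reversed(path))

path = [Router("R1", 128), Router("R2", 128), Router("R3", 128), Router("R4", 128)]
print(reserve(path, 64))   # True: every hop can still reserve 64 kbps
print(reserve(path, 128))  # False: only 64 kbps of budget is left per hop
```

This also shows why IntServ scales poorly: every router must keep per-flow state (the reserved_kbps bookkeeping) for every reservation.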
Now this might sound nice, but the problem with IntServ is that it's difficult to
scale: each router has to keep track of each reservation for each flow. What if a
certain router doesn't support IntServ or loses its reservation information?
Currently RSVP is mostly used for MPLS traffic engineering; we use DiffServ for QoS
implementations.

RSVP Configuration Example


Anyway, let's take a look at the configuration of RSVP. I will be using the following
topology:

First we need to enable RSVP on all interfaces:
R1(config)#interface fa0/0
R1(config-if)#ip rsvp bandwidth 128 64
R2(config)#interface fa0/0
R2(config-if)#ip rsvp bandwidth 128 64
R2(config)#interface fa0/1
R2(config-if)#ip rsvp bandwidth 128 64
R3(config)#interface fa0/0
R3(config-if)#ip rsvp bandwidth 128 64
R3(config)#interface fa0/1
R3(config-if)#ip rsvp bandwidth 128 64
R4(config)#interface fa0/0
R4(config-if)#ip rsvp bandwidth 128 64

If you don't specify the bandwidth, then by default RSVP will use up to 75% of the
interface bandwidth for reservations. I'm telling RSVP that it can only use up to 128
kbps for reservations and that the largest reservable flow can be 64 kbps.
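These two limits can be expressed as a small helper (a sketch of the rule, not IOS code; the assumption that the default per-flow limit equals the total reservable bandwidth is mine):

```python
# Sketch of the "ip rsvp bandwidth [total] [per-flow]" limits described above.

def rsvp_limits(interface_kbps: int, total_kbps=None, per_flow_kbps=None):
    """Return (total reservable, largest single reservation) in kbps."""
    total = total_kbps if total_kbps is not None else int(interface_kbps * 0.75)
    per_flow = per_flow_kbps if per_flow_kbps is not None else total
    return total, per_flow

print(rsvp_limits(100_000))           # (75000, 75000): FastEthernet defaults
print(rsvp_limits(100_000, 128, 64))  # (128, 64): our "ip rsvp bandwidth 128 64"
```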
Now we'll configure R1 to act as an RSVP host so it sends an RSVP path
message:
R1(config)#ip rsvp sender-host 192.168.34.4 192.168.12.1 tcp 23 0
64 32

I will make a reservation between destination 192.168.34.4 and source 192.168.12.1
using TCP destination port 23 (telnet). The source port is 0, which means it can be
anything. The average bitrate is 64 kbps with a maximum burst of 32 kbytes.

R1#show ip rsvp sender
To              From            Pro DPort Sport Prev Hop        I/F   BPS
192.168.34.4    192.168.12.1    TCP 23    0     192.168.12.1          64K

Above you see the reservation that we configured on R1. Now let's configure R4 to
respond to this reservation:

R4(config)#ip rsvp reservation-host 192.168.34.4 192.168.12.1 tcp
23 0 ff ?
  load  Controlled Load Service
  rate  Guaranteed Bit Rate Service

I can choose between controlled load or guaranteed bit rate. Guaranteed bit rate
means the flow gets both a bandwidth and a delay guarantee; controlled load
guarantees the bandwidth but not the delay.
R4(config)#ip rsvp reservation-host 192.168.34.4 192.168.12.1 tcp
23 0 ff rate 64 32

Let's verify our configuration on R4:


R4#show ip rsvp reservation
To              From            Pro DPort Sport Next Hop        I/F   Fi Serv BPS
192.168.34.4    192.168.12.1    TCP 23    0     192.168.34.4          FF RATE 64K

You can see that it has received the reservation from R1. What about R2 and R3?
R2#show ip rsvp reservation
To              From            Pro DPort Sport Next Hop        I/F   Fi Serv BPS
192.168.34.4    192.168.12.1    TCP 23    0     192.168.23.3    Fa0/1 FF RATE 64K
R3#show ip rsvp reservation
To              From            Pro DPort Sport Next Hop        I/F   Fi Serv BPS
192.168.34.4    192.168.12.1    TCP 23    0     192.168.34.4    Fa0/1 FF RATE 64K

Above you can see that R2 and R3 also made the reservation. We can also check
RSVP information on the interface level:
R2#show ip rsvp interface detail | begin Fa0/1
Fa0/1:
Interface State: Up
Bandwidth:
Curr allocated: 64K bits/sec
Max. allowed (total): 128K bits/sec
Max. allowed (per flow): 64K bits/sec
Max. allowed for LSP tunnels using sub-pools: 0 bits/sec
Set aside by policy (total): 0 bits/sec
Admission Control:
Header Compression methods supported:
rtp (36 bytes-saved), udp (20 bytes-saved)
Traffic Control:
RSVP Data Packet Classification is ON via CEF callbacks
Signalling:
DSCP value used in RSVP msgs: 0x3F
Number of refresh intervals to enforce blockade state: 4
Number of missed refresh messages: 4
Refresh interval: 30
Authentication: disabled

Above you can see how R2 reserved 64 kbps on its FastEthernet0/1 interface.

Debugging RSVP
If you really want to see what is going on you should enable a debug; let's do so on
all routers:
R1,R2,R3,R4#debug ip rsvp
RSVP signalling debugging is on

Now let's create a new conversation on R1:


R1(config)#ip rsvp sender-host 192.168.34.4 192.168.12.1 tcp 80 0
32 16

This is what you will see:


R1#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.12.1 (on sender host)
RSVP: new path message passed parsing, continue...
RSVP: Triggering outgoing Path refresh
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
66C8D7CC refresh interval = 0mSec
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Path message
to 192.168.12.2

You can see that R1 has received a path message from itself and that it forwards it
towards 192.168.12.2.
R2#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.12.1 (on FastEthernet0/0)
RSVP: new path message passed parsing, continue...
RSVP: Triggering outgoing Path refresh
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
650988D4 refresh interval = 0mSec
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Path message
to 192.168.23.3
R3#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.23.2 (on FastEthernet0/0)
RSVP: new path message passed parsing, continue...
RSVP: Triggering outgoing Path refresh
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
6508EB64 refresh interval = 0mSec
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Path message
to 192.168.34.4
R4#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Received Path
message from 192.168.34.3 (on FastEthernet0/0)
R4#
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh Path psb =
6618082C refresh interval = 30000mSec
RSVP: can't forward Path out received interface

R2 receives the path message from R1 and forwards it towards R3, which forwards
it to R4. Now let's configure R4 to respond:
R4(config)#$reservation-host 192.168.34.4 192.168.12.1 tcp 80 0 ff
rate 64 32

This is what you will see:


R4#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (receiver host) from 192.168.34.4
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object

RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not


found--new one
RSVP-RESV: Admitting new reservation: 674BE740
RSVP-RESV: Locally created reservation. No admission/traffic
control needed
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: start requesting 64
kbps FF reservation for 192.168.12.1(0) TCP-> 192.168.34.4(80) on
FastEthernet0/0 neighbor 192.168.34.3
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh RESV,
req=674C39E8, refresh interval=0mSec [cleanup timer is not awake]
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Resv message
to 192.168.34.3
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: RESV CONFIRM Message
for 192.168.34.4 (FastEthernet0/0) from 192.168.34.3
R3#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (FastEthernet0/1) from 192.168.34.4
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not
found--new one
RSVP-RESV: Admitting new reservation: 66171920
RSVP-RESV: reservation was installed: 66171920
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: start requesting 64
kbps FF reservation for 192.168.12.1(0) TCP-> 192.168.34.4(80) on
FastEthernet0/0 neighbor 192.168.23.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh RESV,
req=661769B4, refresh interval=0mSec [cleanup timer is not awake]
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Resv message
to 192.168.23.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: RESV CONFIRM Message
for 192.168.34.4 (FastEthernet0/0) from 192.168.23.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending RESV CONFIRM
message to 192.168.34.4
R2#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (FastEthernet0/1) from 192.168.23.3
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not
found--new one
RSVP-RESV: Admitting new reservation: 674B8E00
RSVP-RESV: reservation was installed: 674B8E00
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: start requesting 64
kbps FF reservation for 192.168.12.1(0) TCP-> 192.168.34.4(80) on
FastEthernet0/0 neighbor 192.168.12.1
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Refresh RESV,
req=674BDE94, refresh interval=0mSec [cleanup timer is not awake]

RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending Resv message


to 192.168.12.1
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: RESV CONFIRM Message
for 192.168.34.4 (FastEthernet0/0) from 192.168.12.1
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending RESV CONFIRM
message to 192.168.23.3
R1#
RSVP session 192.168.34.4_80[0.0.0.0]: Received RESV for
192.168.34.4 (FastEthernet0/0) from 192.168.12.2
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: this RESV has a
confirm object
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: reservation not
found--new one
RSVP-RESV: Admitting new reservation: 66C95AF4
RSVP-RESV: reservation was installed: 66C95AF4
RSVP 192.168.12.1_0->192.168.34.4_80[0.0.0.0]: Sending RESV CONFIRM
message to 192.168.12.2

Above you can see that each router forwards the RESV message and makes the
reservation for this particular flow. That's all I wanted to show you for now, I hope
this helps you to understand RSVP. If you have any questions feel free to ask.

RSVP DSBM (Designated Subnetwork Bandwidth Manager)

RSVP will work fine when you need to make a reservation on the link between two
routers, but what if you have a shared segment? An example could be a couple of
routers that are connected to the same half-duplex Ethernet network. These routers
share the bandwidth, so when multiple routers make RSVP reservations it's
possible that we oversubscribe the segment.
The routers should know about all RSVP reservations that are made on this shared
segment, and that's exactly why we have the DSBM (Designated Subnetwork
Bandwidth Manager).

One of the routers on the shared segment is elected as the DSBM and all other
RSVP routers will proxy their RSVP PATH and RESV messages through the DSBM.
This way we have centralized admission control and we won't risk
oversubscribing the shared segment.
Besides being in charge of admission control, the DSBM can also distribute other
information to RSVP routers, for example the amount of non-reservable traffic that
is allowed on the shared segment, or the average/peak rate and burst size for
non-RSVP traffic.
The election of the RSVP DSBM uses the following rules:

The router with the highest priority becomes the DSBM.

In case the priority is the same, the highest IP address is the tie-breaker.

Multiple DSBM candidates can be configured for a shared segment, but the DSBM is
non-preemptive. This means that once the election is over, the router that was elected
will stay the DSBM, even if you configure another router later with a higher
priority.
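The election rules above can be sketched in a few lines of Python (a hypothetical helper, just to illustrate the ordering and the non-preemptive behavior):

```python
# Sketch of the DSBM election rules: highest priority wins, the highest IP
# address breaks ties, and a sitting DSBM is never preempted.
import ipaddress

def elect_dsbm(candidates, current_dsbm=None):
    """candidates: list of (ip, priority) tuples for DSBM-candidate routers."""
    if current_dsbm is not None:
        return current_dsbm  # non-preemptive: the elected DSBM keeps the role
    return max(candidates, key=lambda c: (c[1], ipaddress.ip_address(c[0])))

candidates = [("192.168.123.1", 64), ("192.168.123.3", 64)]
print(elect_dsbm(candidates))  # ('192.168.123.3', 64): highest IP wins the tie

# A router added later with a higher priority does not take over:
late = candidates + [("192.168.123.2", 100)]
print(elect_dsbm(late, current_dsbm=("192.168.123.3", 64)))  # unchanged
```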
Configuration-wise it's easy to implement DSBM. Let's use the following topology to
see how it works:

Just 3 routers connected to the same switch. First we will enable RSVP on all
interfaces:
R1(config)#interface FastEthernet 0/0
R1(config-if)#ip rsvp bandwidth
R2(config)#interface FastEthernet 0/0
R2(config-if)#ip rsvp bandwidth
R3(config)#interface FastEthernet 0/0
R3(config-if)#ip rsvp bandwidth

Now we'll configure R3 as the DSBM for this segment:


R3(config)#interface FastEthernet 0/0
R3(config-if)#ip rsvp dsbm candidate

If you want, you can configure the DSBM to tell other RSVP routers to limit the
reservations:
R3(config-if)#ip rsvp bandwidth 2048

I'll set the maximum bandwidth to 2048 kbps. We can also set a number of
parameters for non-RSVP traffic:
R3(config-if)#ip rsvp dsbm non-resv-send-limit ?
  burst     Maximum burst (Kbytes)
  max-unit  Maximum packet size (bytes)
  min-unit  Minimum policed unit (bytes)
  peak      Peak rate (Kbytes/sec)
  rate      Average rate (Kbytes/sec)

Let's verify if R3 has won the election:


R1#show ip rsvp sbm detail
Interface: FastEthernet0/0
  Local Configuration            Current DSBM
  IP Address: 192.168.123.1      IP Address: 192.168.123.3
  DSBM candidate: no             I Am DSBM: no
  Priority: 64                   Priority: 64
  Non Resv Send Limit            Non Resv Send Limit
    Rate: unlimited                Rate: 2147483 Kbytes/sec
    Burst: unlimited               Burst: 536870 Kbytes
    Peak: unlimited                Peak: unlimited
    Min Unit: unlimited            Min Unit: unlimited
    Max Unit: unlimited            Max Unit: unlimited
R2#show ip rsvp sbm detail
Interface: FastEthernet0/0
  Local Configuration            Current DSBM
  IP Address: 192.168.123.2      IP Address: 192.168.123.3
  DSBM candidate: no             I Am DSBM: no
  Priority: 64                   Priority: 64
  Non Resv Send Limit            Non Resv Send Limit
    Rate: unlimited                Rate: 2147483 Kbytes/sec
    Burst: unlimited               Burst: 536870 Kbytes
    Peak: unlimited                Peak: unlimited
    Min Unit: unlimited            Min Unit: unlimited
    Max Unit: unlimited            Max Unit: unlimited
R3#show ip rsvp sbm detail
Interface: FastEthernet0/0
  Local Configuration            Current DSBM
  IP Address: 192.168.123.3      IP Address: 192.168.123.3
  DSBM candidate: yes            I Am DSBM: yes
  Priority: 64                   Priority: 64
  Non Resv Send Limit            Non Resv Send Limit
    Rate: unlimited                Rate: 2147483 Kbytes/sec
    Burst: unlimited               Burst: 536870 Kbytes
    Peak: unlimited                Peak: unlimited
    Min Unit: unlimited            Min Unit: unlimited
    Max Unit: unlimited            Max Unit: unlimited

With R3 as the DSBM it will be in the middle of all RSVP messages. We can test this
by configuring a reservation between R1 and R2:
R1(config)#ip rsvp sender-host 192.168.123.2 192.168.123.1 tcp 23 0
128 64
R2(config)#ip rsvp reservation-host 192.168.123.2 192.168.123.1 tcp
23 0 ff rate 128 64

When we check R3 you can see that it knows about the reservation that we just
configured:
R3#show ip rsvp installed
RSVP: FastEthernet0/0
BPS    To              From            Protoc DPort Sport
128K   192.168.123.2   192.168.123.1   TCP    23    0

That's all I wanted to share about DSBM for now. If you have any questions feel free
to ask!

Block website with NBAR on Cisco Router

When you create access-lists or QoS (Quality of Service) policies, you normally use
layer 1, 2, 3 and 4 information to match on certain criteria. NBAR (Network Based
Application Recognition) adds application layer intelligence to our Cisco IOS router,
which means we can match and filter based on certain applications.
Let's say you want to block a certain website like youtube.com. Normally you would
look up the IP addresses that YouTube uses and block those using an access-list, or
perhaps police/shape them in your QoS policies. Using NBAR we can match on the
website address instead of the IP addresses, which makes life a lot easier. Let's look at
an example where we use NBAR to block a website (youtube.com in this case):
R1(config)#class-map match-any BLOCKED
R1(config-cmap)#match protocol http host "*youtube.com*"
R1(config-cmap)#exit

First I create a class-map called BLOCKED and use match protocol to invoke
NBAR. As you can see, I match on the hostname youtube.com. The * is a wildcard
that matches any characters, so this effectively blocks all sub-domains of
youtube.com; for example, subdomain.youtube.com will also be blocked. Now we
need to create a policy-map:

R1(config)#policy-map DROP
R1(config-pmap)#class BLOCKED
R1(config-pmap-c)#drop
R1(config-pmap-c)#exit

The policy-map above matches our class-map BLOCKED and when this matches the
traffic will be dropped. Last but not least we need to apply the policy-map to the
interface:
R1(config)#interface fastEthernet 0/1
R1(config-if)#service-policy output DROP

I will apply the policy-map to the interface that is connected to the Internet. Now
whenever someone tries to reach youtube.com their traffic will be dropped. You
can verify this on your router using the following command:
R1#show policy-map interface fastEthernet 0/1
FastEthernet0/1
Service-policy output: DROP
Class-map: BLOCKED (match-any)
1 packets, 500 bytes
5 minute offered rate 0 bps, drop rate 0 bps
Match: protocol http host "*youtube.com*"
1 packets, 500 bytes
5 minute rate 0 bps
drop
Class-map: class-default (match-any)
6101 packets, 340841 bytes
5 minute offered rate 10000 bps, drop rate 0 bps
Match: any

Above you see that we have a match for our class-map BLOCKED; apparently
someone tried to reach youtube.com. The class-map class-default matches all other
traffic, which is permitted.
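The "*youtube.com*" pattern behaves like shell-style globbing, which we can approximate with Python's fnmatch module (just an illustration; NBAR's matcher is its own implementation):

```python
# Approximating the NBAR host wildcard with shell-style globbing.
from fnmatch import fnmatch

pattern = "*youtube.com*"
for host in ["youtube.com", "subdomain.youtube.com", "vimeo.com"]:
    print(host, fnmatch(host, pattern))
# youtube.com True
# subdomain.youtube.com True
# vimeo.com False
```

Note that the trailing * also matches hosts with text after youtube.com, which is something to keep in mind when writing these patterns.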
In case you were wondering: you can only use NBAR to match HTTP traffic, not
HTTPS. The reason is that NBAR matches on the HTTP GET request, which is
encrypted when you use HTTPS. Take a look at the following wireshark capture
for HTTP:

Above you see the HTTP GET request for youtube.com in plaintext. This is what
NBAR looks at and matches on. Now let me show you the HTTPS capture:

Above you see a wireshark capture of HTTPS traffic between my computer and
youtube.com. It's impossible for NBAR to look into these SSL packets and see what
website you are requesting. In this case your only option is to use a proxy server
or to block the IP addresses using an access-list.
This is how you can block websites using your normal Cisco IOS router. If you have
any questions just leave a comment!
