
WHITE PAPER

MULTILAYER PACKET CLASSIFICATION: OPTIMIZING BUSINESS-CRITICAL APPLICATIONS


Matrix N-Series with Distributed Forwarding Engines (DFEs)

Table of Contents
1. Introduction
2. Convergence of applications on the network
3. Plain old switching is not enough any more
4. What is Multilayer Traffic Classification?
5. The Matrix E7, N3 and N7 with NetSight Atlas Policy Manager
6. Applications for Multilayer Packet Classification
   6.1 Quality of Service for Business-Critical Applications
       6.1.1 Application Prioritization
       6.1.2 Application Rate Limiting
   6.2 Network Security
       6.2.1 Application Containment
       6.2.2 Application Filtering
       6.2.3 Network Security
   6.3 Voice and Data Convergence
7. Conclusion


1. Introduction
IT is a fast-evolving environment. New business applications, frameworks and solutions emerge every day, making the IT environment more and more mission-critical along the way. It is not enough to understand the complexities of new technologies and how they work. IT organizations must also know how to build and manage suitable infrastructures to support them, and how to integrate them with the vast collection of existing technologies so that the anticipated business solutions can be realized. This is quite a challenge for IT organizations today. With technologies like wireless for user mobility, Instant Messenger and video/voice real-time collaboration tools, and new-age data storage, IT architects have to carefully plan and deploy the infrastructure and supporting systems that will enable companies to get the most out of these critical solutions. More and more, IT organizations will be looking for tools and architectures that let them deploy and manage systems directly tied to delivering the business benefits found in these advanced technologies. As an IT administrator, would you feel comfortable if 50% of your company's employees discovered the benefits of real-time voice and video collaboration through an Instant Messenger application that came with the operating system you were deploying? Most IT administrators would not feel confident that their network architecture and support systems could deliver this widespread service in a user-acceptable manner. So what can we do to better prepare networks for the evolution of technology? We can find solutions that allow the technology to be properly aligned with the business.

2. Convergence of applications on the network


Enterasys Networks' convergence strategy is not only about supporting voice, video and data over the enterprise network. It is about supporting old applications (Web, e-mail, FTP, etc.), current applications (ERP, CRM, etc.) and next-generation applications (voice and video over IP, storage over IP, Instant Messaging, etc.) over a common network infrastructure. The network of today and of the future must support all these applications at the level of service they require. It is important to consider that this has to be handled in an evolutionary manner, not a revolutionary one. New applications emerge every day; network infrastructures shouldn't have to be upgraded or replaced constantly to support them. This means the infrastructure should not be locked to a specific system or set of applications. The infrastructure has to be highly flexible, providing a set of network services that can be applied to any set of applications, current or future. This is exactly what Enterasys Networks provides in its network systems: forward migration and legacy support.

3. Plain old switching is not enough any more


Historically, the first applications to be supported on the network were e-mail and Web surfing. Access to the network was limited to a small group of people in the enterprise. As enterprises relied more and more on the network to increase employee productivity, network access proliferated, with the final result that all employees were connected to the enterprise network. Application sharing and database access became more widespread. Some of these new distributed applications, among them databases, medical imaging and Enterprise Resource Planning, were bandwidth-hungry, putting pressure on enterprise networks to support them effectively. The first answer was to throw more bandwidth at the network. This led to the evolution of Ethernet technology to Fast Ethernet, Gigabit Ethernet, and now 10-Gigabit Ethernet. And, to a certain extent, this was an acceptable solution. The limitations of simply adding bandwidth became apparent with the advent of latency-sensitive applications. Video streaming and telephony over IP have different requirements than other applications. The integrity of information transfer is somewhat less important than it is for database applications, for example. It is acceptable to have some loss of information in a phone conversation; you can ask your correspondent to repeat a sentence. However, a delay in information transfer is unacceptable. You don't want to hear what your correspondent said three seconds later. So, to support applications like video streaming or telephony over IP, wire-speed switching or routing is important, but providing these applications with the minimum possible latency is even more important.


Figure 1 below shows today's enterprise applications and their requirements in terms of bandwidth and latency on the network. This highlights the need for differentiated application support on the network.

Application                                          Bandwidth Requirements   Latency Sensitivity
Video Conferencing                                   High                     High
File Transfer, CAD, Desktop Publishing, IP Storage   High                     Low
Instant Messaging                                    Low                      Moderate
Voice over IP                                        Low-Moderate             High
E-mail, Web                                          Low-Moderate             Low
Thin Clients                                         Low-Moderate             Moderate
ERP, CRM                                             Moderate                 Moderate
Video Streaming                                      Moderate-High            High

Figure 1: Enterprise applications with associated bandwidth requirements and latency

The concept of Quality of Service (QoS) was introduced to support latency-sensitive applications. The demand for QoS to fulfill the need for tight control over latency and throughput in a mixed application environment is undeniable. QoS refers to a set of mechanisms for guaranteeing levels of bandwidth, maximum latency limits and controlled inter-packet timing. A true QoS strategy strives to meet the needs of all traffic flows in the network by providing wire-speed bandwidth and low latency to all applications. However, when output links on a switch are overloaded and internal buffers fill up, QoS is required to prioritize traffic by creating rules or policies that stipulate priority. Policy-based QoS gives network managers control over latency and throughput so that the demands of high-priority traffic can be met. For example, video streaming can be serviced first to ensure minimal latency, then ERP traffic can be serviced because it is critical to the enterprise business, and finally e-mail can be serviced because it is considered less important.

Another area to consider is where in the network Quality of Service should be implemented. At the backbone? At the edge? After many discussions and disagreements between network equipment vendors, the final outcome is that QoS is required everywhere. At the core layer, static aggregation QoS rules are important to specify that, from an overall company business point of view, ERP traffic is more important than Web surfing, for example. At the edge layer, at the closest possible point to the user, we need flexible, dynamic QoS rules. How useful is it to prioritize ERP traffic for a user who doesn't use that application? Users need priority for the applications that are critical for them, to fulfill the mission of their position in the enterprise.

And what about security? Since the non-event of Y2K, enterprises have realized the importance of securing their intellectual property against resource misuse or intentional attacks. Traditionally, security was handled in routers through security filters and Access Control Lists. Although this provides a reasonable level of security, it is not enough: it addresses only core security. How do you prevent users from misusing resources? If only core security is activated, it does not prevent forbidden applications from entering the network. Those applications not only put intellectual property in jeopardy; unneeded, bandwidth-greedy applications can also saturate the network. Enterasys focuses not only on QoS to expedite the various business applications, but also on security. We believe the network should understand that an application is present and whether that application is allowed to exist, and act on that efficiently. We can prevent applications from entering the network, and we can restrict the use of certain applications to certain users or classes of users.

To summarize, given the obvious need to provide QoS and security, it is important to carefully consider the right switching platforms and associated network management systems to make sure business-critical applications are serviced optimally.
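The servicing order in the example above (video first, then ERP, then e-mail) behaves like a strict-priority scheduler. The short Python sketch below is purely illustrative of that idea; the queue names come from the example and the code does not represent how the switch hardware actually works.

```python
from collections import deque

# Hypothetical strict-priority scheduler: higher-priority queues are always
# drained before lower-priority ones, mirroring the servicing order in the
# example above (video streaming, then ERP, then e-mail).
queues = {
    "video": deque(),   # latency-sensitive, serviced first
    "erp": deque(),     # business-critical
    "email": deque(),   # least time-sensitive
}
service_order = ["video", "erp", "email"]

def enqueue(app: str, frame: str) -> None:
    queues[app].append(frame)

def transmit_next():
    """Return the next frame to put on the wire, or None if all queues are empty."""
    for app in service_order:
        if queues[app]:
            return queues[app].popleft()
    return None

enqueue("email", "mail-1")
enqueue("erp", "erp-1")
enqueue("video", "video-1")
# Video is serviced first even though it was queued last.
print([transmit_next() for _ in range(3)])   # ['video-1', 'erp-1', 'mail-1']
```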


4. What is Multilayer Traffic Classification?


The delivery of a services-based network (one that can provide the appropriate services to the appropriate person) requires an infrastructure that is highly flexible and that can support very complex and granular rules at the service edge of the network. This offers greater security of both network-stored data and network bandwidth. Enterasys products (including the Matrix N-Series switches) were designed and architected to deliver this level of granularity. Delivery of this intelligent network edge is achieved through the use of Classification Rules, implemented at the user's point of ingress. These rules allow any number of actions to be applied dynamically on any combination of Layer 2, 3 or 4 variables (based on the OSI model). Figure 2 below shows the Multilayer Traffic Classification process.

[Figure 2 summarizes the classification criteria and actions: traffic can be classified on a port or user basis using Layer 2 fields (port, MAC address, EtherType such as IP, IPX or AppleTalk), Layer 3 fields (IP address, IP protocol such as TCP or UDP, ToS) and Layer 4 fields (TCP/UDP port such as HTTP or SAP); the Matrix N-Series can then deny, permit or contain the traffic, or assign it a Class of Service (priority/QoS, rate limit), for individual ports/users or groups.]

Figure 2: The Multilayer Traffic Classification process

The process of traffic classification is made up of three distinct steps: roles definition, frame inspection and action.

Roles Definition
The first step is to define where a traffic classification rule needs to be applied. A rule can be port-based or user-based; the choice depends on whether user-authentication functionality is implemented. For port-based multilayer traffic classification, it is possible to choose one port, several ports or entire switches on which to apply the classification rule. Using network-based authentication (IEEE 802.1X, MAC-based authentication or Web-based authentication), it is possible to identify a user or a group of users who will be the target of the classification rule. Such a solution is called policy-based management and is delivered by Enterasys Networks within the User Personalized Networking (UPN) architecture.

Frame Inspection
The second step is frame inspection. The goal is to identify traffic based upon the frame's Data Link, Network or Transport Layer information (Layers 2, 3 and 4, respectively, in the OSI model, shown in Figure 3 below). Although the Distributed Forwarding Engines make classification decisions based on Layers 2-4, their forwarding mechanism is still that of a Layer 2 store-and-forward device (a bridge or switch) or a Layer 3 router.


7 Application
6 Presentation
5 Session
4 Transport
3 Network
2 Data Link
1 Physical

Figure 3: The OSI Model

At Layer 2, an administrator can classify frames based on MAC addresses (physical addresses) or the EtherType field, which identifies the Layer 3 protocol (e.g., IP, IPX, AppleTalk). At Layer 3, an administrator can classify based on specific information contained within the Layer 3 header of an IP or IPX frame. For IP frames, it is possible to look at the IP Type of Service (ToS) information used for DiffServ Quality of Service. The IP Protocol Type defines the Layer 4 protocol that is used (e.g., TCP, UDP, ICMP), and of course it is possible to classify based on IP addresses and subnets. IPX frames can be classified based on IPX Class of Service, Packet Type, Network and Socket Numbers. At Layer 4, an administrator can classify IP frames based on specific TCP or UDP port numbers. These Layer 4 port numbers give information on the application carried in the frame (e.g., Web, e-mail, SNMP).
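As an informal illustration of the multilayer inspection described above, the following Python sketch matches on a handful of Layer 2, 3 and 4 fields. The Frame structure, field names and labels are simplifications invented for this sketch; they are not the DFE's internal representation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative frame representation holding the Layer 2/3/4 fields discussed
# above; this is not the DFE's internal data structure.
@dataclass
class Frame:
    src_mac: str
    dst_mac: str
    ethertype: str                      # e.g. "IP", "IPX", "AppleTalk"
    src_ip: Optional[str] = None
    ip_protocol: Optional[str] = None   # e.g. "TCP", "UDP", "ICMP"
    tos: Optional[int] = None           # IP Type of Service / DSCP byte
    l4_dst_port: Optional[int] = None   # e.g. 80 for HTTP, 25 for SMTP

def classify(frame: Frame) -> str:
    """Label a frame using Layer 2, 3 and 4 information (labels are invented here)."""
    if frame.ethertype != "IP":
        return f"layer2:{frame.ethertype.lower()}"   # e.g. AppleTalk or IPX traffic
    if frame.l4_dst_port == 80:
        return "layer4:http"
    if frame.l4_dst_port == 25:
        return "layer4:smtp"
    if frame.tos:                                    # any non-zero ToS/DSCP marking
        return "layer3:marked"
    return "default"

print(classify(Frame("aa:bb:cc:00:00:01", "aa:bb:cc:00:00:02", "IP",
                     "10.0.0.1", "TCP", 0, 80)))    # layer4:http
```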

Action
Once the traffic has been defined in the classification rule, the network administrator has to create a set of actions to be taken by the Distributed Forwarding Engines each time a frame matching that definition is recognized.

On the security side, it is possible to create access-control types of actions. The administrator can choose to have the specified traffic forwarded or discarded by the switch. This provides the ability to filter unwanted traffic, including traffic to specific servers or application traffic such as Web traffic (HTTP) or network management traffic (SNMP). Only business-critical or authorized applications will be transmitted over the network. It is also possible to increase the overall security of the infrastructure by preventing hacking of active equipment such as routers or switches. To increase the overall availability and security of the network infrastructure, it is possible to logically group users of a given protocol or application together and control the flow of their traffic on the network. Network administrators can then make sure that no protocol or application will overload the network. This mechanism is referred to as containment, and is based on VLAN (Virtual LAN, IEEE 802.1Q) technology.

On the QoS side, it is possible to assign a Class of Service to any type of application. These priority levels determine which applications should be serviced first, based on business requirements. It is also possible to apply rate-limiting functionality to applications. The DFE modules can limit the rate at which traffic enters network ports. Rate limiting can be combined with Layer 3/4 prioritization to construct a committed information rate (CIR) that guarantees the delivery of critical traffic through the enterprise network.

In summary, advanced multilayer switching protects business-critical applications by ensuring optimal delivery of those applications through QoS mechanisms, while the security features protect them by containing, limiting or even forbidding non-critical applications. Traffic classification brings a great level of control to your network architecture at the closest possible point to users. Enterasys Networks can offer a network infrastructure that is capable of recognizing and specially handling any application. The quality and consistency of the user experience can be improved by activating those features in the DFE modules.
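Continuing the illustrative sketch, the fragment below pairs match conditions with the kinds of actions just described (forward or discard, containment VLAN, Class of Service priority, rate limit). The rule set, VLAN ID and priority values are hypothetical examples, not Enterasys defaults.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical action set mirroring the capabilities described above: access
# control (forward or discard), containment (VLAN assignment), Class of Service
# priority and rate limiting. Values below are examples, not Enterasys defaults.
@dataclass
class Action:
    forward: bool = True
    vlan: Optional[int] = None          # containment VLAN, if any
    priority: Optional[int] = None      # 802.1p priority 0-7
    rate_limit_mbps: Optional[float] = None

# Each rule pairs a match predicate (here, on a classification label) with the
# action to apply when a frame matches.
rules: list[tuple[Callable[[str], bool], Action]] = [
    (lambda label: label == "layer4:http", Action(priority=5)),
    (lambda label: label == "layer2:appletalk", Action(vlan=20)),    # containment
    (lambda label: label == "layer4:snmp", Action(forward=False)),   # filtering
]

def apply_rules(label: str) -> Action:
    for matches, action in rules:
        if matches(label):
            return action
    return Action()   # default: forward with no special handling

print(apply_rules("layer4:http"))   # Action(forward=True, vlan=None, priority=5, rate_limit_mbps=None)
```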


5. The Matrix E7, N3 and N7 with NetSight Atlas Policy Manager


Although the Multilayer Traffic Classification functionality can bring many benefits to the enterprise, especially by providing differentiated application handling to optimize transport of business-critical applications, a couple of areas must be considered to use the technology successfully.

The switching platform is one of the critical parts of the solution. Delivering optimal application handling through traffic classification requires an infrastructure that is highly flexible and that can support very complex and granular rules at the service edge of the network. The switch or router needs to be able to read and manipulate information contained in the packet at Layers 2/3/4 of the OSI model. In some (usually older) switching or routing architectures, activating this functionality introduces additional latency and degrades performance; the end result is the opposite of what we expect. Providing this kind of flexibility while maintaining the desired level of performance can only be achieved in a platform whose architecture was intended to deliver such services, like the Matrix N-Series with the Distributed Forwarding Engines.

It is also important to realize that merely having the capability to deploy these rules does not fulfill the requirement. Today's LAN switches usually support these mechanisms for QoS and security, but can those features be implemented and maintained in a realistic manner? Without a configuration or management software abstraction layer, it can quickly become a nightmare. Configuring QoS through the Command Line Interface on dozens or thousands of switches is difficult, considering that the configuration has to be done switch by switch. Troubleshooting means going back to every switch to check or correct the configuration. There is no coherency in this approach. Classification rules must be instrumented for deployment in an automated, system-level fashion to achieve widespread acceptance in large enterprise networks. System-level deployments require that the network connectivity components be modeled as a single entity, or as a small number of reasonably sized entities, depending on the needs of a given scenario. This requires software that can understand the relationships and dependencies between a number of network devices and configure them as a system. Additionally, automation is absolutely critical for the deployment of complex and granular rule sets. A model that mandates the management of complex rule sets on an individual, per-element basis would surely collapse under its own weight at implementation, administration or troubleshooting time. There has also traditionally been a lack of relationship between the technical considerations of deploying classification rules and the business reasons for doing so.

What now facilitates the use of this functionality, and makes it appropriate for deployment in large networks, is the automation and system-level control provided by Enterasys' innovative combination of authentication and policy-based management tools. NetSight Atlas Policy Manager software provides a relationship hierarchy (shown in Figure 4) that creates links between the network technologies (classification rules, bottom layer) and the business functions (top layer). This link is the Services tier (middle layer), which allows multiple classification rules to be aggregated into something understandable. The Services layer provides the common language that both IT and business people can understand. It is the glue between the IT department and the business. Both groups understand the concepts of e-mail and Internet access, and can develop solutions around those concepts. With this unique modeling of the enterprise business in software, it becomes easy to understand where classification rules have to be deployed.

Figure 4: The NetSight Atlas Policy Manager software relationship hierarchy


The creation of classification rules is done quickly, using the wizard-based role and policy (service) creation in the NetSight Atlas Policy Manager application. The process is simple, and it takes less than a minute to configure a classification rule (a minimal sketch of the resulting hierarchy follows the list):

1. Service Creation

2. Classification Rule(s) Creation within the Service

3. Traffic Description (Layer 2, 3 or 4 -> Field -> Classification type -> Field value)

4. Action Definition

5. Service Mapped to Role
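To visualize the hierarchy these steps produce, here is a small, hypothetical Python data model. The class names and fields are illustrative only and do not correspond to NetSight Atlas Policy Manager's actual object model.

```python
from dataclasses import dataclass, field

# Illustrative model of the relationship hierarchy described above: a Role
# aggregates Services, and each Service aggregates Classification Rules.
# These class names are hypothetical, not NetSight's actual object model.
@dataclass
class ClassificationRule:
    layer: int          # 2, 3 or 4
    field_name: str     # e.g. "tcp.dst_port"
    value: str          # e.g. "80"
    action: str         # e.g. "priority 5" or "discard"

@dataclass
class Service:
    name: str
    rules: list[ClassificationRule] = field(default_factory=list)

@dataclass
class Role:
    name: str
    services: list[Service] = field(default_factory=list)

# Steps 1-5 above, expressed with this model (values are examples only):
web_access = Service("Internet Access", [ClassificationRule(4, "tcp.dst_port", "80", "priority 5")])
sales = Role("Sales", [web_access])
print(f"Role '{sales.name}' carries {len(sales.services[0].rules)} rule(s)")
```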


Once configured, the classification rules and associated services (or policy containers) can be re-used whenever and wherever needed, as many times as needed. This removes the complexity of configuring QoS and policy rules. Once the business model above is defined in the NetSight Atlas Policy Manager application, network administrators can deploy the policy rule set to the entire network with a single mouse click, significantly reducing the deployment time of QoS and security settings. Troubleshooting is also more efficient, as it can be handled from one central location, the policy management application, instead of on several discrete switches. In summary, traffic classification brings a great level of control over your network architecture at the closest possible point to users. Enterasys Networks can offer a network infrastructure that is capable of recognizing and specially handling any application. The quality and consistency of the user experience can be improved by activating those features in the Matrix N-Series switches. Within the User Personalized Networking framework and using the NetSight Atlas Policy Manager application, Enterasys Networks provides an automated, coherent way to configure and operate end-to-end Quality of Service and security throughout the network. The next section provides more details on how Multilayer Packet Classification can be used to treat business-critical applications.

6. Applications for Multilayer Packet Classification


This section details how Multilayer Packet Classification optimizes transport of business-critical applications. These advanced functionalities (shown in Figure 5) can also solve some common issues found in networks, including edge and distribution security. Overall performance can be increased and uplink oversubscription problems can be avoided. Traffic classification can optimize any network infrastructure.

[Figure 5 depicts common issues between users and servers on a network: density, performance, security, oversubscription and capacity.]

Figure 5: Common network issues

6.1 Quality of Service for Business-Critical Applications


IEEE 802.1p defines a method of prioritizing packets based on a Layer 2 tag, which is inserted into the frame by an end station, switch or router. The standard defines eight priority levels. Each priority level is mapped to a specific transmit queue by the switch or router. The insertion of the priority value (0-7) allows the DFE modules to make intelligent forwarding decisions. The DFEs support four transmit queues (0-3) per Fast Ethernet or copper Gigabit Ethernet port. Traffic mapped to higher-priority queues will be transmitted before lower-priority traffic. Traffic classification defines which traffic patterns have to be serviced first. This scheme provides Class of Service functionality for 802.1D devices. Note that the DFEs can also use DiffServ technology as defined in IETF RFC 2474. DFEs can create Quality of Service parameters based on both the IP Precedence and Type of Service fields, now known as the DiffServ Code Point (DSCP) field. Using DiffServ QoS, it is possible to define delay (latency), throughput and reliability (integrity) parameters for each frame sent into the network. The DFEs also have the ability to rewrite the Type of Service field.
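The relationship between 802.1p priorities, the four transmit queues and the DSCP can be made concrete with a short sketch. The priority-to-queue mapping below is an illustrative assumption rather than the documented Matrix default (the hardware's queue-numbering convention may also differ); the DSCP placement in the former ToS byte follows RFC 2474.

```python
# The DFEs expose eight 802.1p priorities (0-7) and four transmit queues per port.
# This mapping is purely illustrative (higher index = higher priority here); the
# Matrix default mapping and queue-numbering convention may differ.
def priority_to_queue(priority: int) -> int:
    assert 0 <= priority <= 7
    return priority // 2           # 0-1 -> queue 0, ..., 6-7 -> queue 3

# DiffServ: the DSCP occupies the upper six bits of the former IP ToS byte
# (RFC 2474), so rewriting the ToS field is how the DSCP gets rewritten.
def dscp_to_tos_byte(dscp: int) -> int:
    assert 0 <= dscp <= 63
    return dscp << 2

print(priority_to_queue(7), priority_to_queue(3))   # 3 1
print(hex(dscp_to_tos_byte(46)))                    # 0xb8: Expedited Forwarding
```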


6.1.1 Application Prioritization


The Priority feature on the Matrix N-Series takes this concept a step further to allow better-defined Class of Service configurations. Network administrators can, on a port-by-port or role-by-role basis, classify a frame based on its Layer 2-4 information with a higher or lower priority status than other received frames. A good use of this feature is a configuration where a network administrator assigns priority to three network applications (SAP R/3, Web traffic and e-mail, in that order), as shown in Figure 6 below.


Figure 6: Assigning priority to SAP, Web traffic and e-mail

There are two main steps required to accomplish this: configuring the classification rules and configuring the priority-to-transmit-queue mapping for the switch.

Classification Rules

Rule 1 (SAP R/3): All frames to or from the IP address of the SAP R/3 server will be tagged with a priority indicator of 7 (highest).

Rule 2 (Web): All frames with a TCP port number of 80 (HTTP) will be tagged with a priority indicator of 5 (medium).

Rule 3 (e-mail): All frames with a TCP port number of 25 (SMTP) will be tagged with a priority indicator of 3 (low).

Priority Queuing Configuration

Based on the default Matrix priority-to-transmit-queue mapping, the values selected above ensure that each frame classification type is mapped to the desired transmit queue.
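To illustrate how these three priority values can land in distinct transmit queues, here is a small sketch. The SAP server address is a placeholder and the priority-to-queue thresholds are assumptions for illustration, not the documented Matrix default mapping.

```python
# The three rules above expressed as (application, match criterion, 802.1p priority).
# The SAP server address is a placeholder; the queue thresholds are assumptions.
rules = [
    ("SAP R/3", {"server_ip": "10.1.1.10"}, 7),   # hypothetical SAP R/3 server address
    ("Web",     {"tcp_port": 80},           5),
    ("E-mail",  {"tcp_port": 25},           3),
]

def queue_for_priority(priority: int) -> str:
    """Map an 802.1p priority to a labeled transmit queue (assumed thresholds)."""
    if priority >= 6:
        return "high"
    if priority >= 4:
        return "medium"
    return "low"

for app, match, priority in rules:
    print(f"{app:7s} priority {priority} -> {queue_for_priority(priority)} queue")
# SAP R/3 priority 7 -> high queue
# Web     priority 5 -> medium queue
# E-mail  priority 3 -> low queue
```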


Classification                              Assigned 802.1p Priority   Transmit Queue     Result
SAP (IP header, Layer 3 source IP)          7                          Queue 1 (high)     Forwarded first
HTTP (TCP header, Layer 4 source port)      5                          Queue 2 (medium)   Forwarded second
SMTP (TCP header, Layer 4 source port)      3                          Queue 3 (low)      Forwarded third

Figure 7: Application of advanced Class of Service functionality

Result: With the classification rules for the network shown in Figure 7 above, the Matrix N-Series provides advanced Class of Service functionality for individual network applications, while still forwarding at Layer 2 (or Layer 3).

6.1.2 Application Rate Limiting


The Distributed Forwarding Engines can limit the rate at which traffic enters network ports. In an enterprise environment, rate limiting can be combined with Layer 3/4 prioritization to construct a committed information rate (CIR) that guarantees the delivery of critical traffic through the enterprise network even when it is congested. The network shown below in Figure 8 demonstrates this concept. Building on the example in Section 6.1.1, the Matrix N3 supports 50 end users attached via 100 Mbps Ethernet ports and is connected to an X-Pedition Switch Router via a Gigabit Ethernet uplink. If each user tried to transfer data out of the wiring closet at the maximum possible rate, there could be up to 5 Gbps of traffic attempting to leave the chassis over the 1 Gbps link, which would result in traffic being arbitrarily dropped. The network administrator needs to guarantee delivery of SAP R/3 traffic by prioritizing it above all other traffic coming into the chassis (as described in the previous section), and also needs to control the rate of SAP R/3 traffic. Unless both conditions are enforced, there is still a potential for high-priority traffic to oversubscribe the outbound Gigabit link. The solution is to configure rate limiting to provide each user with 16 Mbps of high-priority bandwidth into the fabric. This caps the maximum possible load of outbound high-priority traffic at 800 Mbps (16 Mbps x 50 users). The Gigabit link has ample capacity to support this load.
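The sizing arithmetic behind this example can be checked in a few lines; the numbers are the ones given above.

```python
# Worked check of the committed-information-rate arithmetic described above.
users = 50
access_port_mbps = 100                    # each user attaches at 100 Mbps
uplink_mbps = 1000                        # Gigabit Ethernet uplink
per_user_high_priority_cap_mbps = 16      # rate limit applied per user

worst_case_offered_mbps = users * access_port_mbps                    # 5000 (5 Gbps)
capped_high_priority_mbps = users * per_user_high_priority_cap_mbps   # 800

print(f"Worst-case offered load: {worst_case_offered_mbps} Mbps")
print(f"High-priority load after rate limiting: {capped_high_priority_mbps} Mbps")
print(f"Uplink headroom: {uplink_mbps - capped_high_priority_mbps} Mbps")
# 800 Mbps of high-priority traffic fits comfortably within the 1 Gbps uplink.
```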

Figure 8: Rate limiting and prioritization for CIR to guarantee delivery of critical traffic

The end result is that the uplink is not oversubscribed and SAP gets guaranteed high-priority delivery. This functionality increases end-to-end performance and avoids uplink oversubscription.


6.2 Network Security

6.2.1 Application Containment


Classification for containment allows network administrators to logically group users of a given protocol or application and control the flow of their traffic on the network. For example, as shown in Figure 9, in a media/publishing company it is possible to separate end-user traffic based on the protocol used by each department. Traffic from pre-press (model makers), who do desktop publishing and use the AppleTalk protocol, can be separated from traffic of the other departments, which use the IP protocol. This optimizes bandwidth on the network, making sure that no department consumes the entire bandwidth at the expense of the others.
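As an informal illustration of containment, the sketch below assigns frames to VLANs by EtherType so that AppleTalk traffic stays in its own broadcast domain. The VLAN IDs are arbitrary examples chosen for this sketch; the EtherType values are the standard ones for AppleTalk and IPv4.

```python
# Illustrative containment: frames are grouped into VLANs by EtherType so that
# AppleTalk (pre-press) traffic is kept apart from IP traffic. The VLAN IDs are
# arbitrary examples; the EtherType values are the standard ones.
VLAN_BY_ETHERTYPE = {
    0x809B: 20,   # AppleTalk -> pre-press VLAN
    0x0800: 10,   # IPv4      -> general-purpose VLAN
}
DEFAULT_VLAN = 1

def containment_vlan(ethertype: int) -> int:
    """Return the VLAN a frame is contained in, based on its EtherType."""
    return VLAN_BY_ETHERTYPE.get(ethertype, DEFAULT_VLAN)

print(containment_vlan(0x809B))  # 20: AppleTalk contained in its own VLAN
print(containment_vlan(0x0800))  # 10: IP traffic
```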


Figure 9: Separating end-user traffic based on protocol

This approach also provides some level of security. For example, in a retail company, administrators may want to separate cash register traffic from all other traffic, so that no one can access cash register traffic in order to steal financial information from the company. As shown in Figure 10, application containment can solve performance issues from the edge to the core of the network and increase the overall security of communications on the network.


Figure 10: Using application containment to solve performance issues


6.2.2 Application Filtering


Frame classification for access control can be used to prevent specific traffic from entering the network. Network administrators can filter specific unwanted traffic, such as broadcast routing protocols, traffic from specific IP addresses, or application traffic such as HTTP or SMTP. Unsupported or legacy protocols (e.g., Banyan Vines, NetBIOS) can be filtered at the ingress point of the network. This preserves valuable bandwidth for business-critical applications. Network management protocols can be filtered from users who are not network administrators. With the same goal of preserving bandwidth, peer-to-peer applications can be filtered for students in an education environment to prevent them from downloading MP3 music. Application filtering increases the overall infrastructure performance and ensures a great level of security at the network edge, hardening the entire network.
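The sketch below shows what such an ingress filter could look like in simplified form. The blocked-port list is only an example policy for this sketch (NetBIOS on ports 137-139, SNMP on 161/162 for non-administrators), not an Enterasys-recommended configuration, and real deployments would match on more than the Layer 4 destination port.

```python
# Illustrative edge filter: drop frames whose Layer 4 destination port belongs
# to a protocol the administrator has chosen to block. The port list is an
# example policy for this sketch, not a recommendation.
BLOCKED_L4_PORTS = {
    137, 138, 139,   # NetBIOS (name, datagram and session services)
    161, 162,        # SNMP, filtered for users who are not administrators
}

def ingress_permit(l4_dst_port: int, is_admin: bool = False) -> bool:
    """Return True if a frame may enter the network at this edge port."""
    if l4_dst_port in (161, 162) and is_admin:
        return True                      # administrators keep SNMP access
    return l4_dst_port not in BLOCKED_L4_PORTS

print(ingress_permit(139))                  # False: NetBIOS filtered at the edge
print(ingress_permit(161, is_admin=True))   # True: SNMP allowed for admins
```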

6.2.3 Network Security


The Distributed Forwarding Engines can provide many levels of security using Layer 2/3/4 classification. This includes additional security for the infrastructure itself. Figure 11 below illustrates a network configuration that includes a router and a Matrix N3. In this configuration, end users connect to the Matrix N3, and some of these users have been hacking into the router and altering its configuration. A simple classification rule can be put in place to prevent these types of occurrences. Since the end users should never need to communicate directly with the router using the router's IP address (192.168.1.2 in the example below), it is easy to recognize and discard traffic from those hackers.

Figure 11: Using classification to increase security

The end result is that any frame from a user trying to hack into the router will be discarded before it reaches the router. The same kind of security can be applied to all network elements, or used to protect specific servers (e.g., the DHCP server). Overall network security is therefore increased.
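A simplified version of that rule, using the router address from the example, might look like the sketch below; it is illustrative only and does not reflect the switch's actual ACL syntax.

```python
from ipaddress import ip_address

# Sketch of the router-protection rule described above: frames from user ports
# addressed directly to the router's management IP are discarded. The address
# matches the example in Figure 11.
ROUTER_IP = ip_address("192.168.1.2")

def permit_from_user_port(dst_ip: str) -> bool:
    """Discard anything a user sends straight to the router itself."""
    return ip_address(dst_ip) != ROUTER_IP

print(permit_from_user_port("192.168.1.2"))    # False: dropped at the edge
print(permit_from_user_port("192.168.10.50"))  # True: normal user-to-server traffic
```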


6.3 Voice and Data Convergence


As stated earlier, Voice-over-IP technologies are beginning to generate interest in enterprises. It is important to be able to prioritize voice and data applications according to their differentiated needs. The vast majority of IP phones available on the market today have embedded switching capabilities. Therefore, IP phones send frames onto the network that carry either an 802.1p priority indicator (802.1p tag) or an already populated DSCP (DiffServ Code Point) field conveying the latency requirements of voice traffic. As shown in Figure 12, by using this 802.1p or DSCP information, the Distributed Forwarding Engines can recognize Voice-over-IP traffic and apply the proper Quality of Service mechanisms (please refer to Section 6.1 of this document for more details): set up high priority so that voice traffic is transmitted first, given its need for very low latency, and set up rate limiting so that voice traffic always has the necessary bandwidth available on the network. Even though voice traffic requires very little bandwidth, network administrators will always want to make sure that some bandwidth is always available for it.

[Figure 12 depicts IP phones and users connected over a 1 Gbps uplink to mail, Web and IP PBX servers, with voice delivery guaranteed by QoS.]


Figure 12: Guaranteed voice delivery with QoS

The end result is that every employee in the enterprise can make phone calls over the network in a reliable manner. This is mandatory for enterprises that plan to replace their traditional voice systems with telephony over IP.
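As a final illustration, the sketch below treats a frame as voice when it carries the standard Expedited Forwarding DSCP (46) or an assumed 802.1p voice value, and maps it to the highest queue of the earlier illustrative mapping. The 802.1p value and queue index are assumptions for this sketch.

```python
from typing import Optional

# Illustrative recognition of voice frames by DSCP or 802.1p marking.
# DSCP 46 is the standard Expedited Forwarding code point (RFC 3246); the
# 802.1p value and the queue index below are assumptions, not Matrix defaults.
DSCP_EF = 46
VOICE_PCP = 5          # assumed 802.1p tag used by the IP phones in this sketch
HIGHEST_QUEUE = 3      # highest queue in the illustrative 0-3 numbering used earlier

def transmit_queue(dscp: Optional[int] = None, pcp: Optional[int] = None) -> int:
    """Return the transmit queue for a frame, favoring voice markings."""
    if dscp == DSCP_EF or pcp == VOICE_PCP:
        return HIGHEST_QUEUE
    return 0               # best-effort queue for everything else in this sketch

print(transmit_queue(dscp=46))  # 3: voice serviced first
print(transmit_queue(pcp=0))    # 0: best effort
```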

7. Conclusion
The standards-based Multilayer Frame Classification capabilities of the Matrix N-Series with the Distributed Forwarding Engines provide network administrators with a powerful set of utilities that allow more intelligent configuration and management of today's and tomorrow's converged networks. These functionalities, many previously viewed as optional, become mandatory in enterprise networks where multiple (and increasingly many) applications co-exist and each application needs to be serviced differently. NetSight Atlas Policy Manager is the graphical interface that allows quick and easy set-up of those classification rules. By providing a relationship between those technical rules and the business requirements placed on IT, NetSight Atlas Policy Manager allows automated creation and enforcement of enterprise-wide QoS and security rules throughout the network. Coupled with network-based authentication, NetSight Atlas Policy Manager allows the creation of a User Personalized Network.

For more information on NetSight Atlas Policy Manager: http://www.enterasys.com/netsight
For more information on Enterasys User Personalized Network: http://www.enterasys.com/upn




© 2003 Enterasys Networks, Inc. All rights reserved. Lit. #9013244-1 12/03

