2.4 ORCHESTRATION
3 Network Abstraction
3.1 ONECONTROLLER
3.3 ANALYTICS-AS-A-SERVICE
4 Network Infrastructure
4.1 HIGH AVAILABILITY
4.6 ELASTICITY
5.2 STORAGE
5.3 FIREWALLS
Introduction
Today’s highly distributed wired and wireless networks are designed for increased
flexibility, scale and reliability. Additionally, enterprises of all sizes increasingly
turn to outsourced services of all types – SaaS, PaaS, IaaS, and more. Securely
delivering new end-to-end services and applications across these environments
often results in increased complexity, compromise, and costs. Customers face
challenges that require a data center to be:
Simpler:
• Traffic isn't optimized within the data center or between interconnected data centers.
Faster:
• Rolling out new applications, services, and other network changes is inefficient, ineffective, and takes too long.
• The data center can't scale to accommodate the speed and performance demanded by the explosive growth of new applications and devices.
• The data center is expensive in terms of OPEX; operators are spending too much time on basic maintenance tasks and not enough on leveraging this valuable business asset.
Smarter:
• Operators need better analytics into network usage so they can leverage this Business Intelligence to assure Service Level Agreements (SLAs) and improve the business overall.
Data center LANs are constantly evolving. Business pressures are forcing IT
organizations to adopt new application delivery models. Edge computing models are
transitioning from applications at the edge to virtualized desktops in the data center.
The evolution of the data center from centralized servers to a private cloud is well
underway and will be augmented by hybrid and public cloud computing services.
With data center traffic becoming less client-server centric and more server-server
centric, new data center topologies are emerging. Yesterday’s heavily segmented data
center is becoming less physically segmented and more virtually segmented. Virtual
segmentation allows sharing the same physical infrastructure in the most efficient
manner, leading to both capital and operational expense (CAPEX/OPEX) savings.
Virtual segmentation accelerates the time to spin up new business applications.
With Extreme Networks OneFabric Connect and SDN architecture, the network tier
becomes as dynamic, automated and modifiable as the storage and compute tiers,
providing a simple, fast, and smart networking solution that delivers the benefits of:
• Faster provisioning that supports any application while providing flexibility for
deploying the operator’s choice of best-of-breed applications, solutions and
vendors
This document will walk through the layers and architectural components and describe how Extreme Networks addresses each of those requirements. Derived from the business objectives and the requirements of the applications hosted in the data center (see the Business Applications in Figure 1), the common design goals include:
• Application availability
• Resource utilization
• Business alignment
• Easy integration with business applications through Software Defined Networking for operational efficiency
• Operational efficiency
• Security
NetSight OneView: This screen shows devices and MLAG specific information
• Control managed and unmanaged BYOD devices within the same infrastructure,
with unified single-pane-of-glass visibility
• Easily deploy and manage new applications, devices, users and services
• Enforce policies based on context at the network layer for more comprehensive
control
• View application usage and threat detection information to quarantine users and
devices
• Gain insights into asset information for increased visibility, as well as search and
location capabilities for any user and device on the network
Mobility (MDM) integration partners include AirWatch, MobileIron, JAMF Casper, and Fiberlink MaaS360.
Extreme Networks Data Center Manager (DCM), part of OneFabric Control Center,
provides IT administrators with a transparent, cross-functional service provisioning
and orchestration tool that bridges the divide between the server, networking, and
storage teams and provides a single integrated view of virtual server and network
environments. By enabling the unification and automation of the physical and virtual
network provisioning, Data Center Manager enables networks to benefit from the
high availability required for mission critical application and data performance. DCM
delivers numerous benefits to IT teams, including the ability to:
• Gain granular visibility into traffic flows and real-time and historical data to
simplify incorporation of VMs into the network, improve visibility and control,
and enable simplified auditing of the network via policy-based management
• VMware View
• Citrix XenDesktop
For more information on these and other pre-defined integrations and Technology
Solution Integration Partners, please go to: http://www.extremenetworks.com/
partners/tsp/
2.4 ORCHESTRATION
Customers use a myriad of data center orchestration solutions that need to
seamlessly integrate into the rest of the ecosystem, without vendor lock-in. They want
to rapidly automate service delivery and application provisioning, and to simplify
data center operations, managing the infrastructure elements together. Data center customers want a best-of-breed, multi-vendor environment, and vendors that embrace integration with other vendors are the most appealing.
OpenStack presents its features as an abstract view across many physical devices, and some of these features require dynamic reconfiguration of the devices involved. In addition, OpenStack may use multiple network configurations at the same time. This dynamic nature poses one of the greater challenges when it is connected to a physical network. OpenStack provides an internal, virtual node-to-node network and can also provide physical break-out points into the LAN, along with tenant separation within the internal virtual network. These different networks typically overlap and interweave with each other dynamically, but have to be established across static, physical network equipment.
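As an illustration of that dynamic behavior, the following minimal sketch (assuming an openstacksdk environment with a clouds.yaml profile named "mycloud"; all names and addressing are hypothetical) creates a tenant network and subnet, exactly the kind of on-demand change that must ultimately be realized on the static physical fabric:

    import openstack

    # Connect using a clouds.yaml profile named "mycloud" (hypothetical).
    conn = openstack.connect(cloud="mycloud")

    # Create a tenant network and an IPv4 subnet on it.
    net = conn.network.create_network(name="tenant-a-net")
    subnet = conn.network.create_subnet(
        network_id=net.id,
        name="tenant-a-subnet",
        ip_version=4,
        cidr="192.168.10.0/24",   # hypothetical tenant address space
    )
    print(net.id, subnet.cidr)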
2.5 DEVOPS
To manage the large numbers of data center servers and VMs that typically run identical applications and services, the DevOps community uses tools like Puppet, Chef, Salt, and Ansible. These tools provide a programmatic way to perform configuration tasks. Although traditionally under the compute admin domain, these tools are useful for managing network infrastructure as well, so the same administrative umbrella can cover compute, network, and even storage. They can maintain switches and verify their configuration by making the switches check in with a centralized server; to the tool, the switch looks like just another device. Extreme Networks readily supports DevOps tools, which are based on open source code with vendor-specific interfaces.
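As a hedged sketch of the "check in and verify configuration" idea (not an Extreme-specific API; the device address, credentials, CLI command, and VLAN names below are placeholders), such a tool can connect to a switch over SSH and compare its state against the desired configuration:

    import paramiko

    DESIRED_VLANS = {"Mgmt", "Tenant10", "Tenant20"}   # hypothetical desired state

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("10.0.0.2", username="admin", password="secret")   # placeholder switch

    _, stdout, _ = client.exec_command("show vlan")   # placeholder verification command
    output = stdout.read().decode()
    client.close()

    missing = {v for v in DESIRED_VLANS if v not in output}
    print("compliant" if not missing else f"missing VLANs: {missing}")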
3 Network Abstraction
Abstraction is key to achieving agility, manageability, and elasticity in the data center. Abstraction of the network removes it as a bottleneck, enables VM-to-VM reachability regardless of location, and provides the ability to react rapidly to business application needs.
3.1 ONECONTROLLER
Data centers need a single platform to tie together network management, network access control, network optimization, and advanced application analytics. The single platform can also tie together a heterogeneous, brownfield data center that has deployed multiple vendors, white box and black box, and enable the developing Network Function Virtualization (NFV) solutions. A single platform promotes
community led innovation on top of that platform when it is standards-based and
comprehensive. When it can be deployed ready to integrate with existing and
multi-vendor hardware and software network environments, it preserves customer
investments and avoids vendor lock-in.
The architecture is highly available and redundant, and provides scalability: horizontal scalability to support additional devices, and vertical scalability by being lightweight. For redundancy, if multiple OneControllers are deployed in Active/Active or Active/Standby mode, a network management system can provision and manage the multiple instances and perform life-cycle management. The software can also apply NAC rules to authenticate VMs and provision the VLANs and policies associated with each VM.
From a business perspective, an in-depth view into real-time and historical network and application data also provides valuable information for up-front budget planning when implementing new applications for the business, while also ensuring security compliance for approved applications. This saves both time and money for the business when critical applications are running at the best possible performance.
Purview includes over 14,000 application fingerprints and new fingerprints are
continually added. Application fingerprints are XML files that are developed by
Extreme or they can be developed by users themselves to provide visibility to custom
applications that may be used by an organization. Application detection does not
stop with signature-based fingerprints though. To detect applications that try to
obscure themselves (like P2P and others) Purview also includes heuristics (behavioral
detection) based fingerprints to ensure the applications are detected appropriately.
Through its robust fingerprinting technology, Purview is able to identify an application regardless of whether it runs on well-known ports or uses non-standard ports.
Purview can also be integrated with technologies that provide VM-to-VM traffic for
VMs residing on the same hypervisor. For example, Purview integrates with Ixia’s
Phantom vTap to extend application visibility from physical to virtual networking
across the entire data center. Administrators can mirror traffic from the VMs, sending
traffic of interest to Purview, and then they have a complete view of the data center
for total visibility, security, and control.
4 Network Infrastructure
The network fabric provides interconnectivity between servers, storage, security devices, and the rest of the IT infrastructure. This section describes the requirements of that data center network. The hardware and software should perform well and address these requirements regardless of whether the data center is a small or a large deployment.
Availability | Downtime per year
99.99%      | 53 minutes
99.999%     | 5 minutes
99.9999%    | 30 seconds

The table above shows availability percentages and the corresponding downtime per year.
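As a quick arithmetic check of the figures above, downtime per year follows directly from the availability fraction (assuming a 365.25-day year):

    MINUTES_PER_YEAR = 365.25 * 24 * 60

    for availability in (0.9999, 0.99999, 0.999999):
        downtime_min = (1 - availability) * MINUTES_PER_YEAR
        print(f"{availability:.4%} -> {downtime_min:.2f} minutes/year")
        # 99.99% -> ~53 min, 99.999% -> ~5 min, 99.9999% -> ~0.5 min (about 30 seconds)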
One must also consider data center site redundancy through warm standby or hot
standby, and Disaster Recovery (DR) scenarios.
In warm standby, the primary data center is active and provides services while a secondary data center is in standby. The advantage of warm standby is simplicity of design, configuration, and maintenance. The disadvantages are that there is no load sharing between the two sites, which leads to underutilization of resources; that it is difficult to verify that failover to the secondary site is fully functional when the site is not used during normal operation; and that there can be an unacceptable delay in the event that a manual cutover is required.
In hot standby, both the primary and secondary data centers provide services in a load
sharing manner, optimizing resource utilization. The disadvantage to this scenario is that
it is significantly more complex, requiring the active management of two active data
centers and implementation of bi-directional data mirroring, or synchronous replication,
which results in additional overhead and more bandwidth required between the two sites.
4.2 MULTIPATH
Extreme Networks connectivity solutions provide the ability to compress the traditional
3-tier network into a physical 2-tier network by virtualizing the routing and switching
functions into a single tier (the middle tier). Virtualized routing provides for greater
resiliency and fewer switches dedicated to just connecting switches. Reducing the number of uplinks (switch hops) in the data center improves application performance, as fewer hops mean lower end-to-end latency.
Switches are typically deployed in pairs with redundant links inter-connecting them
for resiliency. While this definitely satisfies the desired high availability, it does introduce the possibility of loops within the environment. In an effort to avoid these loops, traditional Layer 2 loop prevention protocols like Spanning Tree Protocol (STP) were developed. However, STP has many limitations, such as inefficient utilization of links and high convergence times. Modern network fabric designs steer away from
STP. Depending on the size of the deployment and other requirements, customers
can consider several options as described below.
• Firewalls
The device-level redundancy on the BDX8 and Summit X670 is provided via the
feature Multi-Switch Link Aggregation (MLAG), and on the S-series and 7100 series is
provided via the feature Virtual Switch Bonding (VSB).
MLAG peers have a dedicated Inter Switch Connection (ISC) control VLAN that is used exclusively for inter-MLAG peer control traffic and should not be provisioned to carry any user data traffic. Data traffic, however, can traverse the ISC port using other user-defined data VLANs.
This diagram shows the MLAG configuration from the leaf layer down to the server (MLAG port and LAG port).
Extreme also supports MLAG configurations in which a switch has one or two MLAG peers. The design in this document focuses on any given switch having just one MLAG peer, but it is possible for one switch to have two MLAG peers, as in a linear daisy chain of ISCs. Customers can split the downlink hosts or switches between the peers such that, if one of the switches fails, only a subset of hosts or switches would lose half their bandwidth; the remainder, connected to the other two MLAG peers, would not be impacted. All the basic MLAG functionality and traffic forwarding rules apply whether a switch has one or two MLAG peers.
The VSB switches may be connected via dedicated hardware ports or normal 10G ports. S-Series VSB allows two chassis to be fully virtualized to form a single entity; depending on the model, the S-Series can use either multiple ordinary 10G ports or multiple dedicated VSB ports to form the high-speed link between chassis. 7100-Series virtual switch bonding allows up to eight switches to form a single entity.
Fully meshed data center designs leveraging SPB provide load sharing through the efficient use of multiple paths through the network. They improve the resiliency of the
networks because they:
• Restrict failures so only directly affected traffic is impacted during restoration; all
surrounding traffic continues unaffected
4.3 REDUNDANCY
ExtremeXOS software supports dynamic load sharing which includes the Link
Aggregation Control Protocol (LACP) and Health Check Link Aggregation. The Link
Aggregation Control Protocol is used to dynamically determine if link aggregation
is possible and then to automatically configure the aggregation. LACP is part of
the IEEE 802.3ad standard and allows the switch to dynamically reconfigure the
link aggregation groups (LAGs). The LAG is enabled only when LACP detects that
the remote device is also using LACP and is able to join the LAG. Health Check Link
Aggregation is used to create a link aggregation group that monitors a particular
TCP/IP address and TCP port. Static load sharing is also supported but is susceptible
to configuration error.
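The probe behind Health Check Link Aggregation can be pictured with a simple sketch (the monitored address and port are hypothetical; the actual feature is implemented in the switch, not in an external script):

    import socket

    def tcp_alive(host: str, port: int, timeout: float = 2.0) -> bool:
        """Return True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # A member link would be kept in the LAG only while the monitored service responds.
    print(tcp_alive("192.0.2.10", 80))   # hypothetical monitored address and port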
LACP should be used when configuring the LAGs between the spine and leaf switches, as well as for the ISC LAG. Where supported by the virtualization platform, customers should also configure LACP on the switch edge ports and enable it on the servers.
4.3.2 VRRP
4.3.2.1 OVERVIEW
Virtual Router Redundancy Protocol (VRRP) allows multiple switches to provide
redundant routing services to users.
VRRP is used to eliminate the single point of
failure associated with manually configuring a default gateway address on each host
in a network. Without using VRRP, if the configured default gateway fails, you must
reconfigure each host on the network to use a different router as the default gateway.
VRRP provides a redundant path for the hosts: if the active default gateway fails, a backup VRRP router automatically takes over the virtual gateway address, and hosts continue forwarding without any reconfiguration.
When a VRRP router instance becomes active, the master router issues a gratuitous
ARP response that contains the VRRP router MAC address for each VRRP router
IP address. The VRRP MAC address for a VRRP router instance is an IEEE 802 MAC
address in the following hexadecimal format: 00-00-5E-00-01-<vrid>. The master
also always responds to ARP requests for VRRP router IP addresses with an ARP
response containing the VRRP MAC address. Hosts on the network use the VRRP
router MAC address when they send traffic to the default gateway.
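For illustration, the virtual MAC address for a given VRID can be derived directly from that format (a small sketch; VRIDs range from 1 to 255):

    def vrrp_mac(vrid: int) -> str:
        """Build the IPv4 VRRP virtual MAC 00-00-5E-00-01-<VRID> for a given VRID."""
        if not 1 <= vrid <= 255:
            raise ValueError("VRID must be between 1 and 255")
        return f"00-00-5E-00-01-{vrid:02X}"

    print(vrrp_mac(10))   # -> 00-00-5E-00-01-0A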
• VLAN Tracking: track active VLANs, e.g. VLANs that go to the Core
• Route Table Tracking: track specified routes in the routing table, e.g. route to
Core next hop
If the tracking condition fails, then VRRP behaves as though it is locally disabled and relinquishes master status.
4.3.3 ROUTING-AS-A-SERVICE
As described above, Fabric Routing optimizes east/west data center traffic by pushing routing functionality to the edge of the network so that inter-VLAN traffic can be switched at the edge. In an SPB deployment, similar edge routing value can be achieved while eliminating Layer 3 routing protocols, including VRRP, by using a new Extreme Networks feature called Routing-as-a-Service.
In an SPB deployment, eliminating VRRP may be desirable since VRRP has its own
drawbacks, independent of SPB, that may make it undesirable. VRRP is a chatty
protocol that sends advertisements once per second, so it can be fairly resource
intensive and scaling becomes an issue if there are many interfaces that may require
it. Routing-as-a-Service preserves the VRRP property of virtual IP addressing (anycast addressing) and router redundancy without actually using VRRP, while utilizing the best-path attributes of SPB. It interoperates with any traditional switch and can be
positioned in various Layer 2 configurations including Virtual Switch Bonding (VSB)
and redundantly attaching to Rapid and Multiple Spanning Trees (RSTP/MSTP).
These diagrams compare traditional routing with Routing-as-a-Service.
With SPB, it is possible to know the topology of participating devices as SPB uses
IS-IS to compute all paths to all devices. It is also possible to know the whereabouts
of all hosts, i.e. the precise access devices where the hosts attach to the network. If
every device has all VLANs configured, then all subnets in the domain are already locally attached and traffic can be routed directly to its destination via SPB. Hosts
on different VLANs (and thus different IP subnets) within the SPB domain can
communicate with one another through single hop routing. Routing-as-a-Service also
avoids asymmetrical routing.
The LSNAT device then makes the appropriate changes to the packet and its header checksums before passing the packet along. On the return path, the device sees the source and destination pair with the real IP address and port number and knows that it needs to replace this source address and source port number with the VIP, with the appropriate checksum recalculations, before sending the packet along. Persistence is
a critical aspect of LSNAT to ensure that all service requests from a particular client
will be directed to the same real server. Sticky persistence functionality provides less
security but increased flexibility, allowing users to load balance all services through
a virtual IP address. In addition, this functionality provides better resource utilization
and thus increased performance.
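The persistence idea can be sketched with a simple source-IP hash (a toy illustration only; real LSNAT persistence is implemented in the switch and tracks sessions rather than just hashing addresses):

    import hashlib

    REAL_SERVERS = ["10.1.1.10", "10.1.1.11", "10.1.1.12"]   # hypothetical server farm behind the VIP

    def pick_server(client_ip: str) -> str:
        """Map a client consistently to the same real server so its session state stays valid."""
        digest = hashlib.sha256(client_ip.encode()).digest()
        return REAL_SERVERS[digest[0] % len(REAL_SERVERS)]

    print(pick_server("192.0.2.45"))   # the same client IP always yields the same server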
An essential benefit of using LSNAT is that it can be combined with routing policies. By configuring different costs for OSPF links, a second, redundant server farm can be made reachable via other metrics. In this way, load balancing is achieved in a much more cost-effective manner.
• VLAN
• VRF
• VR
In the given design, all the Layer 3 interfaces will be configured on the Spine switch
with the Leaf switches acting as pure Layer 2 transport. Thus no VR/VRF instances
will be configured on the TOR switches, but all the VLANs need to be configured on
both leaf and spine.
Note: VRF in this context is not to be confused with Layer 3 VPN VRFs.
4.4.1 VLAN
VLAN: At a basic level, the term VLAN refers to a collection of devices that communicate as if they were on the same physical LAN. LAN segments are not restricted by the hardware that physically connects them, hence "virtual." The default VLAN is untagged on all ports, and there can be a maximum of 4094 VLANs on any Extreme platform.
• Management VLANs
• Tenant VLANs
• Public VLANs
4.4.2 VRS
The ExtremeXOS software supports virtual routers. This capability allows a single
physical switch to be split into multiple virtual routers. This feature isolates traffic
forwarded by one VR from traffic forwarded on a different virtual router.
Each virtual router maintains a separate logical forwarding table, which allows
the virtual routers to have overlapping IP addressing. Because each virtual router
maintains its own separate routing information, packets arriving on one virtual router
are never switched to another. Ports on the switch can either be used exclusively by
one virtual router, or can be shared among two or more virtual routers. Each VLAN
can belong to only one virtual router.
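A toy model of those constraints (illustrative only, not EXOS code) makes the relationships concrete: each VLAN belongs to exactly one virtual router, while each VR keeps its own forwarding table so address space may overlap:

    class VirtualRouter:
        def __init__(self, name: str):
            self.name = name
            self.vlans = set()
            self.routes = {}   # per-VR forwarding table, so prefixes may overlap across VRs

    vlan_owner = {}   # VLAN id -> owning VR name

    def assign_vlan(vr: VirtualRouter, vlan_id: int) -> None:
        """Attach a VLAN to a VR, enforcing that a VLAN belongs to only one virtual router."""
        owner = vlan_owner.get(vlan_id)
        if owner is not None and owner != vr.name:
            raise ValueError(f"VLAN {vlan_id} already belongs to VR {owner}")
        vlan_owner[vlan_id] = vr.name
        vr.vlans.add(vlan_id)

    gold = VirtualRouter("VR-Gold")
    assign_vlan(gold, 100)   # OK
    # assign_vlan(VirtualRouter("VR-Silver"), 100) would raise, since VLAN 100 is taken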
There are System VRs and User-defined VRs. The System VRs are used for
Management and there is one default VR-Default pre-allocated. The user VRs can be
created for tenants, and the VRs support any of the switch routing protocols, including BGP, OSPF, IS-IS, RIP, etc.
If customers are deploying a multi-tenant environment where only the Gold service tier uses dedicated VRs, the VR scale limit can dictate the number of Gold tenants that can be supported. Examples where a dedicated VR may be required for a tenant include higher bandwidth needs, or routing needs that extend beyond static routing or BGP routing.
4.4.3 VRF
Virtual Router and Forwarding instances (VRFs) are similar to VRs in that they
maintain isolation. The routing tables for each VRF are separate from the tables for
other VRs and VRFs, so VRFs can support overlapping address space.
VRFs are created as children of user VRs or VR-Default, and each VRF supports Layer 3 routing and forwarding. VRFs can run only static routing and BGP.
VRFs tend to scale better than VRs as they require fewer resources, so VRFs are
preferable for tenant isolation.
4.5.1 OVERVIEW
Quality of Service (QoS) is based on the idea that the requirements of some applications and users are more critical than others, which means that some traffic receives preferential treatment. By using QoS mechanisms, network administrators can use existing resources efficiently and ensure the required level of service without reactively expanding or over-provisioning their networks.
• Queue or buffer frames and packets that exceed specified limits and forward
them later (rate shaping)
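The rate-shaping behavior in the item above can be pictured with a minimal token-bucket sketch (illustrative only; a hardware shaper also drains its queue on a timer rather than only when new frames arrive):

    import collections
    import time

    class Shaper:
        def __init__(self, rate_bps: float, burst_bytes: int):
            self.rate = rate_bps / 8.0          # token refill rate in bytes per second
            self.burst = burst_bytes            # bucket depth
            self.tokens = float(burst_bytes)
            self.last = time.monotonic()
            self.queue = collections.deque()    # frames held back instead of dropped

        def send(self, frame: bytes) -> None:
            self.queue.append(frame)
            self._drain()

        def _drain(self) -> None:
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            while self.queue and self.tokens >= len(self.queue[0]):
                frame = self.queue.popleft()
                self.tokens -= len(frame)
                # forward(frame) would happen here; excess frames wait for tokens to accumulate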
4.5.2 CLASSIFICATION
In the given IaaS platform, traffic can be classified into categories such as:
• Infrastructure traffic
ACL-based traffic classification provides the most control over QoS features and can be used to apply ingress and egress rate limiting. An ACL can be used to add traffic to a traffic group based on the following frame or packet components:
• Ethertype
• IP protocol
• TCP flag
• IP fragmentation
Depending on the platform you are using, traffic classified into an ACL traffic group can have one of several actions applied.
In addition, port-based traffic classification groups forward traffic to egress QoS profiles based on the incoming port number, and VLAN-based traffic classification forwards traffic to egress QoS profiles based on the VLAN membership of the ingress port.
When you are configuring ACL-based traffic groups, you can use the qosprofile
action modifier to select an egress QoS profile. For DiffServ-, port-, and VLAN-based
traffic groups, the traffic group configuration selects the egress QoS profile. For CoS
dot1p traffic groups on all platforms, the dot1p value selects the egress QoS profile.
BlackDiamond X8 series switches and Summit family switches have two default egress QoS profiles, named QP1 and QP8. Up to six additional QoS profiles (QP2 through QP7) can be configured on the switch. The default settings for egress QoS profiles are summarized in the following table.
802.1p value | QoS profile | Priority     | Max bandwidth | Weight | Notes
0-6          | QP1         | 1 (Low)      | 100%          | 1      | Part of the default configuration; cannot be deleted.
-            | QP2         | 2 (LowHi)    | 100%          | 1      | You must create this QoS profile before using it.
-            | QP3         | 3 (Normal)   | 100%          | 1      | You must create this QoS profile before using it.
-            | QP4         | 4 (NormalHi) | 100%          | 1      | You must create this QoS profile before using it.
-            | QP5         | 5 (Medium)   | 100%          | 1      | You must create this QoS profile before using it.
-            | QP6         | 6 (MediumHi) | 100%          | 1      | You must create this QoS profile before using it.
-            | QP7         | 7 (High)     | 100%          | 1      | You must create this QoS profile before using it; it cannot be created on SummitStack.
7            | QP8         | 8 (HighHi)   | 100%          | 1      | Part of the default configuration; cannot be deleted.
New standards like parallel Network File System (pNFS) increase that level of
parallelization towards the database servers. This parallelization will often lead to
the condition in which packets must be transmitted at the exact same time (which
is obviously not possible on a single interface); this is the definition of an “incast”
problem. The switch needs to be able to buffer these micro bursts so that none of the
packets in the transaction get lost, otherwise the whole database transaction will fail.
As interface speeds increase, large network packet buffers are required.
On the other hand, applications with shorter-lived TCP sessions or transactions, such as high-frequency trading, database transactions, character-oriented applications, and many web applications, don't have such large buffer requirements. For such cases, the Extreme BDX8 and X670 leverage Smart Buffer technology, which provides a
dynamic and adaptive on-chip buffer allocation scheme that is superior to static
per-port allocation schemes and avoids latency incurred by off-chip buffers. Ports
have dedicated buffers and in addition can get extra buffer allocation from a shared
pool as needed, thereby demonstrating an effective management of and tolerance for
microbursts. In contrast, arbitrarily large off-chip buffers can exacerbate congestion
or can increase latency and jitter, which leads to less deterministic Big Data job
performance, especially if chaining jobs. While the Extreme hardware maximizes
burst absorption capability and addresses temporary congestion, it also maintains
fairness. Since Extreme’s Smart Buffer technology is adaptive in the shared buffering
allocations, uncongested ports do not get starved of access to the shared buffer pool
and they are not throttled by congestion on other ports, while still allowing congested
ports to get more of the buffers to address the traffic burst.
4.5.5 OVERSUBSCRIPTION
The acceptable oversubscription in a data center network is highly dependent on the applications in use and is radically different from that in a typical access network. Today's design of presentation/web server, application server, and database server "layers," combined with the new dynamics introduced through virtualization, makes it hard to predict traffic patterns and load between given systems in the data center network. The fact is that servers which use a hypervisor to virtualize applications are utilized more heavily, and the resulting average demand on the interfaces belonging to these systems will be higher than on a typical server.
Also, if virtual desktops are deployed, one has to carefully engineer the
oversubscription and the quality of service architecture at the LAN access as well.
Typically 0.5 to 1 Mbit/s per client must be reserved – without considering future
streaming requirements.
In the case of MLAG and VSB, all the links between switches are active and carry traffic. In the case of the X670, the oversubscription ratio at the leaf switch is 3:1 (480G down and 160G up). In the case of a single 40G link failure between a leaf switch and a spine switch, the oversubscription ratio at that leaf switch changes to 4:1. If traffic utilization between the leaf and spine switches is high, the 4:1 ratio could cause serious congestion and packet drops even in a single link failure scenario. So, if it is necessary to maintain the desired oversubscription ratio in the event of a single link failure, additional interfaces may be required in the design.
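A quick worked check of those ratios, using port counts consistent with the 480G/160G figures above (48 x 10G downlinks and 4 x 40G uplinks):

    downlinks_gbps = 48 * 10            # 480G of server-facing capacity
    uplinks_gbps = [40, 40, 40, 40]     # 4 x 40G toward the spine = 160G

    def oversubscription(down: float, up: float) -> float:
        return down / up

    print(oversubscription(downlinks_gbps, sum(uplinks_gbps)))        # 3.0 -> 3:1 normally
    print(oversubscription(downlinks_gbps, sum(uplinks_gbps[:-1])))   # 4.0 -> 4:1 after one 40G uplink fails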
Essentially, DCB enables the different treatment of traffic based on a set of priorities.
The benefits of a converged (bridged) data center network include:
• Simpler management with only one fabric to deploy, maintain, and upgrade.
• Lower costs because fewer cables, switches and other equipment require less
power to communicate.
Extreme's data center solutions use specifications from the IEEE 802.1 DCB Task Group.
iSCSI can accomplish the same result via TCP. However, DCB has aspects that make an iSCSI environment more reliable and customizable. It can improve performance and make that performance more deliverable. In a traditional IP network, any
lost frames need to be retransmitted. Removal of the potential for loss means
that no retransmissions need to occur; and fewer retransmissions mean a gain in
performance. While retransmissions are rare in well designed, traditional Ethernet
deployments, DCB comes close to removing them completely. The second capability
of DCB that’s important to iSCSI implementations is the allocation of bandwidth on
specific links to specific functions.
4.6 ELASTICITY
A data center model that is truly elastic focuses on agility and modularity with simplified operations. Elasticity means that all the layers of the data center respond rapidly to new resource demands, adding and removing resources based on customer needs, with a focus on the whole network and end-to-end provisioning, not just a single switch. Elasticity is more than just an automation challenge; it's about how synchronously the data center reacts to end-customer business applications.
4.6.1 VM TRACKING
Data traffic from VM to VM traverses the network as tagged traffic to maintain
VLAN and tenant isolation. The VLANs need to be configured in the network
fabric and associated to the appropriate edge ports on the leaf switches and be
matched to the hypervisor configuration. Extreme Networks switches support
multi-user, multi-method authentication on every port, absolutely essential when
you have virtual machines as well as devices such as IP phones, computers,
printers, copiers, security cameras and badge readers connected to the data
center network. These multiple devices (or virtual machines) can connect to
the same port and each device can have an independent policy configuration
associated to it.
The uplink ports can have either static VLAN configuration or they can also have the
VLANs configured dynamically as needed.
4.6.2 AUTO-CONFIGURATION
Extreme Networks provides a flexible and simple switch configuration solution which allows organizations to quickly build networks or replace faulty switches for business continuity. The Extreme Networks Auto Configuration feature is aimed at plug-and-play deployment. The ability to drop-ship Extreme switches to the customer premises helps reduce or eliminate the operational expenditure (OPEX) and costs involved in staging and any initial switch configuration. It also reduces the costs incurred in customizing configurations, with the ability to classify switches according to function, hardware type, or location. Standards-based classification (using DHCP) helps administrators create flexible and easy-to-manage configurations.
• Simple configuration that is easily enabled or disabled, and the ability to drop-ship Extreme switches into customer premises with the feature enabled in advance by channel partners, system integrators, or Value-Added Resellers (VARs).
• Works with existing DHCP and TFTP infrastructure in the network, with minimal
customization.
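The classification idea behind DHCP-based auto-configuration can be sketched as follows (the keys and config file names are hypothetical, not Extreme's actual DHCP option layout):

    # Map attributes a DHCP server can key on (model, role/location) to a config file
    # that the switch then fetches over TFTP.
    CONFIG_BY_CLASS = {
        ("summit-x670", "leaf"): "leaf-baseline.cfg",
        ("bdx8", "spine"): "spine-baseline.cfg",
    }

    def select_config(model: str, role: str) -> str:
        return CONFIG_BY_CLASS.get((model.lower(), role.lower()), "default.cfg")

    print(select_config("Summit-X670", "leaf"))   # -> leaf-baseline.cfg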
4.7 SECURITY
Extreme has a rich identity management (IDM) platform that integrates seamlessly with NAC to manage an identity database and respond to all identity event triggers. IDM works with a variety of software components, such as LLDP, Kerberos, NetLogin, FDB, and IP-Security.
Extreme's IDM platform also serves as a foundation for Virtual Desktop Infrastructure (VDI). With VDI, a user's desktop is hosted in the data center as a virtual machine. This means that traditional network access control, as well as Role Based Access Control (RBAC) that was tied to the user's identity at the campus network edge, now needs to move into the data center server network edge where the user's desktop is hosted. Extreme Networks' Identity Management solution, which transparently detects a user's identity based on the user's Kerberos authentication exchange, can be used in the data center at the server network edge for this purpose.
When the VDI connection broker assigns a VM to a user, the user’s Kerberos
authentication request passes through the network access switch in the data
center. The Extreme Networks Identity Management solution detects the user's identity based on passive Kerberos snooping and provisions the network port with the right privileges for the user's desktop, which is now a virtual machine.
Based on the user’s identity the appropriate role for the user can be configured
and enforced dynamically directly at the VM level on the network access port.
When a virtual desktop VM moves, the VM tracking capabilities can detect the VM
movement and inform the Identity Management solution so that the user’s role
can be enforced at the target server where the virtual desktop VM has moved.
• The number of servers in a rack varies over time, thus varying the number of switch ports that must be provided. Unused CAPEX sitting in the server racks is not efficient.
These caveats may result in an overall higher Total Cost of Ownership (TCO) for a
ToR deployment compared to an EoR deployment. Additionally, cabling, cooling, rack space, power, and services costs must also be carefully evaluated when choosing an architecture. Lastly, a ToR design results in a higher oversubscription ratio towards the
core and potentially a higher degree of congestion. A fabric-wide quality of service
(QoS) deployment (with the emerging adoption of DCB) cannot fully address this
concern today.
Compared to a ToR design, servers can be placed anywhere in the racks, so hot areas due to high server concentration can be avoided. Also, the usage of the EoR equipment is optimized compared to a ToR deployment, with rack space, power consumption, cooling, and CAPEX decreased as well. The number of switches that must be managed is reduced, with the added advantages of a highly available and scalable design. Typically, chassis switches also provide more features and scale in an EoR scenario compared to the smaller platforms typical of ToR designs. On the other hand, cabling can be more complex as the density in the EoR rack increases.
4.9.1 OVERVIEW
The evolving traffic patterns of clusters, servers and storage virtualization solutions
are demanding new redundancy schemes. These schemes provide the transport
technology used for inter-data center connectivity and cover the geographical
distances between data centers. They are critical as the network design evolves to
provide ever higher levels of stability, resiliency and performance.
• Cloud Bursting: create an elastic private cloud infrastructure that allows for
optimized application delivery based on current and varying business demands
• Jitter and delay acceptance for virtualized applications and their storage
One solution to the session data issue is to send all requests in a user session
consistently to the same back end server. This is known as “persistence” or
“stickiness”. A downside to this technique is its lack of automatic failover: if a backend
server goes down, its per-session information becomes inaccessible, and sessions depending upon it are lost, so a seamless failover cannot be guaranteed. In most cases, dedicated hardware load balancers are required.
The discussion about load balancing and persistence has a great impact on separation. The figure below shows a typical situation for cluster node separation across two redundant data centers. In this example, the node separation of different cluster types, with shared-nothing and shared-database designs, is shown.
In many cases, the same subnet is used across both of the data centers, which is then
route summarized. The “cluster” subnet will be advertised as an external route using
“redistribute connected” and by filtering all subnets except the cluster subnet. While
redistributing, the primary data center will be preferred to the remote data center by
lower path cost until such time as the primary data center disappears completely.
However, this might cause problems in the event of failover, when the traffic must be re-routed from the primary data center to the backup data center. This is especially true when traffic traverses stateful firewalls, where one has to make sure that traffic in both directions passes through the same firewall system. Techniques such as VRRP interface tracking or next-hop tracking can make sure that this is covered appropriately.
To provide database access across both data centers at any time, connectivity
between access switches and storage systems must be duplicated. Replication of
databases must be achieved through Layer 2 techniques, such as VPLS, GRE, SPB,
or with 802.1Q and RSTP/MSTP along with 802.3ad Link Aggregation or possibly
through switch clustering/bonding techniques. In all cases one will face huge demand
for bandwidth and performance that can be quite expensive for WAN links and must
be properly sized. But the benefit will be improved data center availability and data
center users will be able to load balance across them.
A more scalable method for Layer 2 data center interconnect is desirable. SPB can
extend the Layer 2 domain between data centers. An alternative is to extend a Layer
2 tunnel between the data center sites to allow Layer 2 traffic to be transported
transparently across the Layer 3 infrastructure, with the added benefit of not extending the size of the spanning tree domain. Extreme switches leverage standard
IP/GRE tunneling or VPLS to interconnect the data centers. In these scenarios, the
multiple data center sites see each other as part of a common Layer 2 domain.
Devices or virtual machines can easily be moved between data centers in a hot or cold manner. The networks can leverage Extreme Networks functionality, including the fabric routing and host route advertisement described below.
After moving to the new location, the VM will be reachable via its new location as
a result of the VM host route advertisement by the local fabric router in the new
data center location. Fabric routing and host routing optimize the flow of traffic into
and between data centers by providing direct access to and from each data center
symmetrically. The traffic optimization limits the amount of traffic that traverses
the interconnect links to traffic that needs to go between data centers providing
the added benefit of conserving bandwidth on potentially expensive data center
interconnect links.
Considering redundant data center designs where the same subnet is used across the primary and backup data centers, a standard routed Layer 3 interconnect may be suitable. This environment is suitable in scenarios where traffic does not need direct Layer 2 connectivity between the respective data centers, such as when a physical move of server connectivity to a new data center is desired.
4.10 MANAGEMENT
Centralized data center management can also discover devices and provide topology information, and should be seamlessly integrated across the whole data center. The NetSight Discovery feature can automatically discover new switches in the data center.
• Dedicated path for management traffic, which, while not bandwidth intensive, is critical in providing key services to the infrastructure.
• SNMP
• NTP
• Syslog
• Authentication
• Network Management
The hypervisor service consoles can have one VLAN and the leaf/spine switches can
have another VLAN, to maintain separation. The IP address range used for these management VLANs should be completely different from that of the data VLANs.
There are a wide variety of infrastructure elements that are needed in the ecosystem
and there are a plethora of vendors that are available. With so many players, it is
crucial to innovate and push technology boundaries synchronously. Extreme has
many technology solution partners that help enrich the ecosystem of third party
infrastructure elements. Please see http://www.extremenetworks.com/partners/tsp/
for more details on Extreme’s Technology Solution Partner program.
Similarly, at the application layer, current models of custom hardware appliances for different services are being replaced in favor of Network Functions Virtualization (NFV), which will change the way firewalls, intrusion detection devices, load balancers, and other virtualized network functions (VNFs) are deployed in the data center. NFV can address the challenges of legacy data centers, and VNFs can be linked through "service chaining," enabling new services to be applied more quickly via the orchestration mechanisms.
5.1.1 OVERVIEW
Virtualization has introduced the ability to create dynamic data centers and with the
added benefit of “green IT.” Server virtualization can provide better reliability and
higher availability in the event of hardware failure. Server virtualization also allows
higher utilization of hardware resources while improving administration by having a
single management interface for all virtual servers.
Different vendors, such as VMware, Citrix, Microsoft, and others, provide centralized management platforms (e.g., vSphere, Xen, and Hyper-V, respectively) that provide administrators with a single interface for all aspects of monitoring, managing, and
maintaining the virtual infrastructure, which can be accessed from multiple devices.
5.2 STORAGE
Storage is a critical piece of a virtualized infrastructure. Data centers are moving towards converged infrastructures that will result in fewer adapters, cables, and nodes, and ultimately in more efficient network operations. This is driving more requirements for the underlying data center network.
Storage requirements vary by server type. Application servers require much less
storage than database servers. There are several storage options – Direct Attached
Storage (DAS), Network Attached Storage (NAS), or Storage Area Network (SAN).
In the past, Fibre Channel (FC) offered better reliability and performance but
needed highly-skilled SAN administrators. Dynamic data centers, leveraging server
virtualization with Fibre Channel attached storage, will require the introduction of
a new standard, Fibre Channel over Ethernet (FCoE). FCoE requires LAN switch upgrades due to the nature of the underlying requirements, as well as the Data Center Bridging Ethernet standards. FCoE is also non-routable, so it may cause issues when it comes to implementing disaster recovery or redundancy across large geographical distances that Layer 2 connectivity cannot yet achieve.
On the other hand, iSCSI, which is a standard Ethernet block-based storage protocol that allows SCSI commands to be encapsulated in TCP/IP, gives servers access to storage devices over common Ethernet IP networks. It provides support for faster speeds and improved reliability, making it more attractive. iSCSI offers increased flexibility and a more cost-effective solution by leveraging existing network components (NICs, switches, etc.). In addition, Fibre Channel switches typically cost 50% more than Ethernet switches. Overall, iSCSI is easier to manage than Fibre Channel, considering most IT personnel's familiarity with the management of IP networks.
The storage solutions will also continue to evolve with Software Defined Storage
(SDS). Traditionally the storage management software resides on the storage
controller, and with SDS the software is decoupled from the hardware and moves off
to a server. This decoupling enables more efficient resource management and offers
more flexibility in hardware selection, perhaps even commodity storage hardware,
and easier data moves between multiple storage vendors. SDS also provides the
ability to integrate storage more smoothly into data center orchestration solutions
and analytics tools that need access to large pools of data.
5.3 FIREWALLS
Data Centers need protection against DoS attacks and other threats that target the
users in the data center, and firewalls mitigate threats by inspecting data traffic and
following user-created rules to take action on the traffic. This firewall is responsible
for protecting the entire data center against any malicious attacks from the Internet
or even within itself. It is also responsible for controlling the traffic flow that originates
from the tenants and heads towards the Internet. It is imperative to couple this type
of firewall with a larger security infrastructure to protect the data center against
intelligent attacks like DDoS.
The aggregation layer is the best-suited enforcement point for firewall security, or
additional services like VPN, Intrusion Prevention System (IPS). All servers can access
these services with short but predictable latency and bandwidth in an equal fashion.
High-performance, intelligent Layer 4-7 application switches provide always-on, highly scalable, and secure delivery of business-critical applications, or can be part of that layer itself.
In a typical data center deployment, the firewall access layer consists of a cluster of
multiple firewalls. These firewalls have the ability to exchange traffic flow state between
them, thereby allowing for an active/active access topology. Such a topology allows for
a redundant access network and also provides the resources required to serve the entire data center with the bandwidth needed for Internet access.
The firewall physical infrastructure can be connected to the Data Center spine via
links on high-speed interfaces off the BDX8 or S-series. The VMs can then use the
spine switch as the default gateway and let the switch handle the routing. For traffic
destined within the Data Center, the spine switch routes the traffic back down to the
right leaf switches. For traffic destined outside the Data Center, the spine switch may
have a default route to transmit the traffic to the firewall for processing.
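The forwarding behavior described above amounts to a longest-prefix match: internal prefixes route back toward the leaf switches, and everything else follows the default route toward the firewall. A small sketch with hypothetical prefixes:

    import ipaddress

    ROUTES = {
        ipaddress.ip_network("10.10.0.0/16"): "leaf-1",    # hypothetical internal prefix
        ipaddress.ip_network("10.20.0.0/16"): "leaf-2",
        ipaddress.ip_network("0.0.0.0/0"): "firewall",     # default route toward the firewall
    }

    def next_hop(destination: str) -> str:
        addr = ipaddress.ip_address(destination)
        best = max((net for net in ROUTES if addr in net), key=lambda net: net.prefixlen)
        return ROUTES[best]

    print(next_hop("10.10.5.9"))   # -> leaf-1 (stays inside the data center)
    print(next_hop("8.8.8.8"))     # -> firewall (leaves the data center)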
Virtual firewall services are also provided by some hypervisors that place firewall
filters on the virtual network adapters that provide stateful inspection of data
traffic and allow or prevent transmission based on user-defined rules. This happens
transparently to the network elements. As with the evolution towards NFV, a lot of the
firewalling is becoming more and more virtualized and distributed because it is less
expensive than doing it in hardware and enables service chaining.
• VLAN membership
• Traffic QoS
• Firewall rules
• IDS/IDP rules
• Traffic mirroring
Together with an ordering of these actions, the generalized policy allows the traffic path through the network to be defined, along with how the traffic is handled from one service to another, increasing the value chain in an automated manner without tedious individual configuration processes for each server.
Some of the concepts expressed here are already present in the Group Policy concept in OneController; others will be developed as enhancements to the Group Policy framework in future releases of OneController. Some examples:
1. A firewall is added to the network and several firewalling rules are created. These rules can be registered as extensions to the Policy framework so that the policy can be added to the end system, for example:
Policy_A:
The firewall registration in the framework must include how the traffic is handled by the controller and how flows must be created to chain the service in such a way that traffic from Endpoint_A is forwarded through the firewall and the firewall rule is applied.
2. A VoIP recording device is added to the network, so a new service definition can include "recorder processing" as part of its definition, indicating that all or some traffic from that device must be mirrored to the recorder.
3. A guest management system is added to the network so it can register with the policy API and add the service "captive portal processing" to the list of services that a user can receive.
©2014 Extreme Networks, Inc. All rights reserved. Extreme Networks and the Extreme Networks logo are trademarks or registered trademarks of Extreme Networks, Inc.
in the United States and/or other countries. All other names are the property of their respective owners. For additional information on Extreme Networks Trademarks
please see http://www.extremenetworks.com/company/legal/trademarks/. Specifications and product availability are subject to change without notice. 8916-1114