Table of Contents

1 Introduction
  1.1 Purpose of this document
  1.2 Scope
2 Solution Architecture Overview
  2.1 Physical Architecture Overview
  2.2 Layouts and Breakpoints
    2.2.1 Local Tier 1 Combined
    2.2.2 Local Tier 1 Base
    2.2.3 Base Expanded
      2.2.3.1 Local Tier 1 Network Architecture
  2.3 Cabling Diagrams
    2.3.1 Local Tier 1 Cabling
3 Hardware Components
  3.1 Network
    3.1.1 Force10 S55 (ToR Switch)
      3.1.1.1 Force10 S55 Stacking
  3.2 Servers
    3.2.1 Local Tier 1 Rack
  3.3 Storage
    3.3.1 EqualLogic Storage
      3.3.1.1 PS4100E
  3.4 End Points
    3.4.1 Dell Wyse End Points
      3.4.1.1 Dell Wyse R10L
      3.4.1.2 Dell Wyse Z90D7
      3.4.1.3 Dell Wyse X90m7
4 Solution Architecture for Microsoft Remote Desktop Services
  4.1 Overview
  4.2 Storage Architecture
    4.2.1 Local Tier 1
    4.2.2 Shared Tier 2
  4.3 Management Server Infrastructure
    4.3.1 Virtual Hard Disk Format
    4.3.2 DNS
    4.3.3 File Services
    4.3.4 User Profile Disks
  4.4 Hyper-V Configuration
    4.4.1 Core Components
    4.4.2 Hyper-V Networking (Local Tier 1)
    4.4.3 Conclusion
Appendix 1: Pooled VDI Performance Analysis Results
  CPU Resource Utilization Performance Analysis Results
  Memory Resource Utilization Performance Analysis Results
  Disk IO Resource Utilization Performance Analysis Results
  Network Resource Utilization Performance Analysis Results
  End-User Experience Performance Analysis Results
Appendix 2: Remote Desktop Session Performance Analysis Results
  CPU Resource Utilization Performance Analysis Results
  Memory Resource Utilization Performance Analysis Results
  Network Resource Utilization Performance Analysis Results
About the Authors
1 Introduction

1.1 Purpose of this document
This document describes the Dell DVS Reference Architecture for Windows Server 2012. It addresses the architecture design, configuration, and implementation considerations for the key components required to deliver RDS on Hyper-V 2012.
1.2 Scope
Relative to delivering the virtual desktop environment, the objectives of this document are to:

- Define the detailed technical design for the solution
- Define the hardware requirements to support the design
- Define the design constraints which are relevant to the design
- Define relevant risks, issues, assumptions and concessions, referencing existing ones where possible
- Provide a breakdown of the design into key elements such that the reader receives an incremental or modular explanation of the design
- Provide solution scaling and component selection guidance
2 Solution Architecture Overview

2.1 Physical Architecture Overview

The core architecture design consists of the Local Tier 1 solution model. Tier 1 in the DVS context defines the disk source from which the VDI sessions execute. Local Tier 1 applies to rack servers only, while Shared Tier 1 can be rack or blade. Tier 2 storage is used for user profile/data and Mgmt VM execution.
2.2 Layouts and Breakpoints

The solution architecture follows the traditional DVS distributed design model, consisting of four primary layers: Network, Compute, Management, and Storage. The Network and Storage layers can optionally be provided by the customer if suitable infrastructure is already in place. The Compute layer contains the hosts that serve the VDI sessions, and the Management layer contains the components required to support the VDI broker infrastructure. The following sections highlight the key layout and scaling elements of the solution.
2.2.1 Local Tier 1 Combined

In the combined architecture, both the RD Virtualization Host (RDVH) and RD Session Host (RDSH) roles coexist within dedicated VMs on a single Compute host. This enables users requiring either shared RDP sessions or pooled VDI to be hosted on the same physical server. The value of this solution is a minimum infrastructure investment with maximum VDI flexibility, easily tailored to shared and pooled user types. Additional information on the hardware components can be found in section 3 below.
2.2.2 Local Tier 1 Base

In the base distributed architecture, the RDVH or RDSH role is assigned to a dedicated Compute host. This architecture supports either a single RDVH or RDSH Compute host, or one of each. This solution provides maximum Compute host user density for each broker model and allows clean linear upward scaling. Additional information on the hardware components can be found in section 3 below.
2.2.3 Base Expanded

The base expanded architecture provides linear upward scale for both the RDVH and RDSH roles, optimized for 500 pooled VDI sessions or 700 shared sessions (see the appendices for test results). This solution supports up to four Compute hosts running either RDVH or RDSH roles to meet the needs of the enterprise. Additional information on the hardware components can be found in section 3 below.
2.2.3.1 Local Tier 1 Network Architecture

In the Local Tier 1 architecture, a single Force10 S55 switch can be shared among all network connections for both Mgmt and Compute, up to the upper limit of 500 pooled VDI sessions. Above 1000 users, DVS recommends separating the network fabrics to isolate iSCSI and LAN traffic, as well as making each stack redundant. Only the Mgmt servers connect to iSCSI storage in this model. All ToR traffic is designed to be layer 2 (switched locally), with all layer 3 (routable) VLANs trunked from a core or distribution switch. The following diagrams illustrate the logical data flow in relation to the core switch.
2.3 Cabling Diagrams

2.3.1 Local Tier 1 Cabling
3 Hardware Components

3.1 Network
The following sections describe the core network components for the DVS local and shared Tier 1 solutions. As general cabling guidance, TwinAx is very cost effective for short 10Gb runs; for longer runs, fiber with SFPs should be used.
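As a rough illustration of this rule of thumb, the sketch below picks a 10Gb cable medium by run length. The 5 m threshold is an assumption based on typical passive TwinAx (SFP+ direct-attach) reach, not a figure from this document; consult vendor cable specifications for actual limits.

```python
# Hedged helper: choose a 10Gb cabling medium by run length.
# The 5 m threshold is an assumed typical passive TwinAx (DAC) reach.

def pick_10gb_media(run_length_m: float) -> str:
    PASSIVE_TWINAX_MAX_M = 5.0  # assumption; check vendor specs
    if run_length_m <= PASSIVE_TWINAX_MAX_M:
        return "TwinAx (SFP+ direct-attach copper)"
    return "Fiber with SFP+ optics"

for run in (2, 5, 30):
    print(f"{run} m run -> {pick_10gb_media(run)}")
```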
3.1.1 Force10 S55 (ToR Switch)
Guidance: 10Gb uplinks to a core or distribution switch are the preferred design choice, using the rear 10Gb uplink modules. If 10Gb to a core or distribution switch is unavailable, the front 4 x 1Gb SFP ports can be used. The front four SFP ports support copper cabling and can be upgraded to optical if a longer run is needed.
For more information on the S55 switch and Dell Force10 networking, please visit: LINK
3.1.1.1 Force10 S55 Stacking

The Top of Rack (ToR) switch in the Network layer can optionally be stacked with a second switch if greater port count or redundancy is desired. Each switch will need a stacking module plugged into its rear bay and connected with an appropriate cable. Switch stacks should be cabled in a ring configuration, with the last switch in the stack cabled back to the first. Uplinks should be configured on all switches in the stack back to the core to provide redundancy and failure protection.
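As a tiny sketch of this ring rule, the helper below checks that a proposed set of stacking links forms one closed ring (last switch cabled back to the first). Switch names are illustrative placeholders.

```python
# Verify that stacking links form a single closed ring, as recommended.

def is_ring(links):
    """True if the (a, b) stacking links form one closed ring."""
    if not links:
        return False
    adj = {}
    for a, b in links:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    # Every switch needs exactly two stack connections,
    # and the link count must equal the switch count.
    if any(len(n) != 2 for n in adj.values()) or len(links) != len(adj):
        return False
    # Walk the ring to confirm it is one connected cycle.
    start = next(iter(adj))
    seen, prev, cur = {start}, None, start
    while True:
        a, b = adj[cur]
        nxt = a if a != prev else b
        if nxt == start:
            return len(seen) == len(adj)
        if nxt in seen:
            return False
        seen.add(nxt)
        prev, cur = cur, nxt

# Three stacked switches cabled 1 -> 2 -> 3 -> back to 1.
print(is_ring([("sw1", "sw2"), ("sw2", "sw3"), ("sw3", "sw1")]))  # True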
Please reference the following Force10 whitepaper for specifics on stacking best practices and configuration: LINK
3.2 Servers

3.2.1 Local Tier 1 Rack
The server platform for the Server 2012 RDS solution is the best-in-class Dell PowerEdge R720. This dual-socket platform runs the fastest Intel Xeon E5-2600 family of processors, can host up to 768GB RAM, and supports up to 16 x 2.5" SAS disks, delivering uncompromising performance and scalability in a 2U form factor. In the Local Tier 1 model, VDI sessions execute on local storage on each Compute server. Due to the local disk requirement in the Compute layer, this model supports rack servers only. In this model, only the Mgmt server hosts access shared storage, to support the solution's Mgmt role VMs. Because of this, the Compute and Mgmt servers are configured with different add-on NICs to support their pertinent network fabric connection requirements. Refer to section 2.3.1 for cabling implications. The Mgmt server host has reduced RAM and CPU and fewer disks, since its VMs execute on shared Tier 2 storage.
Local Tier 1 Compute Host: PowerEdge R720

- 2 x Intel Xeon E5-2690 Processor (2.9GHz)
- 192GB Memory (12 x 16GB DIMMs @ 1600MHz)
- Microsoft Windows Server 2012 Hyper-V
- 2 x 146GB SAS 6Gbps 15k Disks (OS)
- 10 x 146GB SAS 6Gbps 15k Disks (VDI)
- PERC H710 Integrated RAID Controller (RAID 10)
- Broadcom 5720 1Gb QP NDC (LAN)
- Broadcom 5720 1Gb DP NIC (LAN)
- iDRAC7 Enterprise w/ vFlash, 8GB SD
- 2 x 750W PSUs
Local Tier 1 Mgmt Host: PowerEdge R720

- 2 x Intel Xeon E5-2680 Processor (2.7GHz)
- 32GB Memory (4 x 8GB DIMMs @ 1600MHz)
- Microsoft Windows Server 2012 Hyper-V
- 2 x 146GB SAS 6Gbps 15k Disks
- PERC H710 Integrated RAID Controller (RAID 1)
- Broadcom 5720 1Gb QP NDC (LAN/iSCSI)
- Broadcom 5719 1Gb QP NIC (LAN/iSCSI)
- iDRAC7 Enterprise w/ vFlash, 8GB SD
- 2 x 750W PSUs
For more information on the Dell PowerEdge R720 server and other server offerings from Dell, please visit: LINK
3.3 Storage

3.3.1 EqualLogic Storage

3.3.1.1 PS4100E
Model: EqualLogic PS4100E

- Features: 12 drive bays (NL-SAS, 7,200 RPM), dual HA controllers, Snaps/Clones, Async replication, SAN HQ, 1Gb
- Options: 12TB (12 x 1TB HDs), 24TB (12 x 2TB HDs), 36TB (12 x 3TB HDs)
- Uses: Tier 2 array for 1000 users or less in the Local Tier 1 solution model (1Gb)
[Figure: EqualLogic PS4100E array: 12 x NL-SAS drives, 1Gb Ethernet ports, Mgmt ports]
For more information on the Dell Equallogic PS4100E and other networked storage options from Dell, please visit: LINK
3.4 End Points

3.4.1 Dell Wyse End Points

3.4.1.1 Dell Wyse R10L
The Dell Wyse R10L thin client combines outstanding power and security with strong display performance and manageability, to help maximize your investment in virtual desktop infrastructure. With built-in Dell Wyse ThinOS, the R10L offers exceptional speed and security, while high-performance central and graphics processors deliver an outstanding user experience. It supports scalable enterprise-wide management, including simple deployment, patching, and updates, and future-proof connectivity to accommodate a wide variety of peripherals and interfaces. Drawing only 12 to 15 watts in typical usage, the R10L can reduce carbon emissions for an environmentally conscious, comfortable working environment.
3.4.1.2 Dell Wyse Z90D7

The Dell Wyse Z90D7 is a super high-performance Windows Embedded Standard 7 thin client for virtual desktop environments. Featuring a dual-core AMD processor and a revolutionary unified engine designed to eliminate performance constraints, the Z90D7 offers incredible speed and power for the most demanding embedded Windows applications, rich graphics, and HD video. With Microsoft Windows Embedded Device Manager connectivity, you can leverage your existing Microsoft System Center Configuration Manager platform. The Z90D7 is an ideal thin client for the most demanding mix of virtual desktop or cloud applications.
3.4.1.3 Dell Wyse X90m7

The Dell Wyse X90m7 mobile thin client, powered by Windows Embedded Standard 7, offers great performance and a crisp 14" LED-backlit display, along with a dual-core high-performance AMD processor with Radeon HD 6310 graphics for exceptional HD multimedia capabilities. A built-in webcam, integrated wireless a/b/g/n, support for 3G/4G cards, and an optional smart card reader offer rich interactivity and flexibility. With no local hard drive to weigh it down, the X90m7 can eliminate the risk of losing or exposing sensitive data. And with Microsoft Windows Embedded Device Manager connectivity, you can manage it with your existing Microsoft System Center Configuration Manager platform. Seamlessly and securely connect to your IT infrastructure and access some of the most advanced web browser and media player capabilities available in a mobile thin client.
4 Solution Architecture for Microsoft Remote Desktop Services

4.1 Overview
This solution architecture follows a distributed model in which solution components exist in layers. The Compute layer is where VDI desktop VMs execute, while the Mgmt layer is dedicated to the broker management server role VMs. Both layers, while inextricably linked, scale independently. Server 2012 RDS provides a number of VDI options to meet your needs, all within a single, simple, wizard-driven environment that is easy to set up and manage.
4.2 Storage Architecture

4.2.1 Local Tier 1
In this model, Tier 1 storage exists as local hard disks on the Compute hosts. To achieve the required performance level, RAID 10 must be used across all local disks used for VDI. A single volume per Local Tier 1 Compute host is sufficient to host the provisioned desktop VMs along with their respective write caches. Two of the disks included in each host should be mirrored and used to install the OS.
[Figure: Local Tier 1 disk layout: RAID 1 (OS) and RAID 10 (VDI)]
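As a quick worked example of what this layout yields, using the Compute host spec from section 3.2.1 (10 x 146GB disks dedicated to VDI), usable RAID 10 capacity is half the raw total:

```python
# RAID 10 mirrors pairs of disks, so usable capacity is half of raw.

def raid10_usable_gb(disk_count: int, disk_size_gb: float) -> float:
    if disk_count % 2:
        raise ValueError("RAID 10 requires an even number of disks")
    return disk_count * disk_size_gb / 2

# Compute host spec: 10 x 146GB 15k SAS disks for the VDI volume.
print(raid10_usable_gb(10, 146))  # 730.0 GB usable per Compute host
```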
4.2.2 Shared Tier 2
Tier 2 is shared iSCSI storage, used to host the Mgmt server VMs and user data. The EqualLogic 4100 series arrays are used for smaller-scale deployments of up to 500 pooled VDI users (Local Tier 1 only). The table below outlines the minimum volume requirements for Tier 2 storage. Larger disk sizes can be chosen to meet the capacity needs of the customer. The user data can be presented either via a VHDX or a native NTFS pass-through disk, to simplify a future upgrade to NAS. All VM disks should be presented as VHDX.
Volumes          Size (GB)   RAID   Purpose
Management       500         50     RDS VMs, File Server
User Data        2048        50     File Server
User Profiles    20          50     User profiles
Templates/ISO    200         50     ISO storage (optional)
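A little illustrative arithmetic on the table above, assuming the 500-user base deployment these volumes target:

```python
# Per-user view of the minimum Tier 2 volumes, assuming 500 users.
TIER2_VOLUMES_GB = {
    "Management": 500,
    "User Data": 2048,
    "User Profiles": 20,
    "Templates/ISO": 200,
}
USERS = 500

print(f"Total Tier 2 minimum: {sum(TIER2_VOLUMES_GB.values())} GB")
print(f"User data per user:   {TIER2_VOLUMES_GB['User Data'] / USERS:.1f} GB")
```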
4.3 Management Server Infrastructure

The Mgmt role requirements for the base solution are summarized below. Data disks should be used for role-specific application files/data, logs, IIS web files, etc., and should exist in the Mgmt volume. Tier 2 volumes with a special purpose (called out above) should be presented in the format specified below:
vCPU        1    1    1    3
RAM (GB)    4    4    4    16
4.3.1 Virtual Hard Disk Format

The VHDX disk format provides numerous advantages over the older VHD specification and should be used for all virtual disks in the solution. Larger vDisk support (up to 64TB), corruption protection during power failures, and larger sector disk alignment are a few of the new features.
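As a minimal sketch of putting this into practice, the snippet below drives the Hyper-V PowerShell module's New-VHD cmdlet from Python to create a dynamically expanding VHDX. The path and size are placeholder values, and this assumes a Server 2012 Hyper-V host with the module installed.

```python
# Hypothetical: create a dynamically expanding VHDX via New-VHD.
# Path and size are placeholders; requires the Hyper-V PS module.
import subprocess

def new_dynamic_vhdx(path: str, size_bytes: int) -> None:
    subprocess.run(
        ["powershell.exe", "-Command",
         f"New-VHD -Path '{path}' -SizeBytes {size_bytes} -Dynamic"],
        check=True,
    )

new_dynamic_vhdx(r"D:\VMs\desktop01.vhdx", 40 * 1024**3)  # 40GB disk
```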
4.3.2 DNS

DNS plays a crucial role in the environment, not only as the basis for Active Directory but also as the mechanism used to control access to the various Microsoft software components. All hosts, VMs, and consumable software components need to have a presence in DNS, preferably via a dynamic and AD-integrated namespace. Microsoft best practices and organizational requirements should be adhered to. To plan for eventual scaling, access to components that may live on one or more servers should be considered during initial deployment. CNAMEs and the round robin DNS mechanism should be employed to provide a front-end mask for the back-end server actually hosting the service or data source.
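For illustration, a client can observe round robin behavior by resolving the front-end name repeatedly. The sketch below uses only the Python standard library; the name shown is a hypothetical CNAME, not one defined by this document.

```python
# Resolve a (hypothetical) front-end name and list every A record the
# resolver returns; with round robin DNS the order rotates per query.
import socket

def resolve_all(name: str) -> list:
    try:
        _, _, addresses = socket.gethostbyname_ex(name)
        return addresses
    except socket.gaierror:
        return []  # name not resolvable from this client

print(resolve_all("fileservices.example.local"))
```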
4.3.3 File Services
The File Services role will be provided via a dedicated VM. In the interest of portability and providing a clean path to an optional HA upgrade, the volumes can be presented to the file server VM in the form of a Pass-Through Disk. This will ensure a cleaner transition for customers who upgrade to HA and add a NAS head to their environments by keeping the data on the storage array and not inside a VHDX that will need to be copied out.
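A hedged sketch of presenting a pass-through disk to the file server VM, using the Hyper-V module's Add-VMHardDiskDrive cmdlet driven from Python. The VM name and physical disk number are placeholders, and the physical disk must be offline on the host first.

```python
# Hypothetical: attach host physical disk 2 to the file server VM as a
# pass-through disk. The disk must be offline on the Hyper-V host.
import subprocess

subprocess.run(
    ["powershell.exe", "-Command",
     "Add-VMHardDiskDrive -VMName 'FileServer01' "
     "-ControllerType SCSI -DiskNumber 2"],
    check=True,
)
```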
4.3.4 User Profile Disks

User Profile Disks are a component of the Server 2012 RDS solution used to manage user profiles. Profile disks provide a cohesive method to manage user documents and profile data in a VDI environment. They can be enabled in either session or virtual desktop collections and provide options to customize based on the needs of the implementation. The solution file server will be used to host user profile disks and home drive data, which can be separated using an optional user data volume if desired. Otherwise, all user data should be stored within a profile disk.
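As a sketch, enabling User Profile Disks on a session collection can be done with the RemoteDesktop module's Set-RDSessionCollectionConfiguration cmdlet. The collection name, share path, and size cap below are placeholders for this environment.

```python
# Hypothetical: enable User Profile Disks on a session collection.
# Collection name, share path, and the 20GB cap are placeholders.
import subprocess

subprocess.run(
    ["powershell.exe", "-Command",
     "Set-RDSessionCollectionConfiguration "
     "-CollectionName 'PooledSessions' "
     "-EnableUserProfileDisk "
     r"-DiskPath '\\fileserver\UserProfileDisks' "
     "-MaxUserProfileDiskSizeGB 20"],
    check=True,
)
```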
4.4 Hyper-V Configuration
The Local Tier 1 solution is built upon the Server 2012 Hyper-V hypervisor. All Microsoft best practices and prerequisites should be adhered to (NTP, DNS, Active Directory, etc.).
Solution Architecture Components

Hypervisor    Microsoft Windows Server 2012 Hyper-V
VDI Broker    Remote Desktop Services 2012
Server OS     Microsoft Windows Server 2012 (Std edition for Mgmt hosts, DC edition for Compute hosts)
Desktop OS    Microsoft Windows 7 Enterprise (x86)
Virtual Machine Configuration

Hard Drive                  VHDX, Dynamically Expanding, attached to SCSI Controller
Swap File                   Store with VM
vRAM Reservations (Mgmt)    50% of configured vRAM
vRAM Reservations (VDI)     500MB Basic, 1GB Std, 1.5GB Premium
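The reservation rules in the table reduce to simple arithmetic; a worked example:

```python
# Worked example of the vRAM reservation rules in the table above.

def mgmt_reservation_mb(configured_vram_mb: int) -> int:
    return configured_vram_mb // 2  # Mgmt VMs reserve 50% of vRAM

VDI_RESERVATION_MB = {"Basic": 500, "Std": 1024, "Premium": 1536}

print(mgmt_reservation_mb(4096))      # a 4GB Mgmt VM reserves 2048MB
print(VDI_RESERVATION_MB["Premium"])  # Premium desktops reserve 1536MB
```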
4.4.1 Core Components
Each Compute and Mgmt host will run the full GUI version of Server 2012 in this solution. All RDS component roles will exist as VMs yielding a 100% virtualized architecture in both server layers. The required RD Session Host or Virtualization Host roles will be enabled in dedicated VMs on the Compute hosts, while the RDS infrastructure components will be enabled in dedicated VMs on the Mgmt hosts.
Gold image VMs will need to be generated for each base operating system required within the solution and appropriately prepared for pooled RDS collections (Sysprep, etc.). As a best practice, a template should only be used in one collection; if creating multiple pools, you will need multiple templates as well.
4.4.2 Hyper-V Networking (Local Tier 1)
The network configuration in this model varies slightly between the Compute and Mgmt hosts. The Compute hosts do not need access to iSCSI storage, since they host the VDI sessions on local disk. The following outlines the VLAN requirements for the Compute and Mgmt hosts in this solution model (a sketch of this plan as data follows the list below):

Compute hosts (Local Tier 1):
- Mgmt VLAN: Configured for Hyper-V infrastructure traffic; L3 routed via core switch
- VDI VLAN: Configured for VDI session traffic; L3 routed via core switch

Mgmt hosts (Local Tier 1):
- Mgmt VLAN: Configured for Hyper-V mgmt traffic; L3 routed via core switch
- iSCSI VLAN: Configured for iSCSI traffic; L2 switched only, via ToR switch
- VDI Mgmt VLAN: Configured for VDI infrastructure traffic; L3 routed via core switch

An optional DRAC VLAN can be configured for all hardware management traffic, which should be L3 routed via the core switch.
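The sketch below captures the plan above as data and checks the two invariants it implies: Compute hosts carry no iSCSI VLAN, and iSCSI stays layer 2 only. The structure is illustrative; no VLAN IDs are taken from this document.

```python
# VLAN plan from the list above as data, with two consistency checks.
# Host roles and routing tiers mirror the text; names are illustrative.

VLAN_PLAN = {
    "compute": {"Mgmt": "L3", "VDI": "L3"},
    "mgmt":    {"Mgmt": "L3", "iSCSI": "L2", "VDI Mgmt": "L3"},
}

def validate(plan: dict) -> None:
    # Compute hosts run VDI from local disk, so no iSCSI VLAN there.
    assert "iSCSI" not in plan["compute"], "Compute must not carry iSCSI"
    # iSCSI traffic is switched at the ToR only, never routed.
    assert plan["mgmt"].get("iSCSI") == "L2", "iSCSI must stay L2 only"

validate(VLAN_PLAN)
print("VLAN plan is consistent with the Local Tier 1 model")
```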
In this solution architecture, LAN and iSCSI traffic will be segmented in dedicated VLANs but combined within a single switch, to minimize the initial network investment. Following best practices, and in solutions that may require larger scale, this traffic should be separated into discrete switches. Each Local Tier 1 Compute host will have a quad port NDC as well as an add-on 1Gb dual port PCIe NIC. The LAN traffic from the server to the ToR switch should be configured as a LAG to maximize bandwidth. The Compute hosts require two vSwitches: one for VDI LAN traffic, and another for the Hyper-V Mgmt traffic.
The Mgmt hosts have a slightly different configuration, since they additionally access iSCSI storage. The add-on NIC for the Mgmt hosts is a 1Gb quad port NIC. Three ports on each of the NDC and add-on NIC are used to form three virtual switches. iSCSI should be isolated onto its own vSwitch with teamed NICs, and connections from all three vSwitches should pass through both the NDC and add-on NIC, per the diagram below. Care should be taken to ensure that all vSwitches are assigned redundant NICs that are NOT from the same PCIe device. The LAN traffic from the server to the ToR switch should be configured as a LAG.
[Figure: Mgmt host NIC/vSwitch layout, 1Gb QP NDC plus 1Gb QP add-on NIC]
NIC teaming should be configured in the Hyper-V host using Dell drivers to ensure that the proper NICs from differing PCIe devices are bonded. The resulting teamed virtual NIC should then be assigned to the appropriate virtual switch within Hyper-V. A VLAN ID should be specified in the virtual switch used for management on the Hyper-V host. All NICs and switch ports should be set to auto negotiate.
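A small sketch of the rule that teamed NICs must span PCIe devices. The vSwitch and port names are illustrative, with each team drawing one port from the NDC and one from the add-on NIC as described above.

```python
# Check that no vSwitch team draws all members from one PCIe device.
# Device and port names are illustrative placeholders.

TEAMS = {
    "vSwitch-Mgmt":  [("NDC", "port1"), ("AddOnNIC", "port1")],
    "vSwitch-LAN":   [("NDC", "port2"), ("AddOnNIC", "port2")],
    "vSwitch-iSCSI": [("NDC", "port3"), ("AddOnNIC", "port3")],
}

for vswitch, members in TEAMS.items():
    devices = {device for device, _ in members}
    # One PCIe device per team would be a single point of failure.
    assert len(devices) > 1, f"{vswitch}: all members on one device"
print("All teams span multiple PCIe devices")
```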
4.4.3 Conclusion
The Dell Windows Server 2012 RDS solution provides a robust and scalable VDI platform for pooled VDI and Session Host deployments. Using VDI-optimized hardware in a configuration that has been tested and proven by Dell DVS Engineering, you can deploy Microsoft-based VDI that is both cost effective and high performing. Our layered architecture provides the flexibility to maximize your infrastructure investment, with the capability to expand and contract where necessary.
Appendix 1: Pooled VDI Performance Analysis Results

[Performance charts: CPU, memory, disk IO, and network resource utilization, and end-user experience results for pooled VDI]

Appendix 2: Remote Desktop Session Performance Analysis Results

[Performance charts: CPU, memory, and network resource utilization results for Remote Desktop sessions]
This Reference Architecture is for informational purposes only, and may contain typographical errors and technical inaccuracies. The content is provided as is, without express or implied warranties of any kind. Dell, the Dell logo, PowerEdge, Force10 and EqualLogic are trademarks of Dell Inc. Microsoft, Windows, Windows Server 2012, and Hyper-V are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Intel is a registered trademark and Core is a trademark of Intel Corporation in the U.S. and other countries.