www.citrix.com
Contents

1.0 Introduction
1.1 Citrix Service Provider Reference Architecture
4.0 Conclusion
1.0 Introduction
XenApp is an on-demand application delivery solution that enables any Windows application to be virtualized, centralized, and managed in the datacenter and instantly delivered as a service to users anywhere, on any device. This means subscribers can use whatever device they choose, whether laptop, tablet, or smartphone, and still access the familiar Windows desktops and business applications that the service provider manages. XenApp enables service providers to centrally manage a single instance of each application and deliver it to users for online and offline use, providing a high-definition experience. It delivers 99.999 percent application availability and is proven with 25 million applications in production and over 100 million users worldwide.

Citrix XenApp 6 introduces new enhancements for advanced management and scalability, a rich multimedia experience over any network, and self-service applications with universal device support from PC to Mac to smartphone. With full support for Windows Server 2008 R2 and seamless integration with Microsoft App-V, XenApp 6 provides session and application virtualization technologies that make it easy for service providers to centrally manage applications using any combination of local and hosted delivery to best fit their unique requirements.

This whitepaper examines the architecture and design of the Citrix XenApp solution and its ability to provide a scalable, highly available infrastructure while delivering on-demand access to applications (SaaS) and desktops (DaaS) from the cloud.
1.1 Citrix Service Provider Reference Architecture

1.2 Multi-tenant SaaS/DaaS
The multi-tenant SaaS/DaaS module comprises four sub-components:
1. Windows applications and desktops
2. Web-based SaaS
3. Back-office SaaS
4. Third-party SaaS

This module is the core component of the service provider datacenter(s): it enables multi-tenant delivery of virtual applications (SaaS) and desktops (DaaS). Within this module, applications and desktops are virtualized, and subscriber partitions and Active Directory boundaries are defined, all centrally governed by XenApp systems. Windows desktops and applications are powered by XenApp, enabling service providers to deliver any application, whether Windows, Web, SaaS, back office, or third-party, to their subscribers.
2.3 XenApp configuration
The primary components of the XenApp farm and its architectural design are reviewed below, with details about each component's role in the architected solution.
How much storage is needed for the data store database?

To estimate the database storage requirements, it is necessary to know how much disk space common XenApp objects consume. This helps with estimating the initial size of the database, which in turn reduces the frequency of database file size increases as new objects are created. This estimate is important because minimizing database file size increases stabilizes the performance of the data store.

XenApp object                                                                Size (KB)
Initial farm creation
Add one server to the farm
Application publication (10 hosting servers, 500 users, default icon)
Application publication (10 hosting servers, 5 user groups, default icon)    24
Application publication (32-bit color icon)                                  48
Application publication (256-bit color icon)                                 408
Create one worker group                                                      2
Create one load evaluator                                                    8
Apply the load evaluator to 10 servers                                       16

Table 1 - XenApp common object sizes
Given CSP Corporation's goal, an environment consisting of 1,000 hosted or streamed applications with high-resolution (256-bit color) icons, 100 worker groups to serve as tenant silos, and 10 load evaluators will require an IMA data store of approximately 500MB.
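As a sanity check, the arithmetic behind this estimate can be sketched in Python. The per-object sizes are taken from Table 1; mapping the 408 KB figure to a published application with a 256-bit color icon is an assumption based on that table.

```python
# Rough IMA data store sizing for the CSP Corporation scenario.
KB_PER_APP_256BIT_ICON = 408   # published app with a 256-bit color icon (assumed mapping)
KB_PER_WORKER_GROUP = 2
KB_PER_LOAD_EVALUATOR = 8

apps, worker_groups, load_evaluators = 1000, 100, 10

total_kb = (apps * KB_PER_APP_256BIT_ICON
            + worker_groups * KB_PER_WORKER_GROUP
            + load_evaluators * KB_PER_LOAD_EVALUATOR)
print(f"Estimated data store size: {total_kb / 1024:.0f} MB")  # ~399 MB
```

The raw object total comes to roughly 400MB; the 500MB figure in the text leaves headroom for server records, farm metadata, and growth.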
What type of database should be used to host the data store?

To accommodate a farm of this scale, an enterprise-capable database server should be selected. The database selection should be based on the expertise of the farm administrators. Based on the environment described above and on current in-house
expertise, CSP Corporation will deploy the XenApp environment utilizing Microsoft SQL Server 2008. However, if the company had Oracle database expertise, Oracle could have been used for the deployment.
What type of hardware is needed to support 1,000 servers?

For 1,000 servers in a single XenApp farm, it is recommended that a database server with an Intel Xeon class or better quad-core processor be dedicated to hosting the data store. The processing power of the database server determines the speed of administrative activities such as:
o starting the IMA Service
o enumerating servers via the Delivery Services Console
o adding a published application
The database server CSP Corporation selected was an Intel quad-core 1.6GHz processor with 4GB of RAM.
How many zones are needed to support 1,000 servers?

Proper XenApp zone design is one of the most important steps in building a stable, high-performing farm. For CSP Corporation's purposes, a site can be associated with a separate geographical region. Figure 4 outlines key decision points when architecting the zone design based on the topology of the infrastructure.
Consider these zone design guidelines:
o Minimize the number of zones in your farm. The fewer zones in a farm, the more scalable the farm. Every time a dynamic event occurs, such as a logon, logoff, or disconnect, an update is sent to the data collector. The data collector must then forward the update to all other data collectors in the farm, which consumes bandwidth and CPU. Data collectors must keep up with the events in other zones as well as their own.
o Create zones for major datacenters in different geographic regions. If a site has a small number of servers, group that site into a larger site's zone.
o If your organization has small sites with low bandwidth or unreliable connectivity, do not place those sites in their own zone. Instead, group them with other sites that have better connectivity. When combined with other zones, this might form a hub-and-spoke zone configuration.
o If you have more than five sites, group the smaller sites with the larger zones. Citrix recommends a maximum of five zones.
In the case of CSP Corporation, a single zone of 1,000 servers is optimal because the environment consists of a single site hosting all of the XenApp servers. This design remains true regardless of how tenants are isolated at the network level. TCP port 2512 must be open to allow IMA communications to and from the member servers and the data collector. As shown in Figure 4, zones should be based on physical topology rather than on network subnets or VLANs.
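The guidelines above can be expressed as a small decision helper. This is only a toy sketch, not a Citrix tool: the site dictionary keys and the `SMALL_SITE` threshold are invented for illustration, and real zone planning should follow the flow chart in the figure.

```python
def assign_zones(sites):
    """Toy encoding of the zone design guidelines above.

    Each site is a dict with hypothetical keys: name, servers,
    reliable_wan. Large, well-connected sites get their own zone
    (up to the recommended maximum of five); small or poorly
    connected sites are folded into the largest site's zone,
    forming a hub-and-spoke configuration.
    """
    MAX_ZONES = 5      # Citrix-recommended ceiling
    SMALL_SITE = 50    # assumed threshold; tune for your farm

    big = sorted([s for s in sites if s["servers"] >= SMALL_SITE and s["reliable_wan"]],
                 key=lambda s: s["servers"], reverse=True)
    small = [s for s in sites if s not in big]

    zones = {s["name"]: [s["name"]] for s in big[:MAX_ZONES]}
    hub = big[0]["name"] if big else sites[0]["name"]
    for s in small + big[MAX_ZONES:]:
        zones.setdefault(hub, []).append(s["name"])  # hub-and-spoke grouping
    return zones

# CSP Corporation: one large site, plus a hypothetical small branch.
zones = assign_zones([{"name": "DC1", "servers": 1000, "reliable_wan": True},
                      {"name": "Branch", "servers": 10, "reliable_wan": False}])
print(zones)  # a single zone: {'DC1': ['DC1', 'Branch']}
```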
In the case where the XenApp servers are located in different sites, the flow chart shown in Figure 3 provides guidance on the optimal zone design for that environment.

Is a backup data collector needed for this farm?

To satisfy the business requirements of CSP Corporation, a dedicated backup data collector has been installed in each zone. In the event the primary data collector goes offline, this dedicated server is available to assume the data collector role. If the data collector role is assumed by a server that is not dedicated to the task, resource contention between application users and data collector operations can result in data collector events being queued.

What type of hardware is recommended for the data collector?

The data collector stores all dynamic information in memory; therefore, the data collector should have enough RAM to store all of the records. Memory usage varies based on the number of published applications, the number of servers, and the number of user sessions in the farm. The CPU plays an important role in determining the number of resolutions the data collector can process in conjunction with managing dynamic information. The data collector for CSP Corporation was an Intel Xeon 2.83 GHz quad-core processor with 4GB of memory. Figure 6 shows the published application memory usage on the data collector. The average published application consumes about 39KB of memory.
Figure 7 shows the memory usage based on the number of connected sessions. The average session consumes about 1.52KB of memory on the data collector.
Figure 8 references the memory usage based on varying numbers of servers. Each server that joins a farm consumes about 329KB of memory.
For CSP Corporation, the administrators expect to host 2,000 published applications in the 1,000-server farm. They also expect that the maximum number of user sessions during peak hours of operation will be approximately 100,000. Using the data from the charts above, the data collector will consume about 560MB of memory during peak time.
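The 560MB figure follows directly from the per-object averages in Figures 6 through 8, as a quick sketch shows (using 1 MB = 1,000 KB, which matches the rounding in the text):

```python
# Peak data collector memory from the per-object averages above.
KB_PER_PUBLISHED_APP = 39    # Figure 6
KB_PER_SESSION = 1.52        # Figure 7
KB_PER_SERVER = 329          # Figure 8

apps, sessions, servers = 2000, 100_000, 1000

total_kb = (apps * KB_PER_PUBLISHED_APP
            + sessions * KB_PER_SESSION
            + servers * KB_PER_SERVER)
print(f"Peak data collector memory: {total_kb / 1000:.0f} MB")  # ~559 MB
```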
Note: In a multi-zone design, all data collectors in the farm should be sized to accommodate the largest zone. Data collectors manage the global state of the farm, so all servers acting in this role should have the same processing capability, regardless of the size of their particular zone. Likewise, if the data collector for one zone is dedicated, the data collectors for the other zones in the farm should be dedicated as well.
To support the burst logon requirement for the 100,000 users, CSP Corporation configured the XML Service role on the data collector, backup data collector, and two additional member servers. In addition, the Web Interface site was configured to load balance requests across all servers providing the XML Service.
How many license servers are necessary?

A single license server can adequately handle the load placed on it by a thousand XenApp servers (in a single farm or multiple farms) and tens of thousands of users. Multiple license servers can also be deployed for a single farm; however, the drawback is that licenses are not shared between the servers.

Note: In the case where a farm spans multiple sites, the license server should be placed at the site that hosts the most users.
What type of hardware is recommended for the license server?

One of the most important considerations in determining license server requirements is processor speed. Although CPU usage is not usually high, CPU time increases as license check-out requests are made and License Management Console activity increases. The time it takes to execute these transactions depends on the speed of the CPU. In general, the size of the farm and the number of simultaneous client connections dictate the power of the server needed for the licensing feature. To appropriately size the license server, determine the number of client logins per second in the farm deployment. To do this, you can use the Performance Monitor counters available within XenApp and the load evaluator logging feature. This analysis determines the processor speed needed for optimal license server performance. Additionally, the license server process is single threaded, so multiple processors do not increase performance. The license server uses approximately 4.5KB of memory
for every session license and 39KB of memory for every start-up license that is in use. The license server is capable of processing 248 license check-out requests per second. In a scenario where all users log in over the course of 30 minutes, a single license server would be able to handle 446,400 users. The license server configured for CSP Corporation is a standalone Intel Xeon 2.83 GHz quad-core processor with 4GB of RAM.
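The capacity figure above can be reproduced directly, and the same per-license memory costs can be applied to the CSP Corporation scenario. Pairing the memory costs with the 100,000-session and 1,000-server figures is an assumption for illustration.

```python
# License server capacity math from the figures above.
CHECKOUTS_PER_SECOND = 248
LOGON_WINDOW_MINUTES = 30

max_users = CHECKOUTS_PER_SECOND * LOGON_WINDOW_MINUTES * 60
print(max_users)  # 446400, matching the text

# Memory footprint at an assumed peak of 100,000 sessions
# on 1,000 servers (one start-up license per server).
session_license_mb = 100_000 * 4.5 / 1024   # 4.5 KB per session license
startup_license_mb = 1_000 * 39 / 1024      # 39 KB per start-up license
print(f"{session_license_mb:.0f} MB + {startup_license_mb:.0f} MB of license memory")
```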
How much bandwidth is used during license consumption?

When deploying a license server, it is important to understand the communication paths and bandwidth costs associated with licensing, especially when communication occurs over a WAN. When a XenApp server is brought online, it establishes a static connection to the license server and checks out a Citrix start-up license. This action consumes 1.87KB of bandwidth and occurs for every server in the farm. Once a start-up license is checked out, the server holds it until the server is taken offline or the license server location is changed. When a user logs in, the XenApp server requests a license from the license server on behalf of the client device. The amount of bandwidth consumed for a license check-out or check-in request is 0.745KB. Every 5 minutes, each XenApp server checks that the license server remains available; this transaction consumes 416 bytes per server. The timing of this verification is based on the start time of the IMA Service.
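Using these per-transaction costs, the licensing traffic for the CSP Corporation farm can be roughed out. Spreading the 100,000 peak logons over one hour, and counting one check-out plus one check-in per session, are both assumptions for illustration.

```python
servers, hourly_logons = 1000, 100_000   # assumed logon burst spread over one hour

startup_kb = servers * 1.87                  # one-time start-up license check-outs
checkout_kb = hourly_logons * 0.745 * 2      # assume one check-out plus one check-in
heartbeat_kb_per_hour = servers * (416 / 1024) * (60 / 5)   # 5-minute availability checks

print(f"{startup_kb:.0f} KB one-time, {checkout_kb:.0f} KB/hour for licenses, "
      f"{heartbeat_kb_per_hour:.0f} KB/hour of heartbeats")
```

Even at this scale, licensing traffic is modest: a few megabytes per hour across the whole farm.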
Is the license server a single point of failure?

If a XenApp server cannot contact the license server, the XenApp server enters a grace period in which user connections are allowed for 720 hours. If the XenApp server is unable to contact the license server during the grace period, connections are denied when the grace period expires. For CSP Corporation, the 720-hour grace period provides more than enough time for recovery in the event of a failure. However, for some environments, the standard grace period might not be adequate. In such cases, a cold standby of the license server can be built into the farm: if the license server goes offline, the administrator can bring up a backup license server. In cases where failover with no administrative interaction is required, Microsoft Clustering Services can be used to deliver hardware-based fault tolerance for the license server. For more information on setting up the license server in a clustered environment, refer to the Citrix Licensing section of Citrix eDocs.
How should the farm be managed from remote locations?

The Delivery Services Console can be run from a XenApp server in the farm or from a standalone computer running Windows 2003, Windows 2008, Windows 2008 R2, Windows 7, or Windows Vista. To administer the farm from a remote location, the console can be accessed as a published application. By connecting to the console through an ICA session or through RDP, static and dynamic information is queried by a console running local to the farm, dramatically increasing the performance of the console. This is particularly useful in larger server farms.
2.4 Web Interface
In general, the number of users that a single Web Interface server can support is dependent on the processor speed rather than the number of processors in the system.
2.5 Worker groups
A worker group is a collection of XenApp servers in the same farm with which administrators can associate objects such as published applications, published desktops, and policies. Worker groups allow a set of similar servers to be grouped together and managed as a single entity. Worker groups are closely related to the concept of application silos; however, they streamline the creation of application silos by providing a way to synchronize the published applications and server settings across a set of XenApp servers. Worker groups are dynamic. For example, when AD containers are associated with a worker group, changes in the AD container are automatically reflected in the servers' worker group memberships. Servers can be added to worker groups by AD Organizational Units or Server Groups. This allows worker groups to be dynamically updated based on the servers' AD memberships; that is, as servers are added to or removed from the AD containers, they are automatically added to or removed from the respective worker groups. The CSP Corporation administrators have chosen to manage their XenApp farm through Active Directory. Figure 12 shows the Active Directory Organizational Unit (OU) structure for the XenApp farm.
In anticipation of future expansion, CSP Corporation administrators created two sets of worker groups: one set to group servers by tenant and one set to group applications by tenant. When the administrators add capacity for an existing tenant, they do not need to modify the list of published applications or desktops assigned to that tenant. Instead, they simply add another XenApp server to the tenant's OU. Figure 14 illustrates CSP Corporation's worker group structure in the DSC.
With dynamic provisioning, this step can be automated using AD by creating a base image for XenApp with all of the applications installed. To add capacity, simply create a new instance of the base image and add it to the desired tenant OU. The server receives its server settings from AD, joins the appropriate worker groups, and begins hosting published applications or desktops. Creating separate worker groups for desktops and applications gives CSP Corporation the flexibility to easily expand its tenant base.

Worker groups and Citrix policy filters

All Citrix server policies can be filtered by worker groups, which allows CSP administrators to restrict GPOs to a specific set of servers in the farm. For policies configured in the Delivery Services Console, this is the only way to assign different settings to different groups of servers, as all policies are replicated to all servers, completely independent of AD. Since CSP Corporation administrators have control over their XenApp OU, they use AD GPOs to manage the settings in the XenApp farm. For all user and site settings, they can link the GPO to the XenApp OUs without any filters. However, if they wish to deploy a setting specifically to Tenant1's servers or Tenant2's servers, they can add a worker group filter to the policy to limit it to the appropriate tenant.
2.6 Citrix policies
In XenApp 6, nearly all server, farm, and user settings are governed by Citrix group policies, which can be configured in three different ways:
1. Local Machine Policy (gpedit)
2. Active Directory Group Policy (gpmc)
3. XenApp Farm Group Policy (Policies node of the Citrix Delivery Services Console)

Local Machine Policy can be used for managing small farms, but large farms will use either AD or the Delivery Services Console to manage settings across multiple servers. AD offers the most powerful solution for administrators and supports managing settings across multiple XenApp farms. Administrators create a Group Policy Object (GPO) containing the desired Citrix policy settings and link the GPO to the appropriate tenant OUs. However, for Citrix administrators who do not have control over their AD environment, or whose organizations don't use AD for directory services, XenApp 6 provides farm-based group policies through the Policies node in the management console. Policies configured here are written to the XenApp data store and propagated to all servers in the farm. If multiple types of policies are created, the priority of policy enforcement (from low to high) is as follows:
o Local GPO
o Farm GPO
o Domain GPO

Based on CSP Corporation's multi-tenancy requirements, the only option is to utilize XenApp's AD Group Policy option. The administrators create Citrix policies at different
OU structure levels as displayed in Figure 16. In this case, the priority of policy enforcement (from low to high) is as follows:
1. Policy created at the Default Domain Policy level
2. Policy created at the top OU level
3. Policy created at the middle OU level
4. Policy created at the lowest OU level
The XenApp General GPO was created to apply to the XenApp farm as a whole. For each tenant, the CSP administrator creates new GPOs, or links existing GPOs, to the tenant's OU structure. For example, the Tenant1 GPO is a general tenant GPO created to apply policies to all of Tenant1's downstream OUs. The CTXRestrictedComputer GPO is linked to Tenant1's computer OU (Tenant1_Computers), and the CTXRestrictedUser and XASession GPOs are linked to Tenant1's user OU. The resultant Citrix policies applied to Tenant1's computer and user OUs will be the merged settings from all four GPOs. If there is a conflict among the policy settings of these GPOs, the settings in the Computer and User GPOs have the highest priority and will overwrite the settings in the Tenant1 GPO, XenApp General GPO, and Domain GPO.
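The merge behavior can be illustrated with a toy model. Real Citrix policy processing is more involved than a dictionary merge, and the setting names below are hypothetical, but the precedence order mirrors the description above: GPOs linked closer to the computer or user object win conflicts.

```python
def resultant_policy(*gpos):
    """Merge GPO setting dicts; later arguments have higher
    priority and overwrite earlier ones on conflict."""
    merged = {}
    for gpo in gpos:       # apply the lowest-priority GPO first
        merged.update(gpo)
    return merged

# Hypothetical setting names, listed from lowest to highest priority.
domain_gpo     = {"SessionReliability": "On", "ClientDriveMapping": "On"}
xenapp_general = {"ClientDriveMapping": "Off"}
tenant1_gpo    = {"Printing": "TenantDefault"}
restricted_gpo = {"ClientDriveMapping": "On", "Clipboard": "Off"}

result = resultant_policy(domain_gpo, xenapp_general, tenant1_gpo, restricted_gpo)
print(result["ClientDriveMapping"])  # 'On' -- the restricted GPO wins the conflict
```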
Policy refresh interval

Understanding how configured policies are refreshed and applied to the XenApp server can help the Citrix administrator troubleshoot policy-related issues. When Citrix policies are managed from the AD domain group policy, the sequence of policy refresh and update is as follows:
1. A change is made in the Group Policy Management Console (GPMC).
2. Within 1 to 2 hours, member servers pull and apply updates.
3. Every 3 hours, AD replication occurs between domain controllers.
When Citrix policies are managed from the Delivery Services Console, the sequence of policy refresh and update is as follows:
1. A change is made in the Delivery Services Console.
2. A member server writes the policy change to the data store and updates its LHC.
3. All farm servers pull policy information from the data store and update their LHCs.
4. Within 1 to 2 hours, member servers apply updates to the registry.
If needed, the IMA Service can be restarted to refresh computer policies immediately. User policies are refreshed when users log on or reconnect. The gpupdate /force command can be executed to force policy synchronization and update.
3.1
When the XenApp servers start up, they must establish a connection to the data store to read all of the configuration information needed to initialize the IMA Service. Next, the servers check out a server license from the license server. This allows the XenApp servers to grant grace-period licenses in the event the license server becomes unreachable: as long as a XenApp server has previously been able to connect to the license server at least once, it can still grant grace-period licenses even if the license server is unavailable at startup. Finally, the servers that provide access to applications and desktops register with the data collector.
After the XenApp server starts, the local host cache performs a consistency check every 30 minutes to ensure it is in sync with the data store, just in case it missed a directory change notification. If the data collector has not received an update from a member server within the last 60 seconds, it sends an IMAPing to ensure the server is still available. Also, every five minutes plus some randomized interval, the XenApp servers contact the license server to ensure it is still available.
In this case, a user wants to connect to an application or desktop using Citrix Receiver or Web Interface. The data collector performs the resolution and directs the client device to connect to the least-loaded server. When the user connects, the XenApp server contacts the license server to check out a concurrent license on behalf of the user. Once the user is connected, two things have changed on the server: the number of sessions and the server's load. The member server then updates the data collector with this information.
In the event that the data collector in Zone1 fails, the member servers recognize the server is offline through one of the many monitoring mechanisms and start the election process. When the new data collector is elected, all the member servers for the zone rebuild their dynamic tables with the new data collector. Although the original data collector went down, the user connection that was established in Figure 21 is unaffected; once the resolution process is complete, the data collector only passively tracks the session.
3.2
At this rate, the data collector would be able to handle 30,000 user logons in less than 25 minutes while CPU usage is only about 45%. If the logon rate is increased to consume 90% of the data collector's CPU, the farm would be able to log on 60,000 users in less than 25 minutes.
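The scaling claim above assumes logon throughput grows roughly linearly with data collector CPU, which a two-line calculation makes explicit:

```python
logons, minutes, cpu_pct = 30_000, 25, 45

rate_per_sec = logons / (minutes * 60)
logons_at_90 = logons * 90 / cpu_pct    # assumes roughly linear CPU scaling
print(f"{rate_per_sec:.0f} logons/sec; ~{logons_at_90:.0f} logons at 90% CPU")
```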
Figure 24 shows the data collector failover performance while serving user application requests. It takes about 25 seconds for the failover to occur.
3.3 Worker Groups
For XenApp 6, Citrix eLabs ran a variety of tests to ensure scalability and performance in large farms of up to 1,000 XenApp servers. The addition of worker groups did not add significant performance overhead, even in complex environments. Some of the key metrics found during testing:
o Application publishing to worker groups and load balancing policies had no measurable impact on application enumeration or load balancing times.
o The number of worker groups had minimal impact on discovery times for the management console. Adding 200 worker groups increased discovery time by 2.5 seconds, while 500 worker groups increased the time by 4.2 seconds.
o Worker groups and their memberships are cached in memory for performance. This results in an 8KB increase in memory consumption for every worker group in the farm.
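Applied to CSP Corporation's 100 tenant worker groups, these overheads are small. The 8 KB per group figure comes from the eLabs results; the linear fit between the two discovery-time data points is a rough extrapolation, especially below 200 groups.

```python
worker_groups = 100   # CSP Corporation's tenant silos

memory_kb = worker_groups * 8   # 8 KB of cached membership data per group

# Rough linear fit through the eLabs data points
# (200 groups -> +2.5 s, 500 groups -> +4.2 s).
slope = (4.2 - 2.5) / (500 - 200)
extra_seconds = 2.5 + slope * (worker_groups - 200)
print(memory_kb, round(extra_seconds, 1))  # 800 KB, roughly +1.9 s of discovery time
```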
3.4

Policies in XenApp can be managed through AD group policy or the Delivery Services Console (DSC). The following analysis was done to understand the network impact of policies on a user logon; bandwidth usage was compared across a varying number of policies and policy settings.
Policy configuration       Bandwidth (KB)    Incremental bandwidth over baseline (KB)
No policies                127 (baseline)    NA
1 policy / 5 settings      158               31
1 policy / 10 settings     147               20
2 policies / 5 settings    161               34
2 policies / 10 settings   154               27
5 policies / 10 settings   193               66
Table 3 compares the bandwidth consumption of a single user logon with Citrix policies enabled and settings configured against a single user logon with no policies enabled. Based on this data, additional policies and settings associated with a user add a small amount of bandwidth to each logon. Also, policy settings that are unchanged from their defaults are not transferred over the wire.
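Scaled to the CSP Corporation peak, the per-logon cost can be projected as follows. Treating all 100,000 logons as carrying the worst-case 5-policy/10-setting overhead is a deliberately pessimistic assumption.

```python
BASELINE_KB = 127     # logon with no Citrix policies (Table 3)
INCREMENT_KB = 66     # worst case in Table 3: 5 policies / 10 settings

logons = 100_000      # CSP Corporation's peak, all assumed worst-case

extra_mb = logons * INCREMENT_KB / 1024
total_mb = logons * (BASELINE_KB + INCREMENT_KB) / 1024
print(f"~{extra_mb:.0f} MB of policy overhead on ~{total_mb:.0f} MB of logon traffic")
```

Even under this pessimistic assumption, policy traffic adds only about a third to the baseline logon bandwidth.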
Consider these policy guidelines:
o Assign policies to groups rather than individual users. If you assign policies to groups, assignments are updated automatically when you add or remove users from the group.
o Do not enable conflicting or overlapping settings in Remote Desktop Session Host Configuration. In some cases, Remote Desktop Session Host Configuration provides functionality similar to Citrix policy settings. When possible, keep all settings consistent (enabled or disabled) for ease of troubleshooting.
o Disable unused policies. Policies with no settings added create unnecessary processing.
4.0 Conclusion
Companies of all sizes are looking for a smarter approach to managing the applications and data they use to run their business. More devices, more applications, and more places to work mean business owners have to spend an increasing amount of time on IT. Citrix Service Providers can shift the focus for their subscribers back to where it matters the most: growing the business. By offering a bundle of applications, desktops, and IT services, customers get what they want in a familiar, pay-as-you-go subscription model. XenApp can scale to meet the most demanding and complex business environments. With core architectural improvements made to XenApp from release to release, XenApp 6 is the most scalable, highest-performing release to date. XenApp provides the foundation for aggregating over 1,000 servers or tenants into a single management scope, ultimately providing a solution that enables service providers to build a flexible, scalable, and cost-effective architecture to meet their customers' needs.
About Citrix

Citrix Systems, Inc. (NASDAQ:CTXS) is the leading provider of virtualization, networking and software-as-a-service technologies for more than 230,000 organizations worldwide. Its Citrix Delivery Center, Citrix Cloud Center (C3) and Citrix Online Services product families radically simplify computing for millions of users, delivering applications as an on-demand service to any user, in any location, on any device. Citrix customers include the world's largest Internet companies, 99 percent of Fortune Global 500 enterprises, and hundreds of thousands of small businesses and prosumers worldwide. Citrix partners with over 10,000 companies in more than 100 countries. Founded in 1989, Citrix had annual revenue of $1.6 billion in 2008.
© 2011 Citrix Systems, Inc. All rights reserved. Citrix, Access Gateway, Branch Repeater, Citrix Repeater, HDX, XenServer, XenApp, XenDesktop and Citrix Delivery Center are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. All other trademarks and registered trademarks are property of their respective owners.