
WHITE PAPER | Citrix XenApp

Advanced Farm Administration with XenApp Worker Groups


XenApp Product Development

www.citrix.com

Contents
Overview
What is a Worker Group?
Introducing XYZ Corp
Creating Worker Groups
Application Publishing
Load Balancing Policies
Citrix Policy Filters
Worker Groups and Delegated Administration
XenApp Load Balancing
    Troubleshooting Load Balancing
Worker Group Internals
    Data Store Synchronization
    Application Installation Check
    Performance Metrics
Conclusion


Overview
The release of XenApp 6 adds powerful new features for XenApp administrators through integration with Active Directory (AD). All user and server settings can now be managed through AD policies, while applications and load balancing can be managed through a new container known as a worker group. Worker groups allow similar XenApp servers to be grouped together to greatly simplify the management of XenApp farms. By publishing applications and managing server settings via AD and worker groups, administrators can reduce the time to deploy new XenApp servers and increase the agility of their environment.

In this white paper, we consider a fictitious company with a large, geographically-distributed XenApp farm. This company must deliver applications to two distinct groups of users with different needs. This white paper outlines the new worker group features in XenApp 6 and shows how any company can leverage worker groups to simplify their farm management. Throughout the paper, we detail the best practices for creating and managing worker groups and how these can be applied in an enterprise XenApp deployment.

What is a Worker Group?


A worker group is simply a collection of XenApp servers in the same farm. Worker groups allow a set of similar servers to be grouped together and managed as one. "Worker" refers to the servers in a XenApp farm that host user sessions.

Worker groups are closely related to the concept of application silos. Many XenApp farm designs group hosted applications into silos, where a silo consists of a single worker image cloned to multiple machines in order to meet the capacity needs of that set of applications. All workers in the silo share the same list of published applications and identical XenApp server settings. Worker groups streamline the creation of application silos by providing a way to synchronize the published applications and server settings across a set of XenApp servers.

In previous releases of XenApp, servers were grouped into two containers: zones and server folders. These containers still exist in XenApp 6, and worker groups are added alongside them. In some cases, the worker group hierarchy may be similar to that of zones and server folders, but worker groups serve a separate purpose and are managed independently.

Zones in XenApp are used to control the aggregation and replication of data in the farm. A XenApp farm should be divided into zones based upon the network topology, where major geographic regions are assigned to separate zones. Each zone elects a data collector, which aggregates dynamic data from the servers in its zone and replicates the data to the data collectors in the other zones. However, best practices dictate that zones are only created for large sites interconnected by adequate WAN links; smaller sites should be consolidated into a larger zone to avoid replicating all of the network traffic of the farm's dynamic data to the smaller site. In past releases, zones were also used to control load balancing via the Zone Preference and Failover feature, but this has been replaced with finer-grained load balancing policies in XenApp 6. These policies eliminate the need to create zones for load balancing purposes. Worker groups should now be created to define load balancing policies, while zones should only be configured to control the data replication between data collectors.

Server folders in XenApp serve two purposes. First, they provide a tree hierarchy in order to organize servers in the management console. Second, they are used to control permissions for delegated administrators. Like server folders, worker groups allow arbitrary groupings of servers, but worker groups are much more flexible. The decision to create a new worker group container offers the following benefits:

1. A single server may belong to multiple worker groups. Unlike server folders, where a server can only belong to a single folder, servers can be grouped into worker groups for multiple reasons; for instance, servers may be grouped into worker groups both by their geographic region and by the applications they host.

2. Worker groups are more fine-grained than zones. Worker groups can be created to control load balancing within a single site. A worker group may even consist of a single server.

3. Worker groups can be dynamic. As we will see in the next section, when AD containers are added to a worker group, changes in the AD container are automatically reflected in the servers' worker group memberships.


Introducing XYZ Corp


XYZ Corp's XenApp farm is distributed across three sites as follows:

London: 4,000 employees and 100 servers
Miami: 8,000 employees and 150 servers
Atlanta: 500 employees and 10 servers

XYZ's XenApp farm is divided into two zones: USA and UK. This is because the Atlanta site is much smaller than the other two, and best practices for zone design dictate that it be combined with a larger, nearby site; in this case, the Miami site. For more details on zone design, refer to the section "Planning for WANs by Using Zones" in the XenApp 6 product documentation.

XYZ is a rapidly-growing company and has plans to expand capacity at their existing three sites and to add additional sites in the future. They want to ensure that their farm design is scalable and allows them to rapidly add new XenApp servers.

In addition, XYZ's XenApp farm contains two types of server images: one server image hosts over 100 productivity applications delivered to all employees of the company, while another image also includes 20 specialized CAD and other applications for the engineering group. However, because the CAD applications integrate with some of the productivity applications, both sets of applications must be installed on the engineering images. The diagram below illustrates XYZ Corp's farm configuration:


Figure 1 - XYZ Corp's XenApp farm: the Miami site (USA zone data collector), the London site (UK zone data collector), and the Atlanta site, connected by WAN links. Miami and London each host productivity servers and productivity + engineering servers; Atlanta hosts productivity servers only.

In the following sections, we will look at how XYZ Corp can leverage the new features of XenApp 6 to meet their business needs and to simplify the management of their XenApp farm.

Creating Worker Groups


In XenApp 6, worker group objects have been added to the Citrix Delivery Services Console and the XenApp PowerShell cmdlets. Only full Citrix administrators may create, modify, or delete a worker group. There are two ways to add servers to a worker group:

1. Servers may be explicitly added to a worker group by name. This allows administrators to add specific servers to a worker group and is the only option in non-AD environments.

2. Servers may be added by AD Organizational Units or Server Groups. This allows worker groups to be dynamically updated based on the servers' AD memberships. That is, as servers are added to or removed from the AD containers, they will be automatically added to or removed from the respective worker groups.

Worker groups can only consist of XenApp servers in the same farm. Non-XenApp servers, XenApp servers in other farms, users, and other objects in the AD container will not become part of the worker group. For this reason, creating a worker group with XyzCorp\Domain Computers will map to all servers in the farm belonging to the XyzCorp domain.

Note: Worker group operations are not instantaneous, particularly when worker groups are managed via AD containers. For more details on the expected delays, refer to the section "Data Store Synchronization" below.
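The filtering rule above (only XenApp servers in this farm become members, regardless of what else the AD container holds) can be sketched in Python. This is an illustrative model, not the Citrix implementation: the container names, server names, and the directory lookup are all hypothetical.

```python
def resolve_worker_group(ad_containers, directory, farm_servers):
    """Return the farm servers that fall inside any of the AD containers.

    directory maps container name -> list of AD objects (servers, users, ...).
    Non-XenApp objects and servers from other farms are ignored.
    """
    members = set()
    for container in ad_containers:
        for obj in directory.get(container, []):
            if obj in farm_servers:      # only servers in this farm qualify
                members.add(obj)
    return members

# A container holding a user and a foreign server contributes only farm servers.
directory = {
    "OU=Productivity Apps,OU=Atlanta": [
        "ATL-XA-01", "ATL-XA-02", "jsmith", "OTHERFARM-01",
    ],
}
farm_servers = {"ATL-XA-01", "ATL-XA-02", "MIA-XA-01"}

print(sorted(resolve_worker_group(
    ["OU=Productivity Apps,OU=Atlanta"], directory, farm_servers)))
# prints ['ATL-XA-01', 'ATL-XA-02']
```

The user object and the server from the other farm are simply ignored, which is why pointing a worker group at a broad container such as Domain Computers resolves only to this farm's servers.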

At XYZ Corp, administrators choose to manage their XenApp farm through Active Directory. They create an organizational unit (OU) for their XenApp farm and structure their servers as follows:

Figure 2 - OU structure for XYZ Corp's XenApp farm

Since XYZ Corp plans to add new sites in the future, they decide to create two sets of worker groups: one set that groups servers by application and one set that groups servers by geographic location. This way, when XYZ adds a new site, they do not need to modify the server lists of all 120 published applications. Instead, they simply add the site's OUs to the appropriate worker groups for the applications. The diagram and table below illustrate XYZ Corp's worker group structure:


Figure 3 - Worker group structure for XYZ Corp's XenApp farm

Worker Group: Apps\Engineering Apps
Organizational Units:
  OU=Engineering Apps,OU=London,OU=XenApp,DC=XyzCorp
  OU=Engineering Apps,OU=Miami,OU=XenApp,DC=XyzCorp

Worker Group: Apps\Productivity Apps
Organizational Units:
  OU=Productivity Apps,OU=Atlanta,OU=XenApp,DC=XyzCorp
  OU=Engineering Apps,OU=London,OU=XenApp,DC=XyzCorp
  OU=Productivity Apps,OU=London,OU=XenApp,DC=XyzCorp
  OU=Productivity Apps,OU=Miami,OU=XenApp,DC=XyzCorp
  OU=Engineering Apps,OU=Miami,OU=XenApp,DC=XyzCorp

Worker Group: Sites\Atlanta\Atlanta - Productivity
Organizational Units:
  OU=Productivity Apps,OU=Atlanta,OU=XenApp,DC=XyzCorp

Worker Group: Sites\London\London - Engineering
Organizational Units:
  OU=Engineering Apps,OU=London,OU=XenApp,DC=XyzCorp

Worker Group: Sites\London\London - Productivity
Organizational Units:
  OU=Productivity Apps,OU=London,OU=XenApp,DC=XyzCorp

Worker Group: Sites\Miami\Miami - Engineering
Organizational Units:
  OU=Engineering Apps,OU=Miami,OU=XenApp,DC=XyzCorp

Worker Group: Sites\Miami\Miami - Productivity
Organizational Units:
  OU=Productivity Apps,OU=Miami,OU=XenApp,DC=XyzCorp

Table 1 - Worker group structure for XYZ Corp's XenApp farm

In the following sections, we will see how these worker groups are integrated with three XenApp features: application publishing, load balancing policies, and Citrix policy filters.


Application Publishing
Each published application in XenApp contains a list of servers hosting that application. XenApp 6 supports adding worker groups to an application's server list, which greatly simplifies silo and capacity management. In previous releases of XenApp, managing a silo of servers required ensuring each application in the silo was published to all servers in the silo. For example, the diagram below illustrates the application/server mappings of a 3-server silo hosting Microsoft Office applications.

Figure 4 - Application/server mappings of a Microsoft Office silo with XenApp 5 (Word, Excel, and Outlook are each published to every server individually)

With XenApp 5, each of the three servers had to be added to the server list of each of the Microsoft Office applications. However, with XenApp 6, this deployment can be simplified using worker groups. Instead of publishing each application to each server, a worker group can be created containing the servers hosting the Microsoft Office applications. Instead of adding individual servers, the worker group is added to the server list of each of the applications.


Figure 5 - Application/server mappings of a Microsoft Office silo with XenApp 6 (Word, Excel, and Outlook are each published to the worker group)

In the future, to increase capacity in the application silo, a new server is added to the worker group. This eliminates the need to manually modify the properties of each published application hosted by the server. With dynamic provisioning, this step can even be automated using AD: create a base image for each application silo with XenApp and all applications installed. To add capacity, create a new instance of the base image and add it to the desired OU. The server will receive its server settings from AD, join the appropriate worker groups, and begin hosting published applications.

It is important that all applications are installed before a new server is added to an application silo's OU. XenApp 6 adds a new application installation check at load balancing time. However, this check is only intended to prevent a few misconfigured servers from accepting user connections and is not meant to be used for normal load balancing. For more information about this check, see the section "Application Installation Check" below.

One side effect of publishing applications via worker groups is that XenApp 6 does not allow customizing the application's command line, working directory, or application load evaluator on a per-server basis. Administrators may continue to use system environment variables in the command line to support per-server customizations.

Another important change between XenApp 5 and XenApp 6 is the behavior of the user's list of published applications. Users are no longer required to have access to all servers that host the application and may be restricted to a subset of servers using load balancing policies, covered in the next section. In many circumstances, this change eliminates the need to publish multiple copies of the same application.

At XYZ Corp, the administrators create two worker groups specifically for publishing applications: the Productivity Apps and Engineering Apps worker groups. As noted in Table 1, the administrators configured Productivity Apps to contain the servers from both the productivity and engineering silos. With previous releases of XenApp, XYZ would have had to publish separate copies of all 100 productivity applications, one for the engineering users and one for all others, because the users use different servers. However, with XenApp 6, XYZ can publish the productivity apps to both sets of servers and use load balancing policies to direct users appropriately. The application properties on XYZ's farm would appear as follows:

Productivity Applications
  Servers: Worker Groups\Apps\Productivity Apps
  Users: XYZCorp\All Employees

Engineering Applications
  Servers: Worker Groups\Apps\Engineering Apps
  Users: XYZCorp\Engineering

Creating separate worker groups for application publishing gives XYZ Corp the flexibility to expand their farm in the future. To add additional capacity to their existing sites, XYZ can simply add new servers to the appropriate OUs. When they expand their farm to another site, they can create OUs for the new site and add these to the two worker groups above. There is no need to change individual application settings.


Load Balancing Policies


Most, but not all, user settings have been moved into AD Group Policy Objects (GPOs) in XenApp 6. Settings used during load balancing are required before the user's session is launched, and are therefore needed before the GPOs are evaluated for the user. These settings are now located in the Load Balancing Policies node of the Citrix Delivery Services Console.

Load balancing policies include a new feature in XenApp 6: Worker Group Preference. This feature solves the general use case where a specific set of users needs to be load balanced to a specific set of servers. Some of the reasons include:

  Directing users to the closest site to reduce WAN traffic and maximize the user experience
  Directing users to a backup site for disaster recovery
  Dedicating servers to a specific group of users

When the box "Configure application connection preference based on worker group" is checked in a load balancing policy, the administrator can configure a prioritized list of worker groups. When a user defined by the policy launches a published application, load balancing will return servers in the order of the priorities configured. Servers at a lower priority level will only be returned if all servers at a higher priority level are offline or fully loaded (10,000 load). This feature is a superset of, and replaces, the Zone Preference and Failover feature in previous releases, with two major differences:

1. This feature is not tied to zones. While worker groups may be created based upon sites and contain the same servers as a zone, worker groups may also be more fine-grained than zones.

2. Unlike the Zone Preference and Failover feature in previous releases, users are not directed to servers in worker groups that are not included in the worker group preference list, even if all servers in the preference list are unavailable.
Note: To replicate the behavior of Zone Preference and Failover, simply create an All Servers worker group, and place this group at the lowest priority of all worker groups in the preference list. This will ensure that users are always directed to any available server as a last resort.
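The failover rule described above (lower-priority worker groups are tried only when every higher-priority server is offline or at 10,000 load) can be sketched as a small Python model. This is illustrative only, not Citrix code; server names and states are hypothetical.

```python
FULL_LOAD = 10000  # a server at 10,000 load is considered fully loaded

def pick_server(preference_list, server_state):
    """Walk worker groups in priority order; within a group, return the
    least-loaded server that is online and not fully loaded.

    server_state maps server name -> (online, load)."""
    for group in preference_list:
        candidates = [
            (load, name)
            for name in group
            for online, load in [server_state[name]]
            if online and load < FULL_LOAD
        ]
        if candidates:
            return min(candidates)[1]
    return None  # no fallback outside the preference list

state = {
    "LON-01": (True, FULL_LOAD),   # online but fully loaded
    "LON-02": (False, 0),          # offline
    "MIA-01": (True, 4000),
}
print(pick_server([["LON-01", "LON-02"], ["MIA-01"]], state))  # MIA-01
```

The `return None` branch captures the second difference from Zone Preference and Failover: with no "All Servers" group at the bottom of the list, an exhausted preference list yields no server at all.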


Load balancing policies may be used to restrict which servers are returned to a user by load balancing, but note that this does not prevent users from directly connecting to servers outside of their policy. To restrict users' access to specific servers, always configure user groups on published applications and/or the Remote Desktop Users group in conjunction with load balancing policies.

Load balancing policies are evaluated when a user logs in to Web Interface or refreshes applications in the Citrix online plug-in. For performance, the resultant settings are then cached on the Web Interface server or on the user's Citrix online plug-in and used during each application launch. In the case where multiple load balancing policies apply to a single user, the worker group preference list from the highest-priority policy will be used. Only servers in this preference list will be returned by load balancing; XenApp will not consider preference lists from lower-priority policies in the load balancing calculations.

Note: Load balancing policies behave differently than user policies when no filters are applied. In a user policy, a policy with no filters applies to all users. A load balancing policy must have at least one filter; otherwise it applies to no users.

At XYZ Corp, the administrators must ensure that engineering users are always load balanced to one of the engineering images, and that all users are load balanced to servers at the nearest site and fail over to a remote site if the nearest site goes down. In this example, the site is selected by IP range:

  10.4.0.0/16: London
  10.6.0.0/16: Miami
  10.8.0.0/16: Atlanta


The administrators then configure five load balancing policies:

Policy 1: London - Engineering
  Filters: Client IP Address: 10.4.0.0/16; Users: XyzCorp\Engineering
  Worker Group Preferences: 1: London - Engineering; 2: Miami - Engineering

Policy 2: USA - Engineering
  Filters: Client IP Address: 10.6.0.0/16, 10.8.0.0/16; Users: XyzCorp\Engineering
  Worker Group Preferences: 1: Miami - Engineering; 2: London - Engineering

Policy 3: London - Productivity
  Filters: Client IP Address: 10.4.0.0/16
  Worker Group Preferences: 1: London - Productivity; 2: Miami - Productivity; 3: Atlanta - Productivity

Policy 4: Miami - Productivity
  Filters: Client IP Address: 10.6.0.0/16
  Worker Group Preferences: 1: Miami - Productivity; 2: Atlanta - Productivity; 3: London - Productivity

Policy 5: Atlanta - Productivity
  Filters: Client IP Address: 10.8.0.0/16
  Worker Group Preferences: 1: Atlanta - Productivity; 2: Miami - Productivity; 3: London - Productivity

Table 2 - XYZ Corp's load balancing policies

Users will receive the worker group preference list from the highest-priority policy with matching filters. Thus, users from the 10.8.0.0/16 IP address range will receive the USA - Engineering policy if they are members of XyzCorp\Engineering, while they will receive the Atlanta - Productivity policy if they are not. With this configuration, XYZ Corp can deliver separate sets of applications to the two different groups of users within the company and also ensure proper failover if a failure occurs at one site.
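The policy evaluation XYZ Corp relies on (the highest-priority policy whose filters all match wins, and lower-priority matches are ignored) can be modeled in Python using the five policies from Table 2. This is a simulation of the documented behavior, not the XenApp implementation.

```python
import ipaddress

# Policies in priority order: (name, IP ranges, required group or None, preferences)
policies = [
    ("London - Engineering",   ["10.4.0.0/16"],                "Engineering",
     ["London - Engineering", "Miami - Engineering"]),
    ("USA - Engineering",      ["10.6.0.0/16", "10.8.0.0/16"], "Engineering",
     ["Miami - Engineering", "London - Engineering"]),
    ("London - Productivity",  ["10.4.0.0/16"],                None,
     ["London - Productivity", "Miami - Productivity", "Atlanta - Productivity"]),
    ("Miami - Productivity",   ["10.6.0.0/16"],                None,
     ["Miami - Productivity", "Atlanta - Productivity", "London - Productivity"]),
    ("Atlanta - Productivity", ["10.8.0.0/16"],                None,
     ["Atlanta - Productivity", "Miami - Productivity", "London - Productivity"]),
]

def preference_list(client_ip, user_groups):
    """Return (policy name, worker group preferences) for the first match."""
    ip = ipaddress.ip_address(client_ip)
    for name, ranges, required_group, prefs in policies:
        if not any(ip in ipaddress.ip_network(r) for r in ranges):
            continue
        if required_group is not None and required_group not in user_groups:
            continue
        return name, prefs  # only the highest-priority match is considered
    return None, []         # no filters matched: the user gets no preference list

print(preference_list("10.8.1.5", {"Engineering"})[0])  # USA - Engineering
print(preference_list("10.8.1.5", set())[0])            # Atlanta - Productivity
```

An Atlanta client (10.8.0.0/16) resolves to the USA - Engineering policy for engineers and to Atlanta - Productivity for everyone else, matching the behavior described above.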


Citrix Policy Filters


In XenApp 6, nearly all server, farm, and user settings are governed by Citrix policies, which can be configured in three different ways:

  Local Machine Policy (gpedit)
  Active Directory Group Policy (gpmc)
  The Policies node of the Citrix Delivery Services Console

The local machine policy can be used for managing small farms, but large farms will use either AD or the Delivery Services Console to manage settings across multiple servers. AD offers the most powerful solution for administrators and supports managing settings across multiple XenApp and XenDesktop farms. Administrators create a GPO containing the desired Citrix policy settings and link the GPO to the appropriate OUs. However, for Citrix administrators who do not have control over their AD environment, XenApp 6 provides the Policies node of the management console. Policies configured here are written to the XenApp data store and propagated to all servers in the farm.

All Citrix server policies can be filtered by worker groups, which allows administrators to restrict GPOs to a specific set of servers in the farm. For policies configured via the Delivery Services Console, this is the only way to assign different settings to different groups of servers, since all policies are replicated to all servers, completely independent of AD.

Note: The Worker Group filter works based on the name of the worker group. If the worker group is renamed or deleted, the policy will no longer apply to any servers in the farm.
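The note above has a subtle consequence that is easy to demonstrate: because the filter stores only the worker group's name, renaming the group silently detaches the policy. A minimal Python model (illustrative names, not the Citrix data structures):

```python
# The Worker Group filter stores the group's *name*, not a durable identity.
worker_groups = {"Engineering Apps": {"XA-01", "XA-02"}}
policy_filter = {"worker_group": "Engineering Apps"}

def policy_applies_to(server):
    """A policy applies to a server only while a group with the stored
    name still exists and contains that server."""
    members = worker_groups.get(policy_filter["worker_group"], set())
    return server in members

assert policy_applies_to("XA-01")

# Renaming the worker group silently breaks the filter:
worker_groups["Engineering"] = worker_groups.pop("Engineering Apps")
assert not policy_applies_to("XA-01")
```

After the rename, the policy still exists and looks valid in the console, but its filter resolves to no servers; administrators should re-point the filter whenever a worker group is renamed.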

Since XYZ Corp's administrators have control over their XenApp OU, they use AD GPOs to manage the settings in their XenApp farm. For user and per-site settings, they can link the GPO to the appropriate site OUs without any filters. However, if they wish to deploy a setting specifically to the engineering servers or productivity servers, they can add a Worker Group filter to the policy to limit it to the appropriate server type.


Worker Groups and Delegated Administration


Worker groups may only be created or modified by full Citrix administrators. Since policy filtering and application publishing can now be controlled via worker groups, the ability to add or remove an AD object from a worker group gives administrators control of nearly all XenApp features. To delegate control of a worker group to an administrator, the full administrator should create an OU in Active Directory, create a worker group pointing to this OU, and then delegate control of that OU to the delegated administrator. This way, the delegated administrator can add or remove servers and create GPOs for that specific OU.

Two additional delegated administration tasks are provided in XenApp 6: View Worker Groups and Assign Applications to Worker Groups. The first task is self-explanatory. The second works like the Assign Applications to Servers task in previous releases: it allows the worker group to appear in the server browser of the application publishing wizard.

Finally, note that the AD browser in the management console runs with the credentials of the user running the console. Unlike the Citrix User Selector, which is used for browsing AD users, XenApp does not do trust routing to allow Citrix administrators running as local users to browse AD objects. Citrix administrators running as non-AD users will see SIDs and GUIDs of AD containers when browsing worker groups, and will not be able to add an AD container to a worker group without entering domain credentials.¹

At XYZ Corp, the company has dedicated IT staff at each of its three sites. In order to allow the site administrators to add and remove servers, XYZ delegates control of the three site-specific OUs to the appropriate IT staff. Since the applications are already published to the appropriate OUs via worker groups, the delegated administrators can add or remove servers from the silos they manage using AD; they do not need the Publish Applications and Edit Properties or Assign Applications to Servers permissions in XenApp. Session management tasks continue to be controlled via the folder-level delegated administration tasks in XenApp.

¹ The same holds true for the XenApp PowerShell cmdlets. To add an AD object to a worker group, the Citrix administrator running the cmdlets must be a domain user.

XenApp Load Balancing


The server lists of published applications, load balancing policies, worker groups, and load evaluators all control how users are directed to XenApp servers. When a user clicks on an application, the data collector is responsible for selecting a server to host the user's session from the list of servers and worker groups in the application properties. When an application is published to multiple servers, one of the servers is selected in the following order:

1. If the user has a disconnected session with the desired application, that server is returned.

2. If the user already has a session where the application can be launched with session sharing, the existing session is used.

3. A server is selected from the highest-priority worker group of the load balancing policy's preference list. If multiple servers in the worker group host the published application, the least-loaded server is returned.

4. If no servers that host the application are available in the highest-priority worker group, because no servers in the worker group host the application or all servers are offline or fully loaded (10,000 load), lower-priority worker groups are tried in the order given by the load balancing policy.

Before returning the server to the user, a check is done to ensure that the application is installed on the server. If this check fails, the data collector continues searching for another server to return using the priorities above.

In addition to understanding the criteria for load balancing listed above, it is also important to note what is not considered in the load balancing algorithm. The administrator responsible for publishing applications and configuring load balancing policies should consider the following:

1. XenApp 6 does not check whether the user has permission to log on to the returned server. This is an issue particularly in farms spanning multiple untrusted domains. To avoid issues, either publish separate copies of applications for each domain or configure Worker Group Preference policies to ensure users are always directed to servers in the correct domain.

2. XenApp 6 does not check whether the graphics settings on the application (such as color depth and resolution) match the server's settings configured via Citrix policies. The application settings will not be honored if the server enforces lower values.
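The selection order above can be condensed into a single sketch. This simplified Python model folds steps 1 and 2 into one reconnect lookup, treats load as a plain number (None meaning offline), and is illustrative only.

```python
FULL_LOAD = 10000

def resolve_launch(user, app, reconnectable, preference_list,
                   load, hosts_app, installed):
    """Simplified model of the XenApp 6 server selection order.

    reconnectable: (user, app) -> server with an existing/shareable session
    load: server -> current load, or None if offline
    hosts_app: servers the application is published to
    installed: servers where the executable actually exists
    """
    # 1-2. A disconnected or shareable session wins outright.
    if (user, app) in reconnectable:
        return reconnectable[(user, app)]
    # 3-4. Walk worker groups in preference order; within a group, try
    # servers from least- to most-loaded, skipping offline/full servers.
    for group in preference_list:
        candidates = sorted(
            (load[s], s) for s in group
            if s in hosts_app and load[s] is not None and load[s] < FULL_LOAD
        )
        # Final check before returning: the application must actually be
        # installed; otherwise the data collector keeps searching.
        for _, server in candidates:
            if server in installed:
                return server
    return None

load = {"A": 2000, "B": 500, "C": None}  # C is offline
result = resolve_launch("alice", "Payroll", {}, [["A", "B", "C"]],
                        load, {"A", "B", "C"}, installed={"A"})
print(result)  # A (B is least loaded but fails the install check)
```

The example shows the install check overriding pure load order: the least-loaded server B is skipped because the application is missing there, and the search continues within the same priority level.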

Troubleshooting Load Balancing


Load balancing involves multiple settings working together to direct a user to a server, so it can be difficult to troubleshoot issues when load balancing behaves unexpectedly. Because of this, Citrix has released a new tool with XenApp 6, LBDiag, which assists administrators in diagnosing load balancing issues. To download this tool, see CTX124446 in the Citrix Knowledge Center. LBDiag simulates the load balancing for a user launching a specific application and shows the load balancing process that XenApp will use. The most reliable way to use this tool is to create a test user account belonging to the same groups as the actual user. If this is not an option, LBDiag also supports listing local and domain groups explicitly on the command line. Administrators may also specify the name of a load balancing policy, if they are certain of which policy is being applied to that user.

For example, XYZ Corp is experiencing a problem where users in Atlanta are being directed to servers in Miami when they launch the Payroll application. To troubleshoot this, they use the XYZCorp\TestUser account, which has the same group memberships as a typical employee, and run LBDiag on one of the servers in their XenApp farm.


LBDiag first uses the test user's credentials to enumerate all groups to which the user belongs. This is useful for understanding which load balancing policy applies to the user. In this case, the combination of the user not belonging to XYZCorp\Engineering plus the client IP in the 10.8.0.0/16 range causes the Atlanta - Productivity load balancing policy to be applied, whose worker group preference list is shown next. Finally, LBDiag displays the servers in the order they would be returned by load balancing. In this case, LBDiag indicates the problem with the Atlanta site: of the 10 servers, two servers are missing the application, six are fully loaded, and two are offline. Since none of the servers are available, users are failing over to servers in the Miami site.


Worker Group Internals


This section covers the low-level details of the worker group implementation in XenApp 6 and its impact on scalability in large environments.

Data Store Synchronization


Worker groups rely on the XenApp data store to store the mapping between worker groups and servers. The Citrix IMA Service on each server in the farm is responsible for determining which worker groups it belongs to and keeping this membership up-to-date in the data store. This was done for multiple reasons:

1. It ensures all servers in the farm have a consistent view of worker group membership, regardless of AD replication latency. This is critical for application publishing to ensure that the data collector does not load balance users to a server before that server knows that it has been added to a published application.

2. The data store provides support for farms spanning multiple domains. A data collector can load balance to servers even if the worker group contains AD objects from an untrusted domain.

3. It improves performance in complex load balancing scenarios. An enterprise deployment may have applications published to dozens of worker groups and numerous load balancing policies, each with worker groups consisting of multiple AD objects. Since the mapping between worker groups and servers is stored in the data store and cached in the Local Host Cache, these complex load balancing decisions can be made without any additional AD queries from the data collector.

Servers update their worker group membership in the data store in three cases:

1. Servers recalculate their worker group membership when the Citrix IMA Service starts.

2. When a notification is received that an administrator modified a worker group, servers recalculate their membership for that specific worker group.

3. Every five minutes, each server checks whether its AD membership (OU and/or groups) has changed.


Servers cannot update their worker group membership if the data store is down. Because of this, the data collector will continue load balancing using the worker group memberships stored in its LHC until the data store comes online, regardless of any changes made in AD. Once the data store is available, any AD changes will be reflected in load balancing within five to ten minutes. The latency of other operations is described in the diagram and table below:

Figure 1 - Expected latency of various worker group tasks

Task | Expected Delay
Add/remove a server from an OU or group in Active Directory | Up to 96 minutes. Rebooting the server can force an update sooner.
Add/remove an OU, group, or individual server from a worker group | 15 to 55 seconds
Add/remove an individual server or worker group from a published application | 5 to 40 seconds
Modify load balancing policies | Next user logon (Web Interface) or application refresh (Citrix online plug-in)

Table 2 - Expected latency of various worker group tasks
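The LHC fallback described before the table (the data collector continuing to load balance from its Local Host Cache while the data store is down) amounts to a cache-aside read with stale-on-failure semantics. A minimal sketch, with all names invented for illustration:

```python
# Hedged sketch of LHC fallback behavior; DataStoreUnavailable and the
# function names are illustrative, not IMA internals.

class DataStoreUnavailable(Exception):
    """Raised by the data store reader when the database is unreachable."""

def worker_group_members(read_data_store, lhc_cache, worker_group):
    """Return the worker group's membership, refreshing the LHC on a
    successful read and falling back to the last cached value (possibly
    stale) while the data store is down."""
    try:
        members = read_data_store(worker_group)
    except DataStoreUnavailable:
        return lhc_cache.get(worker_group, [])
    lhc_cache[worker_group] = members
    return members
```

Because load balancing reads only ever touch the cache path on failure, an outage freezes (rather than breaks) worker group membership, which matches the five-to-ten-minute catch-up once the data store returns.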


Application Installation Check


XenApp 6 adds a new check during load balancing to ensure that the published application exists on the server being returned by load balancing. The Citrix Services Manager service now verifies that the file specified in the application's command line exists on the server selected by load balancing. If this check fails, the following error is logged to the Application event log of the data collector:

Application MyApp is published to server MyServer, but the command line "C:\Program Files\MyApp\MyApp.exe" is not valid on MyServer. Verify the correct servers and/or worker groups are assigned to MyApp and ensure that the application is installed on MyServer.

Note: Because the application installation check is performed before the user's session is created, user environment variables can no longer be used in an application's command line. Only system environment variables are supported in XenApp 6.

The application installation check retries load balancing up to five times to return a valid server to the user. This check is intended to prevent a few misconfigured servers from creating a black hole condition in the XenApp farm. However, administrators should always ensure that applications are installed at the correct locations on the correct servers, and not rely on this check for day-to-day load balancing.
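The check and its retry behavior can be sketched as follows. This is a simplified model under stated assumptions: `select_next_server` and `file_exists_on` are hypothetical stand-ins for load balancing and the remote file check, and the system-only expansion mirrors the restriction noted above (no user session exists yet, so no user variables).

```python
# Hedged sketch of the application installation check; function names
# and the retry structure are illustrative, not Citrix internals.

MAX_RETRIES = 5  # the check retries load balancing up to five times

def expand_system_only(command_line, system_env):
    """Expand only *system* environment variables: the check runs before
    the user's session is created, so user variables do not exist yet."""
    for name, value in system_env.items():
        command_line = command_line.replace("%%%s%%" % name, value)
    return command_line

def load_balance_with_install_check(select_next_server, file_exists_on,
                                    command_line, system_env):
    """Keep asking load balancing for a server until one is found where
    the published command line resolves to an existing file, preventing
    a few misconfigured servers from black-holing every launch."""
    path = expand_system_only(command_line, system_env)
    for _ in range(MAX_RETRIES):
        server = select_next_server()
        if server is None:
            break  # load balancing has no more candidates
        if file_exists_on(server, path):
            return server
        # here the real service logs the "command line ... is not valid
        # on <server>" event shown above
    return None  # no valid server found; the launch fails
```

Note how a server that fails the check is simply skipped and load balancing is re-run, rather than failing the launch outright.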

Performance Metrics
For XenApp 6, Citrix eLabs ran a variety of tests to ensure scalability and performance in large farms of up to 1,000 XenApp servers. The addition of worker groups did not add significant performance overhead, even in complex environments. Some of the key metrics found during testing:

1. Application publishing to worker groups and load balancing policies had no measurable impact on application enumeration or load balancing times.

2. The number of worker groups had minimal impact on discovery times for the management console. Adding 200 worker groups increased discovery time by 2.5 seconds, while 500 worker groups increased it by 4.2 seconds.


3. Worker groups and their memberships are cached in memory in every IMA service for performance. This results in an increase in memory consumption of 8 KB for every worker group in the farm.

Conclusion
Worker groups and their integration with Active Directory add powerful new features to XenApp 6. These features greatly simplify farm management by streamlining application publishing, providing fine-grained control of load balancing, and allowing management of server settings across different groups of servers in the farm. Creating an AD and worker group hierarchy should be part of every XenApp 6 farm design. With appropriate planning, XenApp 6 greatly reduces the time to provision new servers and allows the farm to adjust dynamically to business and capacity needs.


About Citrix

Citrix Systems, Inc. (NASDAQ:CTXS) is a leading provider of virtual computing solutions that help companies deliver IT as an on-demand service. Founded in 1989, Citrix combines virtualization, networking, and cloud computing technologies into a full portfolio of products that enable virtual workstyles for users and virtual datacenters for IT. More than 230,000 organizations worldwide rely on Citrix to help them build simpler and more cost-effective IT environments. Citrix partners with over 10,000 companies in more than 100 countries. Annual revenue in 2009 was $1.61 billion.

© 2010 Citrix Systems, Inc. All rights reserved. Citrix, Access Gateway, Citrix Receiver, HDX, XenServer, XenApp, and XenDesktop are trademarks of Citrix Systems, Inc. and/or one or more of its subsidiaries, and may be registered in the United States Patent and Trademark Office and in other countries. All other trademarks and registered trademarks are property of their respective owners.

