
What are the commonly used Citrix commands?

dsmaint
dsmaint config [/user:username] [/pwd:password] [/dsn:filename]
dsmaint backup destination_path
dsmaint failover direct_server
dsmaint compactdb [/ds] [/lhc]
dsmaint migrate [{ /srcdsn:dsn1 /srcuser:user1 /srcpwd:pwd1}] [{/dstdsn:dsn2
/dstuser:user2 /dstpwd:pwd2}]
dsmaint publishsqlds {/user:username /pwd:password}
dsmaint recover
dsmaint recreatelhc
dsmaint verifylhc
driveremap
driveremap /drive:M
driveremap /u
driveremap /noreboot
driveremap /IME
dscheck
dscheck [Options] [ /full | /clean]
[ Servers | Apps | Printers | Groups | MSLicense | Folders | Licenses ]
dscheck /full Servers [ServerName] [Options] ServerName specifies the server to verify, clean or delete. May be left blank; defaults to all servers.
/Clean Modify the data store to correct the errors.
/DeleteAll Delete the server entries from the data store.
/DeleteMF Delete the MetaFrame Server entry from the data store.
/DeleteComSrv Delete the Common Server entry from the data store.
dscheck /full Apps [Options]
<AppName> Verify/Clean or Delete the application. May be left blank. Defaults to all applications.
/Clean Modify the data store to correct the errors.
/ServerCheck Verify that all applications are hosted by valid servers.
/DeleteMF Delete the MetaFrame Application entry from the data store.
/DeleteComApp Delete the Common Application entry from the data store.
dscheck /full Printers [Options]
/purge_replications Removes all printer replications from the data store.
/purge_client_printers Removes all Client Auto-Create printers pending deletion from the data store.
/purge_drivers Removes all drivers that are not associated with any servers from the data store.
dscheck /full Groups [Options]
/Clean Removes the group object (GroupName is the relative DN from the Context) or removes the group from its parent group.
Use the output of DSCHECK.exe GROUPS /verify for both ParentGroupName and GroupName.
dscheck /full MSLicense [Options]
/purge_licenses Removes all Microsoft Licenses from the data store.
/list Lists all Microsoft Licenses in the data store.
dscheck /full Folders /clean Collapse orphaned folders in the data store.
dscheck /full Licenses /clean Removes all corrupt licenses from the data store.
altaddr
altaddr [/server:servername] [/set alternateaddress ] [/v]
altaddr [/server:servername] [/set adapteraddress alternateaddress] [/v]
altaddr [/server:servername] [/delete] [/v]
altaddr [/server:servername] [/delete adapteraddress] [/v]
query
query view information about server farms, processes, servers, ICA sessions and
users
query farm shows the servername, protocol and ip address
query farm /app shows the published applications
query farm /disc shows the disconnected session data for the server farm
query farm /load displays server load information
query user displays the current connections
queryhr is used to display info about the member servers in the farm. Executing
queryhr with no parameters lists all servers in the farm.
chfarm is used to change the farm membership of a Citrix server
icaport is used to query or change the TCP/IP port number used by the ICA protocol
imaport is used to change the IMA port used by the server
ctxxmlss is used to change the XML service port
enablelb is used to re-enable a server for load balancing after it has failed
twconfig is used to configure ICA display settings

auditlog is used to view a report of user logon and logoff activity. With auditlog /time we can see how much time users spent on the servers.
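As a quick illustration, the commands above can be combined into a routine farm health check. A minimal sketch, run from a PowerShell prompt on a farm server (output will vary with the farm, and auditlog assumes audit logging is available):

# list the servers in the farm with their current load values
query farm /load
# show the current user connections
query user
# verify that the local host cache on this server is intact
dsmaint verifylhc
# report user logon/logoff activity, including time spent on the servers
auditlog /time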

Basics of vDisks and Provisioning desktops


In XenDesktop, MCS is one of the options for provisioning virtual desktops on the
underlying Hypervisor, Host Infrastructure. The other method is PVS. Before
XenDesktop 7 these virtual machines were based on client operating systems. From
XenDesktop 7, MCS can also provision virtual machines based on server operating
systems. However, Personal vDisks are only applied in VDI, i.e., client OS.
When creating virtual client OS based desktops using MCS, we could either create
Pooled (Random or Static) or Dedicated desktops:
Pooled-Random: Desktops are assigned randomly. Each time a user logs on he could get a different desktop from the pool. When the desktop is rebooted, any changes made are deleted.
Pooled-Static: Desktops are permanently assigned to a single user. Once a user logs on, a desktop is assigned to that specific user, and each time the user logs on he will get the same machine. When the desktop is rebooted, any changes made are deleted.
Dedicated Desktop: As with Pooled-Static, desktops are permanently assigned to a single user. However, any changes made persist across subsequent reboots.
In Pooled desktops users cannot install or update applications themselves and their changes will not be saved when rebooting or logging off, but management is easy and storage requirements are low. In Dedicated desktops users are happy, but cost per user rises because of increased storage size, and management becomes more complicated.
Provisioning Process
The first step in provisioning a Pooled (random or static) or Dedicated desktop is to create the base (master), or golden, image; this is used as a template for future VM provisioning. Then we create a VM and assign vCPU, memory and disk space, and install the OS, applications, antivirus, the XenDesktop Virtual Delivery Agent and so on. The next step is to create a Machine Catalog from XenDesktop Studio, which launches MCS in the background, or we can start the MCS wizard manually. We need to decide what type of Catalog we want to create and select the Machine Type. If we select Pooled desktops, we then have to choose whether desktops are assigned to users randomly or statically when they log on.

Then we need to select the base (master) image, decide how many VMs we would like to provision, and set the base resources to assign to each VM, such as the number of vCPUs, the amount of memory and the hard disk size. We also choose whether new AD computer accounts should be created automatically or existing ones used instead.
On the Create accounts page, the next step, we can select the OU in Active Directory where these VMs are to be created; we need administrative permissions to do this.
Finally we click Finish on the summary screen. MCS will now create the number of machines specified and add two disks to each machine: an identity disk (16 MB), which provides the VM with a unique identity (also used for Active Directory), and a differencing disk used to store all writes made to the virtual machine, which is linked to the read-only copy of the master VM (or the snapshot taken from it). If the storage solution supports it, this disk can be thin provisioned; otherwise it will be as big as the base (master) virtual machine mentioned before. Note that each VM provisioned by MCS gets its own identity and differencing disk.
During the process MCS will take a snapshot of the base (master) VM, unless you
manually created and selected a snapshot while doing one of the previous steps.
Doing it this way gives you the option to name the snapshot; otherwise XD will name it automatically.
Next, MCS creates a full copy (or clone) of the snapshot and places it in the storage repository. This is a read-only copy shared by all VMs. If we have multiple storage repositories, each repository used by the catalog receives its own copy. In short, these are the steps needed to set up your Pooled or Dedicated desktop catalog. We have left out some smaller steps in between (for example, the machines also get registered in Active Directory), but this gives us the big picture.
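To see the result of this process from the Delivery Controller, the XenDesktop PowerShell SDK can be used. A minimal sketch, assuming the Broker snap-in name shipped with XenDesktop 7.x (Citrix.Broker.Admin.V2) and a catalog named "Win7 Pooled" (a placeholder for your own catalog name):

# load the XenDesktop Broker snap-in on the Delivery Controller
Add-PSSnapin Citrix.Broker.Admin.V2
# show how the catalog was provisioned (MCS) and whether user changes persist
Get-BrokerCatalog -Name "Win7 Pooled" | Select-Object Name, ProvisioningType, AllocationType, PersistUserChanges
# list the machines MCS created in that catalog
Get-BrokerMachine -CatalogName "Win7 Pooled" | Select-Object MachineName, PowerState, RegistrationState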
Storage and Management
When managing hundreds of Pooled desktops used at the same time, all the differencing disks can potentially grow as big as the base (master) image, and that's a lot of storage. In practice this probably won't happen that often, because when a Pooled desktop is rebooted all changes made to the VM (stored on the underlying differencing disk) are deleted. Still, we need to make sure that we have enough free space available.
When we use Dedicated desktops, we start out the exact same way, but when the VM reboots the writes on the underlying differencing disk are not deleted. The user logs in day after day, keeps making changes to the base (master) image (which are written to the differencing disk), and none of it gets deleted. As the VM is rebooted over and over, the underlying differencing disk keeps expanding and consumes more free space. So these Dedicated machines will consume more storage than Pooled machines.
We also need to manage Dedicated desktops on an individual basis, because with Dedicated desktops we cannot update the underlying base (master) image without destroying the accompanying differencing disk.
Updating the Base Image
For Pooled desktops, when we update the base image we simply point the Pooled desktops to the (new) updated image and reboot the machines.
When we update the base image of Dedicated desktops it works a bit differently. Once a differencing disk is pointing to one of the master or base (clone) images, that link cannot be changed. So when we update the base image, MCS creates a complete new copy or clone. Newly provisioned Dedicated desktops will use this new, updated base image, while the existing Dedicated desktops continue to use the old base image.
So for Dedicated desktops, management is not as easy as it is for Pooled desktops.
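For Pooled catalogs created with MCS, this update can be pushed from the Delivery Controller with the MCS PowerShell snap-in. A minimal sketch, assuming the Citrix.MachineCreation.Admin.V2 snap-in is available, a provisioning scheme named "Win7 Pooled" and a snapshot path valid for your hosting unit (all names are placeholders):

# load the Machine Creation snap-in
Add-PSSnapin Citrix.MachineCreation.Admin.V2
# point the provisioning scheme at the updated master image snapshot; Pooled desktops pick it up at their next reboot
Publish-ProvMasterVmImage -ProvisioningSchemeName "Win7 Pooled" -MasterImageVM "XDHyp:\HostingUnits\Cluster1\Win7Master.vm\Update1.snapshot"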

Conclusion
Pooled desktops: Management is easy and the need for storage is low, but users only get what is on the base image and can't make persistent changes. With Dedicated desktops users can save their work and make changes as they want, but management is harder and they can consume more storage. So most companies prefer Pooled desktops for general use and only offer Dedicated desktops where really needed.

Basics of Citrix NetScaler


There are two NetScaler editions: Citrix NetScaler and Citrix NetScaler Gateway. Although the two seem similar, there are some distinct differences depending on the licenses used.
Citrix NetScaler refers to the Application Delivery Controller (ADC), while NetScaler Gateway, formerly known as Citrix Access Gateway (CAG), is primarily used for secure remote access. It is basically a NetScaler but with limited functionality due to the NetScaler Gateway license we upload. NetScaler ADCs are capable of doing much more than just secure remote access: they can be used for load balancing and HA, content switching, application (SSL) offloading, application firewall, cloud connectivity, hybrid cloud solutions and a lot more.
The NetScaler uses vServers to deliver different services. We can configure multiple independent vServers on the same NetScaler serving different purposes or services, such as load balancing, content switching and SSL offload.
The NetScaler IP Address (NSIP) is the IP address used by the administrator to manage and configure the NetScaler. There can only be one NSIP address, and it is used when setting up and configuring the NetScaler for the first time. The NSIP cannot be removed and can't be changed without rebooting the NetScaler.
The Subnet IP Address (SNIP) is used for server-side connections; that is, a SNIP is used to route traffic from the NetScaler to a subnet directly connected to the NetScaler. The NetScaler has a mode named USNIP (Use SNIP), which is enabled by default; it makes the SNIP address the source address for packets sent from the NetScaler to the internal network.
When a SNIP address is configured, a corresponding route is added to the NetScaler routing table, which is used to determine the optimal route from the NetScaler to the internal network. If the NetScaler finds the SNIP address in the routing table as part of the route, it will pass the network traffic through using the SNIP address as its source address.
A SNIP address is not mandatory the way the NSIP is. If we have multiple subnets, we have to configure a SNIP address for each subnet separately. Also, when multiple SNIP addresses are configured on the same subnet, they are used in a round-robin fashion.
The Mapped IP Address (MIP) is similar to the SNIP. MIP addresses are used when a SNIP address isn't available or when USNIP (Use SNIP) is disabled. Like the SNIP, it is used as the source IP address. The NetScaler adds a route entry to its routing table only when the configured MIP address is the first in the subnet.
The Virtual IP Address (VIP) is the IP address of a vServer that end users connect to, and through which they are eventually authenticated. The VIP address is never used as the source IP, so it is not involved in back-end server communication; that is always handled by a SNIP or MIP address.
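As an illustration of how these addresses come together, here is a minimal NetScaler CLI sketch (run at the NetScaler command line; all IP addresses, the vServer name and the service name are placeholders): a SNIP is added for the internal subnet, USNIP mode is confirmed, and a load-balancing vServer with a VIP is created and bound to a back-end service.

add ns ip 10.0.1.10 255.255.255.0 -type SNIP
enable ns mode USNIP
add service svc_web01 10.0.1.50 HTTP 80
add lb vserver vs_web HTTP 10.0.2.20 80
bind lb vserver vs_web svc_web01

End users connect to the VIP (10.0.2.20 in this sketch), while the NetScaler sources its traffic to the back-end service at 10.0.1.50 from the SNIP.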

An external user contacts the NetScaler Gateway over port 80 or 443 and connects to the externally accessible virtual IP (VIP) address of the NetScaler (Gateway) vServer (step 1, VIP and vServer, in the accompanying diagram). Once a connection is established there are a few options; for example, using a SNIP address the (unauthenticated) user connects to the StoreFront server located on the internal network, where authentication takes place.
If authentication takes place on the NetScaler, the user's credentials are forwarded using the NSIP (step 2, NSIP) to the internal authentication services (AD), where they are validated. We may also add two-factor authentication at this step, for example using SMS passcode tokens: every user then has to fill in a username and password plus an additional auto-generated token code that expires every few minutes, which is far more secure.
Once the user is authenticated, the authentication services pass the user's credentials through to the StoreFront server. The now-authenticated user connects to the StoreFront server (step 3, SNIP), where the user's applications are enumerated. This information then travels back through the NetScaler Gateway vServer to the user's screen (step 4, vServer).
Finally, when the user starts an application, the StoreFront server generates an .ICA file which is sent back to the user's device and is used to connect the user directly to the requested resource on one of the XenDesktop / XenApp servers. During the last phase of setting up this connection the Gateway checks the earlier generated STA ticket to validate the session, after which the application or desktop is launched (step 5, App launch).

Implementing XenDesktop 7.1 with Write Cache and Personal vDisk using PVS 7.1

Prerequisites:
Install XenDesktop 7.1 and configure a Site.
Install PVS 7.1 and configure a farm.
Install and configure vCenter 5.1 and configure it as a hosting connection in XenDesktop Studio.
Create an appropriate security group whose members will be made local administrators.
Create a Windows 7 VM to be used as the Master image and optimize it as per the recommendations.
Create an appropriate Group Policy and link it to the OU that will contain the computer accounts created by the XenDesktop Setup Wizard.
The Write Cache drive is always created as drive D, and the Personal vDisk is created with the drive letter assigned during the Wizard.
1. Add 2 hard drives of different sizes to the VM. For example, Write Cache: 10GB and PvD: 20GB.
2. Log in to the VM and, in Disk Management, initialize the two new drives with the MBR partition type. The two drives will be shown as unallocated disks; do not format them.
3. Mount the PVS 7.1 ISO to the VM and install Target Device.
4. After installing Target Device disconnect the PVS ISO and launch Imaging Wizard.
5. Create a vDisk and optimize the device. The vDisk has now been created in PVS, and a Target Device has been created with the MAC address of the VM.
6. Shutdown the VM and configure to boot from the network first and the hard drive
second.
7. When the VM is powered on, login with the same account to continue the Imaging
Wizard.
8. Once it completes, the Imaging Wizard has copied the contents of the VM's C drive into the vDisk.
9. Now detach the C drive from the VM, so the VM has no C drive.
10. Go to the PVS console and, in the Target Device's properties, change the Boot from setting to vDisk.
11. Power on the VM; you can now see the 10GB Write Cache drive, the 20GB PvD drive and the C drive (vDisk) in Disk Management.
12. Now install XenDesktop 7.1 Virtual Delivery Agent (VDA) for PVS and shutdown
the VM.
13. Detach the XenDesktop 7.1 ISO and login to the VM.
14. Install PvD update 7.1.1, i.e., Personal vDisk 7.1.1 and reboot the VM.
By default, PvD uses two drive letters: V and P. V is hidden and is a merged view of
the C drive with the PvD drive. If drive V is already used, the drive letter can be
changed.
15. Now run the PvD inventory: click Start > All Programs > Citrix > Update personal vDisk, and then shut down the VM.
16. Make a copy of the VM to be on the safe side, and create a template from the VM.
17. In the PVS console, go to vDisk properties and change the Access mode to
Standard image and Cache type to Cache on device hard drive.
18. Right-click the Site and select XenDesktop Setup Wizard.
19. During setup select The same (static) desktop, also select Save changes and store them on a separate personal vDisk, and click Next.
20. Provide the number of VMs to be created by the setup and you can also see the
Local write cache disk and PvD disk.
21. Once you complete the Wizard it will start creating VMs and target devices. You can see the target devices in the Device Collection in the PVS Console and the VMs in the AD OU.
22. In XenDesktop Citrix Studio, you can see the new Machine Catalog that has just been created.

23. Create a Delivery Group with appropriate Machine Catalog, User Groups and the
StoreFront server.
24. Now you can edit the Delivery Group to bring it online according to your requirements.
25. It is now ready for users to log in. Users can customize their desktops, and the customizations persist after a reboot.
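Once the Delivery Group is online, a quick way to confirm that the provisioned desktops have registered with the Delivery Controller is the Broker SDK. A minimal sketch, assuming the Citrix.Broker.Admin.V2 snap-in and a catalog name of "Win7 PvD" (a placeholder):

Add-PSSnapin Citrix.Broker.Admin.V2
# each machine should show RegistrationState = Registered once its VDA has started
Get-BrokerMachine -CatalogName "Win7 PvD" | Select-Object MachineName, PowerState, RegistrationState, InMaintenanceMode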

Step by step procedure for deploying XenApp servers with Provisioning Services
Provisioning Services is used to maintain consistency across all XenApp servers within the farm. PVS is recommended for deploying XenApp servers in large implementations. Because we have a single vDisk for all servers, management and administration of the XenApp farm becomes much easier.
With PVS we should only deploy XenApp session-only member servers, not the first server in the farm or any farm servers that are also Zone Data Collectors.
Procedural steps in deploying XenApp servers with PVS:
1. Install XenApp Master Image server and join it to the XenApp farm with all the
applications.
2. Install the PVS Target Device from the media and shutdown.
3. Launch PVS Service Console in the PVS server and create a new vDisk in the PVS
store.
4. Create a new Device Collection in PVS for the Master Image server.
5. Create a target device in PVS for the XenApp Master Image server. Configure the
target device to boot from Hard Disk in Maintenance mode and assign the blank
vDisk created earlier to the XenApp Master Image target device.
6. In the hypervisor host, change the boot order of the XenApp Master Image VM to
boot from the network adapter. Now boot the XenApp Master Image server; it should perform a PXE boot and retrieve the bootstrap file from PVS. The server will proceed to boot into Windows as normal. Log in and make sure the Virtual Disk status in the System Tray shows activity, which indicates that everything is fine. Now create and add a new disk that will serve as the local Write Cache.
7. Shut down the XenApp Master Image server.
8. Remove the local C drive and create a backup.

9. Convert the backup into a VM template and store it in the hypervisor. The VM template should only have the Write Cache drive attached.
10. Power on the XenApp Master Image VM template, and launch the Citrix XenApp
Server Role Manager to Prepare this server for imaging and provisioning.
11. Now we need to capture the XenApp Master Image to a PVS vDisk.
12. Install XenConvert in the XenApp Master Image server, launch XenConvert and
select options This Machine and Provisioning Services vDisk.
13. Remove the Write Cache disk and select Autofit to automatically adjust the vDisk
space.
14. After conversion process is finished, shut down the XenApp Master Image server.
15. Launch PVS Service console, change the vDisk access mode from Private to
Standard and the cache type to Cache on device hard drive.
16. Launch the Streamed VM Setup Wizard and enter the hypervisor and credentials.
17. Select the VM template which we created earlier, the PVS Collection where the XenApp VMs are to be created, and the XenApp vDisk to assign to them.
18. Specify the number of virtual machines to be created, along with their hardware specification, and provide the Active Directory OU where the accounts will be created.
19. After the build process finishes, we can see the new XenApp VMs in the hypervisor.
20. Now everything is completed; power on the VMs. We should assign new IP addresses if we used a static IP for the VM template.

Explain Provisioning Services Architecture


Provisioning Services works differently from Machine Creation Services in providing resources to users. PVS allows machines to be provisioned and re-provisioned in real time from a single shared vDisk, so administrators manage and update only the master vDisk; in some cases the hard disk can even be removed from the target system itself.
MCS provisioning is all about storage, while PVS relies on the network. The simple process with PVS is: start off with a Master Target Device, capture its disk as a new vDisk and then provision that vDisk to the target devices. In MCS the AD identity (SID) comes from an additional disk; PVS uses its SQL database for this.
After installing and configuring PVS, a vDisk can be created by imaging a hard disk (containing the OS and applications), and this vDisk file is stored on the network. The device used to create the vDisk is called the Master Target Device, and the devices that use the vDisk are called target devices.
Updates and writes with MCS are saved to a Differencing Disk, while writes with PVS
are saved to a Write Cache.
The target devices download a boot file from the PVS and then use that boot file to
start. Based on the device boot configuration settings the appropriate vDisk is located
and then mounted on the PVS server. The application and the desktop OS on the
vDisk are streamed by the PVS server to the target device.
Instead of pulling all of the vDisk contents down to the target device, the data is brought across the network in real time, which dramatically reduces the amount of network bandwidth used and thereby supports a larger number of target devices on the network without impacting overall network performance.
Difference between MCS and PVS

PVS working process


1. A target device powers on and uses TFTP to download the bootstrap file (ARDBP32.BIN), which provides the target device with the connection required to get its vDisk.
2. The target device uses the bootstrap file to request that PVS send the boot sector from the vDisk.
3. PVS accesses the vDisk from the Store (storage) and dynamically merges the boot sector with the SQL Server data to apply the appropriate SID based on the MAC address of the target device.
As the target device starts up, further requests for additional sectors from the vDisk are handled in the same way. With PVS the entire vDisk is not streamed; instead, sectors are sent to the target device as needed.
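For the TFTP download in step 1 to work, target devices are commonly pointed at the PVS server through DHCP options 66 (boot server) and 67 (boot file name). A minimal sketch using netsh on a Windows DHCP server; the DHCP server name, scope and PVS/TFTP address are placeholders:

# option 66: the TFTP/boot server (a PVS server or its load-balanced address)
netsh dhcp server \\dhcp01 scope 10.0.0.0 set optionvalue 066 STRING "10.0.0.20"
# option 67: the PVS bootstrap file name mentioned above
netsh dhcp server \\dhcp01 scope 10.0.0.0 set optionvalue 067 STRING "ARDBP32.BIN"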

What is new in XenApp 7.5?

XenApp 7.5 moves to the FlexCast Management Architecture (FMA), the same as XenDesktop, bringing a conceptually different and more flexible architecture. Here are the differences between XenApp 6.x entities and terminology and their equivalents in a XenApp 7.5 world.
FlexCast Management Architecture
The FlexCast Management Architecture (FMA) is a service-oriented architecture that
allows interoperability and management modularity across Citrix technologies. FMA
provides a platform for application delivery, mobility, services, flexible provisioning,
and cloud management.
FMA replaces the Independent Management Architecture (IMA) used in XenApp 6.x
Elements in the new architecture
Delivery Sites
Farms were the top level objects in XenApp 6.x. In XenApp 7.5, the Delivery Site is
the highest level item. Sites offer applications and desktops to groups of users.
The FMA requires that you be in a domain to deploy a Site. For example, to
install the XenApp servers, your account must have both local administrator
privileges and be a Domain Administrator in the Active Directory.
Session Machine Catalogs and Delivery Groups
Machines hosting applications in XenApp 6.x belonged to Worker Groups for
efficient management of the applications and server software. Administrators could
manage all machines in a Worker Group as a single unit for their application
management and load balancing needs. Folders were used to organize applications
and machines.
In XenApp 7.5 we use a combination of Session Machine Catalogs and Delivery
Groups to manage machines, load balancing, and hosted applications or desktops.
A Session Machine Catalog is a collection of machines that are configured and
managed alike. A machine belongs to only one catalog. The same applications or
desktops are available on all machines of the catalog.

Delivery Groups are designed to deliver applications and desktops to users. A Delivery Group can contain machines from multiple machine catalogs, and a single
machine catalog can contribute machines to multiple Delivery Groups. However, one
machine can belong to only one Delivery Group. You can manage the software
running on machines through the catalogs they belong to. Manage user access to
applications through the Delivery Groups.
Virtual Delivery Agents
The Virtual Delivery Agent (VDA) enables connections to applications and desktops.
The VDA is installed on the machine that runs the applications or virtual desktops
for the user. It enables the machines to register with Delivery Controllers and
manage the High Definition eXperience (HDX) connection to a user device.
In XenApp 6.5, worker machines in Worker Groups ran applications for the user and
communicated with data collectors. In XenApp 7.5, the VDA communicates with
Delivery Controllers that manage the user connections.
The VDA installs on Server OS machines and Desktop OS machines
Delivery Controllers
In XenApp 6.x there was a zone master responsible for user connection requests and
communication with hypervisors. In XenApp 7.5 connection requests are distributed
and handled by the Controllers in the site. XenApp 6.x zones provided a way to
aggregate servers and replicate data across WAN connections. Although zones have
no exact equivalent in XenApp 7.5, we can provide users with applications that cross
WANs and locations. You can design Delivery Sites for a specific geographical
location or data center, and then allow users access to multiple Delivery Sites. App
Orchestration with XenApp 7.5 provides capabilities for managing multiple sites in
multiple geographies.
Citrix Studio and Citrix Director
The Citrix Studio console is used to configure the environment and provide users with access to applications and desktops. Studio replaces the Delivery Services Console and AppCenter from XenApp 6.x. Administrators use Director to monitor the environment, shadow user devices, and troubleshoot IT issues.

Delivering applications
XenApp 6.x used the Publish Application wizard to prepare applications and deliver
them to users. In XenApp 7.5, you use Studio to create and add applications to make
them available to users who are included in a Delivery Group. Using Studio, you first
configure a site, create and specify machine catalogs, and then create Delivery
Groups within those machine catalogs. The Delivery Groups determine which users
have access to the applications you deliver.
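The same publishing flow can be scripted with the Broker SDK. A minimal sketch, assuming the Citrix.Broker.Admin.V2 snap-in and an existing Delivery Group named "Win2012 Apps" (the group name and executable path are placeholders):

Add-PSSnapin Citrix.Broker.Admin.V2
# publish Notepad to users of the "Win2012 Apps" Delivery Group
New-BrokerApplication -Name "Notepad" -CommandLineExecutable "C:\Windows\System32\notepad.exe" -DesktopGroup "Win2012 Apps"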
Database
XenApp 7.5 does not use the IMA data store for configuration information. It uses a Microsoft SQL Server database to store configuration and session information; MS Access and Oracle databases are no longer supported.
Load Management Policy
In XenApp 6.5, load evaluators use predefined measurements to determine the load on a machine, and user connections are matched to the machines with the lowest load. In XenApp 7.5, load management policies are used to balance load across machines.
Delegated Administrators
In XenApp 6.5, we created custom administrators and assigned them permissions
based on folders and objects. In XenApp 7.5, custom administrators are based on
role and scope pairs. A role represents a job function and has defined permissions
associated with it to allow delegation. A scope represents a collection of objects.
Built-in administrator roles have specific permissions sets, such as help desk,
applications, hosting, and catalog. For example, help desk administrators can work
only with individual users on specified sites, while full administrators can monitor
the entire deployment and resolve system-wide IT issues.
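The role and scope pairs can also be inspected from PowerShell with the Delegated Administration SDK. A minimal sketch, assuming the Citrix.DelegatedAdmin.Admin.V1 snap-in is available on the Controller:

Add-PSSnapin Citrix.DelegatedAdmin.Admin.V1
# list the built-in and custom roles with their descriptions
Get-AdminRole | Select-Object Name, Description
# list the defined scopes
Get-AdminScope | Select-Object Name, Description
# list administrators together with their assigned role/scope pairs
Get-AdminAdministrator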
altaddr: specify server alternate IP address
app: run application execution shell
auditlog: generate server logon/logoff reports
change client: change ICA client device mapping
chfarm: change the server farm membership of the server
clicense: maintain Citrix licenses
clrprint: set the number of ICA client printer pipes
ctxxmlss: change the XML service port number
dsmaint: configure the IMA data store
dsmaint backup destination_path: creates a backup copy of the Access database that is the server farm's data store. Make sure the path is correct, or you will get an error.
dsmaint recover destination_path: recovers the latest copy of the Access database
icaport: configure the TCP/IP port number
query: view information about server farms, processes, servers, ICA sessions and users
query farm: shows the server name, protocol and IP address
query farm /app: shows the published applications
query farm /disc: shows the disconnected session data for the server farm
query farm /load: displays server load information
query user: displays the current connections
twconfig: configure ICA display settings

Difference between Machine Creation Services and Provisioning Services
XenDesktop 5 has two different features to provide single image management i.e.,
Machine Creation Services and Provisioning Services.
Machine Creation Services can only provide hosted VDI desktops (pooled or
dedicated).
If we want to utilize a hosted shared desktop model, a streamed VHD model or a
Hosted VDI model then we would require Provisioning Services.
MCS uses built-in technology to provide each desktop with a unique identity and also thin provisions each desktop from a master image. Only changes made to the desktop consume additional disk space.
PVS uses built-in technology to provide each desktop with a unique identity, but also
utilizes a complete copy of the base desktop image in read/write mode. Each copy
consumes disk space, which also increases as additional items are added to the
desktop image by the user.

Create scheduled server reboot policy in XenApp 6.5


1. Create the Worker Groups
2. Add Servers to the Worker groups
3. Create the Citrix Policies
4. Select Reboot Logon Disable Time and click Add.
5. Select Reboot Schedule Frequency and click Add.
6. Select Reboot Schedule Start Date and click Add.
7. Select Reboot Schedule Time and click Add.
8. Select Scheduled Reboots and click Add.
9. Add this policy to the Worker Group created earlier.
10. Check Allow and Enable this filter element
11. Check Enable this policy and Save.
12. From a command prompt on one of the XenApp servers, type: gpupdate /force.
Policies created with the Group Policy Management Console are stored in Active
Directory and are propagated to XenApp servers using group policy processing.
Policies created with the Delivery Services Console are stored in the XenApp data
store and are propagated to XenApp servers by the Citrix IMA service. Policy settings
are stored in the registry on each XenApp server. The Citrix Group Policy Modeling
Wizard can be used to verify policy settings through a resultant set of policies.
Alternatively, you might verify the policy settings on the XenApp server using the
registry editor.
Verify Policy Settings with Regedit:
HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Citrix\IMA\Restart Options
The same settings are also stored under
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Policies\Citrix\IMA\Restart Options
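A quick way to confirm that the policy reached a given XenApp server is to refresh group policy and read those registry keys back, for example:

# re-apply group policy on the XenApp server
gpupdate /force
# read back the reboot schedule settings written by the Citrix policy engine
reg query "HKLM\SOFTWARE\Policies\Citrix\IMA\Restart Options"
reg query "HKLM\SOFTWARE\Wow6432Node\Policies\Citrix\IMA\Restart Options"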
Schedule reboot in Citrix Presentation Server 4.5

Schedule reboot in XenApp 5.0

Citrix XenDesktop Login Process and Ports Used


1. The user submits the credentials to the Citrix Web Interface site (http/https port
80/443).
2. Web Interface passes the user credentials to the Desktop Delivery Controller with
XML service (port 80/443).
3. DDC verifies the user authorization with Microsoft Active Directory (LDAP and GC
port 389/636 and 3268/3269).
4. DDC queries the database for the user's desktop groups and profile information (ports 1433/1434). Now the user sees the desktop groups he has access to.
5. When the user clicks one of the desktop groups, DDC queries (port 80/443) the
hypervisor (ESXi or Hyper-V) about the status of desktops within that group.
6. Controller provides the corresponding desktop to the Web Interface for this
particular session (80/443).
7. Web Interface sends an ICA file (port 80/443) to the online plug-in in the client
machine, which points to the virtual desktop identified by the hypervisor.
8. Citrix client establishes an ICA connection to the specific virtual desktop that was
allocated by the controller for this session (port 1494/2598).
9. Virtual Desktop Agent (VDA) in the Virtual Desktop verifies the license file with
the DDC (port 80).
10. DDC queries Citrix license server to verify that the end user has a valid ticket
(port 27000).
11. DDC passes the session policies to the VDA, which then applies those policies to
the virtual desktop (port 80).
12. Citrix client (port 1494/2598) opens the virtual desktop to the end user.
13. The user's session information and the servers' status are monitored and managed by administrators from Desktop Director and Desktop Studio (PowerShell/RDP, ports 5985/3389) on the management server.
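When troubleshooting this flow, the individual ports can be checked from the relevant hop with PowerShell's Test-NetConnection (available on Windows 8 / Server 2012 and later). The host names below are placeholders for your own servers:

# client to Web Interface / StoreFront (step 1)
Test-NetConnection -ComputerName wi01.example.local -Port 443
# Web Interface to the Desktop Delivery Controller XML service (step 2)
Test-NetConnection -ComputerName ddc01.example.local -Port 80
# client to the virtual desktop, ICA and session reliability (step 8)
Test-NetConnection -ComputerName vdi001.example.local -Port 1494
Test-NetConnection -ComputerName vdi001.example.local -Port 2598
# DDC to the Citrix license server (step 10)
Test-NetConnection -ComputerName lic01.example.local -Port 27000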

Explain Independent Management Architecture (IMA)?


Independent Management Architecture (IMA) provides the framework for server
communications and is the management foundation for MetaFrame Presentation
Server. IMA is a centralized management service comprised of a collection of core subsystems that define and control the execution of products in a server farm. IMA
enables servers to be arbitrarily grouped into server farms that do not depend on the
physical locations of the servers or whether the servers are on different network
subnets.
IMA runs on all servers in the farm. IMA subsystems communicate through
messages passed by the IMA Service through default TCP ports 2512 and 2513. The
IMA Service starts automatically when a server is started. The IMA Service can be
manually started or stopped through the operating system Services utility.
IMA can be defined as a SERVICE, a PROTOCOL and a DATA STORE.
IMA Service: The IMA Service is the central nervous system of Presentation Server. This service is responsible for just about everything server-related, including tracking users, sessions, applications, licenses, and server load.
IMA Data Store: Stores Presentation Server configuration information, such as published applications, total licenses, load balancing configuration, security rights, administrator accounts, printer configuration, etc.
IMA Protocol: Used for transferring the ever-changing background information between Presentation Servers, including server load, current users and connections, and licenses in use.
Ports used by IMA:
2512: Used for Server to Server Communication
2513: Used for CMC to Data store Communication
Independent Management Architecture is a term Citrix uses to describe the various
back-end components that make up a CPS environment. In the real world, IMA
consists of three components that we actually care about.
It is a database (called the IMA Data Store) used for storing Citrix Presentation
server configuration information, such as published applications, load balancing
configuration, security rights, policies, printer configuration, etc.
A Windows service (called the IMA Service) that runs on every Presentation Server
that handles things like server-to-server communication.
A protocol (called the IMA Protocol) for transferring the ever-changing background
information between Presentation Servers, including server load, current users and
connections, licenses in use, etc.
In Presentation Server, the IMA protocol does not replace the ICA protocol. The ICA
protocol is still used for client-to-server user sessions. The IMA protocol is used for
server-to-server communication in performing functions such as licensing and server
load updates, all of which occur behind the scenes.
If we open the IMA data store database with SQL Enterprise Manager, we'll see it has four tables:
DATATABLE
DELETETRACKER
INDEXTABLE
KEYTABLE
The IMA data store is not a real relational database; it's actually an LDAP database. The IMA data store grows by roughly 1 MB per server.
We can't access the IMA data store directly through SQL Enterprise Manager (technically you can, but if you run a query you'll get meaningless hex results). If we try to edit any of the contents of the data store directly in the database, it will almost certainly become corrupted.
There's a tool on the Presentation Server installation CD called dsview, and another tool called dsedit, a write-enabled version of dsview.
There are many reasons why the IMA Service doesn't start:
1. IMA Service load time
2. IMA Service subsystem
3. Missing Temp directory
4. Print spooler service
5. ODBC configuration
6. Roaming Profile
Check the Windows Registry setting:
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\IMA\Runtime\CurrentlyLoadingPlugin
If there is no value specified in the CurrentlyLoadingPlugin portion of the above
Windows Registry entry then the IMA service could not connect to the data store or
the local host cache is missing or corrupt.
If a CurrentlyLoadingPlugin value is specified the IMA Service made a connection to
the data store and the value displayed is the name of the IMA Service subsystem that
failed to load.
If administrators see an IMA Service Failed error message with an error code of
2147483649 when starting the Presentation Server the local system account might be
missing a Temp directory which is required for the IMA Service to run.
Change the IMA service startup account to the local administrator and restart the
server. If the IMA Service is successful in starting under the local administrator
account then it is likely that a missing Temp directory for the local system account is
causing the problem.

If the Temp directory is not present, manually create one under the system root, for example C:\Windows\Temp.
Also verify that the TMP and TEMP system environment variables point to the temporary directory. Restart the server to restart the IMA Service.
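The registry value and the Temp directory discussed above can be checked quickly from a PowerShell prompt on the affected server; a minimal sketch (the system-account Temp location may differ in your environment):

# empty or missing means the IMA service never reached the data store or the local host cache
reg query "HKLM\SOFTWARE\Citrix\IMA\Runtime" /v CurrentlyLoadingPlugin
# confirm the TEMP variable for the current account and the system Temp directory exist
Test-Path $env:TEMP
Test-Path "$env:SystemRoot\Temp"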

What are the different types of Citrix Load Evaluators and how do they work?
1. CPU Utilization
2. Memory Utilization
3. Page Swap
4. Application User Load
5. Context Switches
6. Disk Data I/O
7. Disk Operations
8. IP Range
9. Page Faults
10. Scheduling
11. Server User Load
The QFARM /LOAD command executed in a Presentation Server farm displays all servers in the farm along with each server's respective load value. Every Presentation Server generates its own score and sends this information to the data collector in its zone. This score is a number between 0 and 10,000, with zero representing no load and 10,000 indicating that the particular server is fully loaded and not accepting any more connections. Citrix Load Management is handled by the load evaluator, which is simply a set of rules that determine a particular server's score, or current load value. It is this score that drives the decisions that distribute load within the server farm. Load evaluators can be applied to servers and/or published applications. If any servers in the zone go down, load evaluators are used to handle the situation. A default XenApp installation includes the Advanced and Default load evaluators.
The Default load evaluator includes only two rules: Load Throttling and Server User Load.
The Advanced load evaluator includes four rules: CPU Utilization, Load Throttling, Memory Usage and Page Swaps.

Preferred Load Balancing is a feature of the XenApp Platinum edition which allows you to configure preferences so that particular users get priority when accessing applications in the XenApp farm.
We can see this in the server properties in the Advanced Management Console: under Memory/CPU > CPU Utilization Management there is a third option called CPU sharing based on Resource Allotments.
To give more resources to a particular application on a server, we can configure Application properties > Advanced > Limits and Application importance in the Access Management Console. If you set the Application importance to High, users of that application get more CPU cycles than users accessing other applications.
To give more resources to particular users, we can configure Citrix Policies in XenApp Advanced Configuration: go to the policy properties > Service Level > Session Importance > enable, and assign the preferred Importance Level (High, Medium, Low).

IMA Startup error due to Data Store


The IMA service running is necessary for the discovery of the XenApp
server, XenApp functionality and for normal operability of XenApp. In the registry,
you can see the CurrentlyLoadingPlugin string to identify if the issue is related to the
datastore.
Check if the IMA service is started. If not started see the error message that may help
to find the root cause.
Use the registry editor and look at the following registry string:
HKLM\SOFTWARE\Citrix\IMA\Runtime\CurrentlyLoadingPlugin (on 64-bit systems, HKLM\SOFTWARE\Wow6432Node\Citrix\IMA\Runtime\CurrentlyLoadingPlugin).
If the IMA service fails to start and this value is blank or not present, the IMA service could not connect to the data store or the local host cache is missing or corrupted.
Procedure 1
Check the database connectivity and authentication using the Windows ODBC configuration. Select the System DSN data source and click Add to add a new data source. If we get a connection failed error message, then the database authentication and the db_owner role need to be set on the database in the SQL server. Address the connectivity issue and attempt to restart the IMA service.
Procedure 2
Run the XenApp Role Manager in XenApp 6.0 to reconfigure your XenApp configuration. Select Edit Configuration in the XenApp selection and click Create new server farm or Join an existing server farm. Creating a new database and farm helps confirm whether the existing farm is the problem. Join an existing working farm if you have one, but it is better to create a new farm to address the issue. When the server farm was recreated and configured, the IMA service started and the appropriate registry entries were present.
If there is no connectivity between the SQL server and the Local Host Cache, we need to reconfigure the connection so it works normally again:
1. Run the below command:
dsmaint config /user:username /pwd:password /dsn:dsnfilename
2. Stop the IMA service
3. Recreate LHC. dsmaint recreatelhc
4. Start the IMA service.
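Putting those steps together, a typical recovery sequence looks like the sketch below. It assumes the IMA service short name is IMAService and that the DSN file path and credentials shown are placeholders to be replaced with your farm's values:

# re-point the server at the data store DSN with valid credentials
dsmaint config /user:DOMAIN\ctxsvc /pwd:P@ssw0rd /dsn:"C:\Program Files (x86)\Citrix\Independent Management Architecture\mf20.dsn"
# stop the IMA service before rebuilding the local host cache
net stop IMAService
# recreate the local host cache from the data store
dsmaint recreatelhc
# start the IMA service again
net start IMAService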

Load Evaluators included in XenApp


Default: After a license is added to the server farm, the Default load evaluator is attached to each server in the farm. It contains two rules: Server User Load, which reports a full load when 100 users log on to the attached server, and Load Throttling, which specifies the impact that logging users on has on the server and limits the number of concurrent connection attempts the server is expected to handle.
Load Throttling
Limits the number of concurrent connection attempts that a server handles. This
prevents the server from failing when many users try to connect to it simultaneously.
The default setting (High impact) assumes that logons affect server load significantly.
This rule affects only the initial logon period, not the main part of a session.

The Load Throttling rule can be applied only to a server, not to an individual
application.
Advanced: This load evaluator contains the CPU Utilization, Memory Usage, Page
Swaps, and Load Throttling rules.
We cannot delete the Citrix-provided Advanced or Default load evaluators and can
create new load evaluators based on the rules available. Each server or published
application can have only one load evaluator attached to it.
The qfarm /load command displays the load for all servers in the farm.
The qfarm /app command displays the load for all applications and servers in the farm. A load value of 99999 means no load evaluator is assigned to the application.

What is a Data Store and Zone Data Collector?


This is the place where all the static information is stored. The Data Store provides a repository of persistent information about the farm (farm configuration information, published application configurations, server configurations, static policy configuration, XenApp administrator accounts and printer configurations) that all servers can reference.
The data store is the central repository where almost the entire Citrix configuration lives: the administrators of the farm, the license server to point to, the whole farm configuration, the published applications and all their properties, the security of who gets access to what, the custom load evaluators, custom policies, configured printers and print drivers. All of this is stored in the central repository called the Data Store.
Data Collector stores all the dynamic information like session, load and published
applications in the servers in their zones and communicates the zone information to
the Data Collectors in other zones in the farm.
Data collector is a Citrix Presentation Server whose IMA service takes on the
additional role of tracking all of the dynamic information of other Presentation
Servers. This information is stored in memory and called the dynamic store.
The data store is a database on disk. The dynamic store is information stored in
memory.

To look at the contents of the in-memory dynamic store on the data collector, use the queryds command. QueryDS can be found in the support\debug folder of your Presentation Server installation source files.
To determine which server is acting as the data collector in the zone, run query farm /zone from the command line.
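Both checks can be run from a command prompt on any farm server; a minimal sketch:

# identify which server currently holds the data collector role for each zone
query farm /zone
# list the member servers of the zone with their host IDs and election rankings
queryhr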

Describe ZDC (Zone Data Collector) election process in detail?
In case the ZDC is not available, another server in the zone can take over that role. The process of taking over the role is called a ZDC election. Administrators should choose their Zone Data Collector strategy carefully during the farm design itself. When an election needs to occur in a zone, the winner of the election is determined by the following criteria:
1. Highest version of Presentation Server first
2. Highest rank (as configured in the Management Console)
3. Highest Host ID number (Every server has a unique ID called Host ID).
When the existing data collector for a zone fails unexpectedly, or the communication between a member server and the Zone Data Collector for its zone fails, or the communication between data collectors fails, the election process begins in the zone. If a server is shut down properly, it triggers the election process before it goes down. The servers in the zone recognize that the data collector has gone down and start the election process. The new ZDC is then elected and the member servers send all of their information to the new ZDC for the zone. In turn, the new data collector replicates this information to all other data collectors in the farm.
Note: The Data Collector election process does not depend on the Data Store. If the data collector goes down, sessions connected to other servers in the farm are unaffected. The data collector election process is triggered automatically without administrative interference. Existing as well as incoming users are not affected by the election process, as a new Data Collector is elected almost instantaneously.
C:\> QueryHR.exe
Showing Hosts for 10.22.44.0
Zone Name: 10.22.44.0
Host Name: TEDDYCTX02
Admin Port: 2513
IMA Port: 2512
Host ID: 4022
Master Ranking: 1
Master Version: 1
To see the Host ID number and its version, run the queryhr.exe utility (with no parameters).
Each server in the zone has a rank assigned to it. The administrator can rank the servers in a zone so that a particular server is the most desired to serve as the zone master or ZDC. Ties between servers with the same administrative ranking are broken using the Host IDs assigned to the servers.
When a Presentation Server starts, or when the IMA service starts, the IMA service tries to contact other servers via the IMA protocol on port 2512 until it finds one that's online. When it finds one, it queries it to find out which server is acting as the data collector. The winner of a Zone Data Collector election is determined by the newest version of the IMA service, so we can control which server will act as data collector by keeping that server the most up to date.
Data Collector Election Priority
The election is won by whichever server has the most recent version of the IMA Service running (this may include hotfixes) and, among those, the server with the highest preference set in the data store.
Basically, data collectors and the data store are not really related. The Data Store holds permanent farm configuration information in a database, and the data collector tracks dynamic session information in its RAM.
In addition to their primary role to provide dynamic farm information for admin
consoles or for incoming connection requests, data collectors also take part in the
distribution of configuration changes to Presentation Servers in the farm. When we
make a change on a Presentation Server, that change is written to the local host cache of whichever server we connected to, and then immediately replicated to the data store. Presentation Server only looks for changes in the central data store every 30 minutes. Whenever a change is made to the Data Store, that change is sent to the data collector for the zone.

The Data Collector then distributes that change (via IMA port 2512) to all of the servers in its zone, allowing each server to update its own local host cache accordingly. Furthermore, if we have more than one zone, the initial data collector contacts the data collectors in the other zones. It sends its change to them, and in turn those data collectors forward the change to all of the servers in their zones. The coolest part is that if the change is larger than 64 KB, the data collectors don't send the actual change out to the zone; instead they send out a notification which causes the servers in the zone to perform an on-demand sync with the central data store. However, it's rare for a single change to be more than 64 KB in size.
The data collector election priority settings are in the management console: Presentation Server Java Management Console > right-click the farm name > Properties > Zones > highlight the server > Set Election Preference.
We can control which server is our data collector by manually setting the preferences in the Java console. There are four Zone Data Collector election preference options:
Most Preferred
Preferred
Default Preferred
Not Preferred

The important thing to remember is that these preferences will be ignored if a newer
server is up for election.

What is a Farm and Zone in Citrix?


A Farm is a group of Citrix servers which provides published applications to all users and can be managed as a unit, enabling the administrator to configure features and settings for the entire farm rather than configuring each server individually. All the servers in the farm share a single data store.
A server farm is a grouping of servers running Citrix Presentation Server that can be managed as a unit, similar in principle to a network domain. When designing server farms, keep in mind the goal of providing users with the fastest possible application access while achieving the degree of centralized administration and network security that you need.

A Zone is a subset of a Farm. It is a grouping of Presentation Servers that share a common Data Collector, and zones are very helpful in controlling traffic. The zone's data collector collects data from member servers and distributes changes to all servers in the farm. Each zone in a Presentation Server farm elects a zone data collector, which is responsible for communicating with the other ZDCs in the farm and is used to redirect users to the least busy server. The ZDC maintains all load and session information for every server in the zone. ZDCs keep open connections to the other ZDCs, so changes on the member servers of a zone are immediately propagated to the other ZDCs in the farm. A zone has member servers and one of them is the ZDC (Zone Data Collector); these ZDCs communicate between zones. We can move servers between zones, but after moving servers from one zone to another the servers must be restarted to pick up their settings and configuration from the Data Store.

Troubleshooting Client Drive Mapping

This article covers many common client drive mapping questions and issues, along with their respective explanations or resolutions.
Client Drive Mappings Do Not Create For Any User

If the ICA-tcp port properties are set to Inherit User Config, make sure the Active Directory profiles for the users having the issue have the Connect client drives at logon box checked (which is the default setting).

Ensure the option to disable client drive mappings on the ICA-tcp listener in Terminal
Services Configuration is not enabled. A Group Policy may gray out the check box selection.

Removable drives must be inserted / attached to the client computer before the ICA connection is made. After the removable drive is inserted / attached, ensure the client is not reconnecting to a disconnected session and that the drive is not being restricted by a policy.

For Windows 2000 and 2003 Terminal Server installations, ensure the following registry entry exists and that the process wfshell.exe is running inside the session:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Key Name: AppSetup
Value: Cmstart.exe
CTX983798 What Does the CMSTART Command Do?

Ensure the Client Network service is started. Do not attempt to restart the Client Network service when there is an existing ICA connection to the server. If the Client Network service does not appear within Services, verify that the key CdmService, and its subkeys Enum and NetworkProvider, along with their values, are present under:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\
Check another working server for the proper registry settings.

Ensure the RPC Service is started.

Ensure that Client Network is visible under Network Neighborhood. If it is not, follow the steps listed below:
a. Start Registry Editor (Regedt32.exe) and go to the following key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\NetworkProvider\Order
If the value for ProviderOrder contains only LanmanWorkstation, add CdmService so that the value reads CdmService,LanmanWorkstation.
b. For Presentation Server 4.5, ensure the path defined under the CommonFilesDir value in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion is correct.
c. Restart the server.

Ensure Cdmprov.dll is in the \system32 directory.

Ensure Microsoft files Mpr.dll, the Multiple Provider Router dll, and Mup.sys
(the Multiple UNC Provider driver) are present.

Does drive mapping fail for the administrator? If not, ensure users have
sufficient rights to the dlls, exes, and registry settings outlined in this section.

Does the command chgcdm /default work?

Does the command net use * \\client\c$ work? If it does not, a System
Error 67 appears.

Is a local Windows 2000/2003 policy, Strengthen default permissions of global system objects, disabled? If so, enable this policy or apply Citrix hotfix XE104W2K3R01 / MPSE300W2K3R03 or the operating system equivalents. Citrix Presentation Server 4.0 includes the fix.

Check the event log for CDM error messages.

Can a similar function be performed in a Microsoft network scenario?

Verify that the Cdm.sys file is in the \Program Files\Citrix\System32\drivers directory.

For Terminal Server 4.0 installations, check to see if the following registry
entry exists:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon
Key Name: Userinit
Value: Ctxlogon.exe

If using Web Interface, does the template.ica or default.ica file have a value of
CDMAllowed=Off (for Presentation Server Client version 9.x or earlier) or
CDMAllowed=False (for Presentation Server Client version 10.x or later)?
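For example, search the whole .ica file for a line such as the following (the section in which it appears can vary between Web Interface versions, so a full-file search is the safer approach):
CDMAllowed=Off
or, for version 10.x and later clients:
CDMAllowed=False
If the value is present, client drive mapping is being turned off by the Web Interface configuration.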

CTX117481 Manually Mapped Client Drives are not Mapped when Reconnecting to a Disconnected Session

CTX113480 Error: Cannot copy (file name): Invalid MS-DOS Function when using Client Drive Mapping and Files Larger than 2 GB

CTX103825 Changes to Client Drive Files are not Immediately Updated

CTX121110 Client Drive Mapping Fails if Symantec Endpoint Protection is Installed

CTX124356 How to Enable Read-only Client Drive Mapping and Clipboard Mapping for XenApp Feature Pack 2

Configuring Server Drive Letters For Client Drive Mapping


The Citrix XenApp plugin/ICA Client supports client drive mapping functionality.
Client drive mapping allows users logged on to a XenApp server from a client device
to access their local drives transparently from the ICA session. Client devices can
transparently access files contained on the local machine, and data can be cut and
pasted between local and remote sessions using the clipboard. During the initial
installation of XenApp, the administrator is prompted to modify the server drive
letter assignments to avoid conflicts with user drive letters (except with Windows
Server 2008, where drive remapping is not supported).
CTX457309 MetaFrame/Presentation Server Drive Remapping Best Practices
The default drive letters assigned to client drives start with V and work backwards,
assigning a drive letter to each fixed disk and CD-ROM drive. (Floppy drives are assigned
their existing drive letters.) This method yields the following drive mappings:

Client drive letter    Is accessed by the Citrix server as
A                      A
B                      B
C                      V
D                      U
If the Citrix server drive letters do not conflict with client drive letters, the client
drive letters can be accessed with their existing drive letters. So that the Citrix server
drive letters do not conflict with the client drive letters, you need to change the server
drive letters to higher drive letters. For example, changing Citrix server drives C to M
and D to N allows client devices to access their C and D drives directly.
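As a sketch, the Citrix driveremap utility can perform this kind of change; for example:
driveremap /drive:M
remaps the server drives so that they begin at drive letter M, and a server restart is normally required afterwards.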
How to Map Client Workstation Network Drives in an ICA Session
Use the Net Use command in a logon script to map client network drives, even when
the Citrix Management Console policy is enabled. For design and performance
reasons, if the client-mapped network drive is accessible on the network from the
Citrix server, Citrix prefers that you do not follow the solution below and that the
network drive instead be mapped in a regular Windows NT logon script.
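A minimal sketch of such a regular logon script entry is shown below; the server and share names are hypothetical placeholders:
net use H: \\fileserver1\userdata /persistent:no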
The following points are valid for all versions of XenApp.

During logon, the ICA Client informs the server of the available client drives,
COM ports, and LPT ports.

Client drive mapping allows drive letters on the Citrix server to be redirected
to drives that exist on the client device; for example, drive H in an ICA user
session can be mapped to drive C of the local computer running the Citrix ICA
Client. These mappings can be used by File Manager or Explorer and your
applications just like any other network mappings. Client drive mapping is
transparently built into the standard Citrix device redirection facilities. The
client's disk drives are displayed as share points to which a drive letter can be
attached. The Citrix server can be configured during installation to
automatically map client drives to a given set of drive letters. The default
installation mapping assigns drive letters to client drives starting with
V and works backwards, assigning a drive letter to each fixed disk and CD-ROM
drive. (Floppy drives are assigned their existing drive letters.)

You can use the net use and change client commands to map client devices not
automatically mapped at logon.
Here is the command and syntax:
net use y: \\client\c$
where y is the drive letter in a session and c is the client drive letter you want
to map.
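The change client command can be used in a similar way. As a sketch (verify the exact syntax on your server with change client /?):
change client /view
lists the client devices available for mapping, and
change client v: c:
maps client drive C to drive V in the session.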

Presentation Server 4.0 with Hotfix Rollup Pack 1 automatically maps Network
Drives. [From PSE400W2K3R02][#127532]: Network drives for client devices
incorrectly map automatically as local client drives.
How to Disable Specific Client Drive Mappings such as the A: drive
Perform the following steps:

Open the Module.ini file in a text editor (for example, Notepad) on the client
device. In most cases, this file is in the \Program files\Citrix\ICA client
directory.

Add the following entry to the end of the [ClientDrive] section:
DisableDrives=A,D,F

Save the changes and exit the text editor.

Restart the ICA Client and establish a connection to the Citrix server.

This entry prevents the client side drive letters A, D, and F from being mapped. The
entry is not case-sensitive. If someone attempts to map a disabled drive through
the client network within an ICA session (that is, net use * \\client\D$), the
following error message appears: System Error 55 has occurred. The specified
network resource is no longer available.
The same restriction can be applied to an .ica file (used with published applications)
by adding DisableDrives= in the [Wfclient] section. Again, use a text editor to make
this change.
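For example, the relevant portion of the .ica file would look similar to this (the drive letters are illustrative):
[WFClient]
DisableDrives=A,D,F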
Another solution is to enable a policy through the management console.
How to Map Only One Client Drive at Logon

From Terminal Services Configuration, double-click your connection type.

Select Client Settings.

Clear Inherit user config.

Clear Connect Client drives at Logon.

Click OK.
Note: Do not select Disable Client Drive Mapping; this will disable all future
client drive mappings.

Create a logon script (.bat file) in the following format:
net use y: \\client\c$
where y is the drive in a session and c is the client drive you want to map.
Note: This does not permanently disable clients from mapping another drive
when they are logged on.

How to Map Client Drives in Ascending Order


By default, when server drives are not remapped (C and D) or the
initialclientdrive registry value mentioned above is set, client drives are mapped in
descending order. See Configuring Citrix Server Drive Letters for Client Drive Mapping
for more information. The methodology explained in How to Map Only One Client Drive at
Logon can be used to create the mappings in ascending order, as in the sketch below.
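A minimal sketch of such a logon script, assuming the client C and D drives should appear as G and H (arbitrary example letters):
@echo off
rem Map client drives to ascending drive letters
net use g: \\client\c$
net use h: \\client\d$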
How to Make the Server Drives Appear as a Client Drive When Using the
Pass-Through Client
From the 6.20.986 ICA Win32 Client ReadMe:
Client drive mapping on the pass-through client was restricted to the drives on the
client device. The client could not map local or network drives configured on the
MetaFrame server in a pass-through session.
Local or network drives configured on the MetaFrame server can now be mapped by
the pass-through client.
For version 9.xx, open the Module.ini file in a text editor and add the following line
to the [ClientDrive] section of the file: NativeDriveMapping=TRUE
For version 10.xx

Run Regedit.

Navigate to:
HKEY_LOCAL_MACHINE\SOFTWARE\Citrix\ICA
Client\Engine\Configuration\Advanced\Modules\ClientDrive

Create the Reg Value: NativeDriveMapping
Reg Type: REG_SZ
Add the Value: True

When this flag is set, the client drives on the client device are not mapped and
are not available. The drives configured on the MetaFrame server are mapped
and are available to the pass-through client.
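As a command-line alternative to the Regedit steps above (a sketch; run it on the client device hosting the pass-through client), the value can be created with:
reg add "HKLM\SOFTWARE\Citrix\ICA Client\Engine\Configuration\Advanced\Modules\ClientDrive" /v NativeDriveMapping /t REG_SZ /d True /f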
CTX126763 Client Drive is Not Mapped Using ICA Client Version 12 as Pass-Through Client

The following sample logon script waits for the client drive (V: in this example) to be connected by the redirector before launching Explorer; adjust the drive letter and marker file to suit your environment:
@echo off
rem *
rem * Wait on the redirector to connect the client drive.
rem * In this case, we are using the V: drive as the client C:.
rem * We also need something to look for on the client drive.
rem * Adjust the settings accordingly.
rem * echo Connecting
:Delay
DIR %homedrive% /w > V:\tag.txt
IF EXIST V:\tag.txt GOTO :Connected
goto :Delay
:Connected
DEL V:\tag.txt
START /NORMAL /WAIT Explorer.exe
Files saved to a client drive appear to save successfully, but the file is corrupt or the
saved file reports an invalid memory location
If the client drive or disk does not have enough space, the file copy completes but the file
is truncated, or the file does not copy at all and an invalid memory location error appears.
Client drive content may disappear in Windows Explorer and at a
command prompt when applications open more than 20 file handles
Add the MaxOpenContext entry (shown in the example below) to the Module.ini [ClientDrive]
section. The Module.ini is in the \Program Files\Citrix\ICA Client directory.
MaxOpenContext = (a number ranging from 21 to 1024)
Example:
[ClientDrive]
DriverName = VDCDM30.DLL
DriverNameWin16 = VDCDM30W.DLL
DriverNameWin32 = VDCDM30N.DLL
MaxWindowSize = 6276
MaxRequestSize = 1046
CacheTimeout = 600
CacheTimeoutHigh = 0
CacheTransferSize = 0
CacheDisable = FALSE
CacheWriteAllocateDisable = FALSE
MaxOpenContext = 50
DisableDrives =
Note: The default is 20 file handles per drive. If it becomes necessary to increase this
number, it is possible there is a handle leak with the applications accessing the client
drives.
Cannot Save Word97 Docs with Long Filenames to Citrix Drive A:
When the File Open or Save As dialog box is opened, Word brings up the last drive
letter used. If that drive was a remote share, Word starts a search for the correct
remote share at drives C through Z, because drive letters A or B are not usually
referenced as network shares. If Word cannot find the correct remote share, it makes
a new connection with a NULL local drive name.
Saving Long Filenames with the DOS Client
The standard 8.3 format must be used when saving to local drives with the ICA DOS
Client. The Citrix server does not physically write the file; rather, the file is sent to the
ICA DOS Client, which writes it. Thus, the ICA Client cannot write a long
filename, because the DOS operating system does not support long filenames.
Internet Explorer 5.0 saves HTML pages with all images by creating its own
directories and file names. These file names are long file names that are not
compatible with the DOS Client.
