
Compelling Use Cases for GPU-Enabled Hosted Virtual Desktops




22 July 2014 G00248856
Analyst(s): Nathan Hill | Mark A. Margevicius | Sergis Mushell

GPU-enabled HVD technology continues to develop promising, high-value use cases for IT leaders who deliver CAD and multimedia-
intensive solutions. Mainstream adoption is hampered by fragmented shared-architecture options, which must consolidate and mature to
drive buyer confidence.

Additional Perspectives

"A Compelling Use Case for Server-Based, GPU-Enabled Technology for Oil and Gas Upstream Modeling Suites" (http://www.gartner.com/document/code/271054?ref=grbody&refval=2805717) (30 October 2014)

Impacts

Hosted virtual desktop (HVD) solutions using pass-through graphics present IT leaders with a viable computer-aided design/computer-aided manufacturing (CAD/CAM) workstation replacement and are a good starting point for deploying graphics processing unit (GPU)-powered virtual desktops.

IT leaders' adoption of GPU shared models will be limited until standards emerge that deliver the right performance levels and support
necessary graphics APIs.

Low user density and high infrastructure costs make GPU-accelerated desktops too expensive for IT leaders to justify for most
mainstream use cases in the short to medium term.

Recommendations

Deploy GPU-enabled servers for high-value HVD workloads where graphics performance is essential, and where performance and productivity needs outweigh infrastructure cost.

Only proceed with shared-GPU architectures if the business case is clear and compelling because those architectures are currently
proprietary to the software stack (specifically, the vendor hypervisor), thus creating implications for performance, compatibility,
scalability and management.

Assess whether GPUs are necessary for more cost-sensitive task and knowledge worker desktop replacement use cases.

Analysis
To avoid any ambiguity, there are three main use cases for server-based GPUs:

To provide enhanced graphics capabilities for HVDs, which is the focal point of this research note


GPU acceleration for high-performance computing to enable CPU offload and to reduce processing time, especially for complex
computation tasks

For delivery of remote graphics capabilities, including online gaming and graphic-rendering farms for the media and entertainment
industry

These separate use cases are often confused; although the HVD architecture can be used in remote graphics scenarios (such as online gaming), it is only one of many architectures that can serve this use case.

HVDs have not been a viable replacement for PCs and workstations where GPU performance is needed. Standard HVD deployments leverage the CPU of the server platform to provide a software-emulated GPU. Although this may be fine for standard, two-dimensional office applications, it's rarely sufficient when graphical workload demands increase. This has been particularly challenging when adopting HVDs, where even a small percentage of demanding applications may make the architecture unviable.

In "Virtual GPUs Power Multimedia for HVD Environments" (http://www.gartner.com/document/code/235784?


ref=grbody&refval=2805717) (published in 2012), Gartner analyzed the arrival of server-based GPUs that were designed to power virtual
desktops to meet more challenging workloads as well as to remove some key barriers limiting mainstream adoption of the HVD technology.

Developments in the last two years have highlighted that high-end designer use cases (the use of software design products such as Autodesk and Catia, especially in the automotive and aeronautics industries) are where organizations are focusing their attention.

In "Migrating CAD Applications to HVDs Presents Barriers to Use" (http://www.gartner.com/document/code/235893?


ref=grbody&refval=2805717) (also published in 2012), Gartner concluded that the technology was not ready for the most demanding of use
cases. However, proof of concept (POC) activities and production deployments are demonstrating that this is no longer the case. Indeed,
Nvidia, the principal pioneer for server-based, shared-GPU technology, has seen a tenfold increase in POC activity from February 2013
through February 2014. Although such a dramatic change is not reflected at the same level with Gartner client inquiry on the topic, the
volume of interest has increased significantly.

In this research, we review which server-GPU-enabled use cases are compelling for adoption today and which hold promise for the future.
See Figure 1 for the impacts and top recommendations concerning those use cases.

Figure 1. Impacts and Top Recommendations for Compelling Use Cases for GPU-Enabled Hosted Virtual Desktops


Source: Gartner (July 2014)

HVD solutions using direct (pass-through) GPU present IT leaders with a viable CAD/CAM
workstation replacement and are a good starting point in deploying GPU-powered virtual desktops
Assigning a GPU to an HVD virtual machine (VM) is not a new concept. The issue with this approach has primarily been that it is a high-cost, very low-density solution bound by the physical limitations of the host, or that it provides a limited set of capabilities with significant overhead when translating virtual GPU calls to physical ones at the hypervisor level.

The introduction of multi-GPU Peripheral Component Interconnect Express (PCIe) server cards, such as Nvidia GRID, and the ability to
deploy multiple cards per server have allowed organizations to increase GPU density, and thus users per server, making this architecture
more viable for high-value design use cases.

Nvidia, with its GRID technology, is not the only vendor offering GPU-based solutions. Advanced Micro Devices (AMD) has a range of server GPUs called FirePro (specifically, the S7000, S9000 and S10000 graphics cards) that are targeted at server-based deployments. These cards support certain virtual deployment options, including Microsoft RemoteFX sharing of a GPU across server-based computing sessions and direct GPU pass-through to power individual VMs with dedicated GPU hardware support. Despite this, much of
the marketing, affinity and development of HVD GPU architectures have been led by Nvidia, resulting in strong growth of its OEM
partnerships with vendors such as Dell, HP, IBM and Cisco, with much attention focused on its GPU compute density. The most recent and
potentially transformational announcement is the entry of Intel into the GPU-enabled server market. On 7 May 2014, Intel announced the
introduction of its Iris Pro graphics processors into Xeon E3 server chips, due for release later in 2014.


Although most IT leaders still run CAD applications on dedicated workstations and blade servers, Gartner sees increasing activity in
assessing migration of design use cases into an HVD environment. These leaders want to take advantage of what HVDs can offer, including
flexibility for delivery of workspace and applications, reduced total cost of ownership (TCO) through effective and efficient operations and
management, and reduced support complexity for desktops in geographically dispersed locations. Most important of all is better security and data protection through centralization, which allows organizations to reduce their exposure to intellectual property theft of high-value designs while still allowing highly available and flexible access to those designs for editing and viewing by internal employees and external contractors.

One important point for this use case is that design datasets can be huge. Consequently, centralization can significantly improve
performance compared with distributed workstations that have to pull vast datasets over a distributed network. The performance of display
rendering shouldn't be ignored, especially for multimonitor, high-resolution visuals that can challenge all of today's remote display
protocols and be particularly bandwidth-hungry, with limited latency tolerance. However, the net effect is a positive shift in performance
and, most importantly, end-user experience and productivity. In essence, these use cases are not the traditional cost-sensitive desktop
deployment scenario.

This technology presents an attractive use case, but we shouldn't forget that the performance requirements of different design applications
vary considerably. Some organizations continue to leverage dedicated workstation solutions because they deliver best-in-class
performance. It is not uncommon for designers to have workstation configurations with multiple processors, 8GB to 16GB of RAM,
multiple solid-state drives (SSDs) and multiple discrete graphics cards. These configurations provide an environment that is highly
optimized and responsive to the needs of those users; provisioning anything else would be a downgrade in performance and functionality.

Understanding the performance requirements of designers, the configuration options for server-based GPU solutions and the business case
for adopting a centralized architecture is fundamental to ensuring a successful deployment. For example, Nvidia's higher-end K2 GRID
card provides two high-performance Kepler GPUs, with 1,536 GPU (Compute Unified Device Architecture [CUDA]) cores and 4GB graphics
double data rate 5 (GDDR5) frame buffer memory per GPU. A server configuration with three K2 cards, assigning each VM two CPU cores, 8GB RAM and a pass-through GPU that provides Quadro K5000-equivalent performance, could possibly support six demanding users on a single platform.
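As a minimal sketch of the GPU-bound arithmetic behind that six-user estimate (in Python; the host CPU and RAM figures are illustrative assumptions, since the text only specifies the per-VM profile and card count):

```python
# Back-of-the-envelope sizing for a pass-through (one GPU per VM) configuration.
# Host core and RAM figures are assumptions for illustration; the per-VM profile
# and card count follow the example above. Not vendor-validated sizing guidance.

K2_GPUS_PER_CARD = 2          # Nvidia GRID K2: two Kepler GPUs per PCIe card
CARDS_PER_SERVER = 3          # example server fitted with three K2 cards
VCPUS_PER_VM = 2              # per-VM profile from the example
RAM_GB_PER_VM = 8

HOST_CORES = 20               # assumed two-socket, 10-core-per-socket host
HOST_RAM_GB = 192             # assumed host memory

# With pass-through, each VM owns a whole physical GPU, so GPU count caps density.
gpu_limited_vms = K2_GPUS_PER_CARD * CARDS_PER_SERVER    # 6
cpu_limited_vms = HOST_CORES // VCPUS_PER_VM             # 10
ram_limited_vms = HOST_RAM_GB // RAM_GB_PER_VM           # 24

print(min(gpu_limited_vms, cpu_limited_vms, ram_limited_vms))  # -> 6 (GPU-bound)
```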

Testing is essential to avoid bottlenecks at any point in the infrastructure. However, that is easier said than done when
different model complexities and different design package features (for example, shading and realistic rendering options) can generate very
different resource requirement profiles. Despite the shared platform complexities and considerations, the increased compute density for a
high-performance workspace is attracting organizations to the model.

Both VMware and Citrix support direct pass-through architectures for HVD, where native GPU vendor software drivers are installed in the VM, and where the hypervisors allow direct GPU hardware calls from the VM. In doing so, those vendors create a hard dependency between the platform and the workload, creating limitations in the higher-level functions of the virtualization stack (for example, when creating a persistent virtual desktop with VMware's Virtual Dedicated Graphics Acceleration [vDGA] direct-access architecture, vMotion, high availability [HA] and Distributed Resource Scheduler [DRS] functionality cannot be used). The upside is that this model supports all standard APIs, including DirectX, Open Graphics Library (OpenGL) and, with Nvidia cards, even CUDA, which may be essential to accelerate performance for certain applications.

Recommendations:

Deploy GPU-enabled servers for high-value HVD workloads where graphics performance is essential, and where performance and productivity needs outweigh infrastructure cost.

Start your adoption with dedicated GPU pass-through models because these provide the highest GPU performance, have the simplest
architecture, present GPU vendor choice (Nvidia and AMD) and use native GPU drivers, which reduce vendor support challenges.

Ensure that the performance needs of designers are matched by the VM resource profile that is created, understanding that these
performance needs can vary significantly between design packages and are dependent on design features that are selected within a
package.


IT leaders' adoption of GPU shared models will be limited until standards emerge that deliver the right
performance levels and support necessary graphics APIs
Dedicated GPU pass-through is certainly of interest for more demanding user needs, but can sharing GPUs across much greater numbers
of users provide a good solution to enhancing performance for far more cost-sensitive use cases?

Over the last two years, Gartner has seen several shared-GPU architectures emerge and develop:

Session shared GPU

Hypervisor shared GPU

Emulated/pass-through shared GPU

Figure 2 summarizes the relative performance and the use case applicability for all server-based GPU architectures, including software-
based, the three shared architectures and direct pass-through.

Figure 2. The Performance and Use Case Applicability for Server-Based GPU Architectures

Source: Gartner (July 2014)

Session Shared GPU


The session shared GPU doesn't actually power HVD deployments; rather, it is focused on leveraging Microsoft's RemoteFX software
feature that allows GPU hardware to be shared across Windows Server shared sessions. Consequently, there is a one-to-one, GPU-to-OS
architecture that creates a one-to-many, GPU-to-user architecture. The limitations of server-based computing (SBC), compared with HVD,
still persist, but this architecture does provide improvements over a software-based GPU that has to rely on the CPU in a very inefficient
manner to support graphic computation. The session shared GPU uses native GPU software drivers that are installed on the server, in
common with the pass-through model, but it is more limited in API support unless Citrix XenApp is used to power the SBC architecture.
Based on current capabilities on the Windows Server 2012 R2 platform, Microsoft Remote Desktop Session Host (RDSH) and RemoteApp support DirectX only up to version 11 and offer only limited OpenGL support beyond version 1.1.

It's also worth noting that the session shared GPU sits at the low end of the GPU-enhanced performance range, but it may assist in coping with an application refresh to increasingly graphical and multimedia-rich versions, thereby helping to ensure that SBC remains relevant as a task and process worker application delivery strategy.


The current fragmentation of architecture options has an impact today on downstream infrastructure refresh and deployment decisions.
The refresh cycle for design workstations is typically more frequent than for desktop PCs (two to three years versus three to five years), and
without tight integration and continuing vendor collaboration across the underlying server platform, GPU hardware and the total software
stack, a server-based workstation refresh may become significantly more complex than today's refresh challenges. The good news is that
these vendor partnerships appear to be healthy as the market opportunity reveals itself. The ongoing strength of integration will be largely
dependent on the market opportunity being realized.

Hypervisor Shared GPU


The hypervisor shared GPU is an architecture that VMware has embraced with its Virtual Shared Graphics Acceleration (vSGA)
architecture on vSphere. More powerful than the session shared GPU, this architecture enables the sharing of a GPU across multiple VMs
by having the hypervisor intercept API calls from the VMs. It also ensures that no VM can dominate hardware calls and destabilize the
physical host. The number of users supported by the platform varies by workload and is dependent on CPU, GPU and system memory. The
maximum amount of video memory that can be assigned per VM is 512MB, but memory allocation is divided equally between the GPU
frame buffer and the VM video RAM (VRAM). For example, when using the GRID K1 card (four entry-level Kepler GPUs, each with 4GB
per GPU), if all machines are configured with 512MB, 256MB is reserved on the GPU, giving a maximum theoretical density of 64 users. In
a production environment, the actual number of users varies with workload demands, which ultimately determines the number of users per
card and per server. The same is true when leveraging AMD-based server GPUs.
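As a rough sketch of that arithmetic (in Python, using the figures quoted above; real-world density depends on workload and will typically be lower):

```python
# Theoretical vSGA density on a GRID K1 card, reproducing the arithmetic above.

MB_PER_GB = 1024

K1_GPUS_PER_CARD = 4                          # four entry-level Kepler GPUs per card
FRAME_BUFFER_MB_PER_GPU = 4 * MB_PER_GB       # 4GB of frame buffer per GPU

VRAM_PER_VM_MB = 512                          # maximum vSGA video memory per VM
GPU_RESIDENT_MB_PER_VM = VRAM_PER_VM_MB // 2  # half is reserved on the GPU (256MB)

vms_per_gpu = FRAME_BUFFER_MB_PER_GPU // GPU_RESIDENT_MB_PER_VM   # 16
max_vms_per_card = vms_per_gpu * K1_GPUS_PER_CARD                 # 64

print(max_vms_per_card)  # -> 64 theoretical maximum users per K1 card
```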

A big advantage with the vSGA approach is that it is vendor-GPU-agnostic and allows VMware to ensure the functionality of its vCenter
management stack (including vMotion) because the solution uses VMware drivers at the hypervisor level, not in the individual VMs. This
also means that the solution can fail over to non-GPU-accelerated platforms and use a software-based GPU, which is particularly useful for
business continuity and failover scenarios. This functionality is not available with VMware's vDGA pass-through approach. However, vSGA is significantly less powerful than vDGA and the other shared-GPU HVD solutions. It has limited API support (that is, DirectX 9 and OpenGL 2.1 only) and incurs an API release lag while VMware continues to update its hypervisor drivers after native-driver releases.

Emulated/Pass-Through Shared GPU


Lastly, the emulated/pass-through shared GPU offers a solution that Nvidia has developed jointly with Citrix on its XenServer
virtualization platform. This architecture is often referred to as virtual GPU (vGPU). Here, Nvidia drivers are installed in each VM,
providing better API support and generally more powerful graphic capabilities compared with vSGA.

vGPU enables the sharing of a GPU across multiple VMs by having the hypervisor manage VMs with equal resource time, thereby sharing
GPU frame buffer memory equally across VMs. The amount of frame buffer that is allocated is variable and defines the resource profile for
the VM. A common example given by Nvidia for its K1 GRID cards is 32 VMs per card (the recommended maximum number of VMs per
card), sharing the 16GB DDR3 RAM (four GPUs, each with 4GB per GPU), with each VM receiving 512MB VRAM (equivalent to a K120Q
profile). In a production deployment, it's unlikely that the full 32 VMs can be provisioned per card. What is more likely is a range of
between 20 and 30 VMs; however, as workload demands increase, different profiles, and thus increasing allocation of frame buffer
memory, become appropriate.
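A hedged sketch of that profile-driven arithmetic follows; the 512MB figure matches the K120Q-style example above, while the 1GB and 2GB profile sizes are illustrative assumptions rather than confirmed vendor profiles:

```python
# Profile-driven vGPU density on a GRID K1 card (theoretical maximums only).

MB_PER_GB = 1024
K1_FRAME_BUFFER_MB = 4 * 4 * MB_PER_GB   # four GPUs x 4GB each = 16GB per card

def vms_per_card(frame_buffer_per_vm_mb: int) -> int:
    """VMs that fit on one card when each VM is assigned the given frame-buffer profile."""
    return K1_FRAME_BUFFER_MB // frame_buffer_per_vm_mb

for profile_mb in (512, 1024, 2048):
    print(f"{profile_mb}MB per VM -> up to {vms_per_card(profile_mb)} VMs per card")
# 512MB  -> 32 VMs (the theoretical maximum; 20 to 30 is more realistic in production)
# 1024MB -> 16 VMs
# 2048MB ->  8 VMs
```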

To contrast the GRID cards, each GPU on the K1 has 192 GPU processor (CUDA) cores. Sharing each K1 GPU between eight users therefore delivers far lower performance than a fully dedicated K2 GPU (an effective 24 cores per user versus 1,536 cores). Note that the GPU cores are not actually shared between VMs; rather, the full complement of cores is available to each VM for a time slice that is shared equally between all VMs. Nonetheless, this makes a fundamental point about relative performance for the different GPU architectures.
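Expressed as a quick calculation, the effective per-user comparison made above works out as follows (a simple average only; because of time-slicing, real behavior is burstier than the average suggests):

```python
# Effective per-user core count when a K1 GPU is time-sliced across eight VMs,
# compared with a dedicated (pass-through) K2 GPU.

K1_CUDA_CORES_PER_GPU = 192
USERS_PER_K1_GPU = 8
K2_CUDA_CORES_PER_GPU = 1536

effective_cores = K1_CUDA_CORES_PER_GPU / USERS_PER_K1_GPU   # 24.0
ratio = K2_CUDA_CORES_PER_GPU / effective_cores              # 64.0

print(f"{effective_cores:.0f} effective cores vs. {K2_CUDA_CORES_PER_GPU} dedicated (~{ratio:.0f}x)")
```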

The VMware and Citrix shared architectures have pros and cons that are dependent on an adopter's priorities (for example, management
flexibility, API support and raw performance). For many IT leaders, the split or fragmentation in technology approach is frustrating. These
leaders would like the performance of vGPU on their preferred platform, with full support of existing enterprise virtualization management
features. Indeed, the dominance of vSphere as an enterprise virtualization platform is creating hopes that all these requirements will be
met when VMware launches its vGPU solution with Nvidia in 2015. Limited adoption of both vSGA and vGPU is occurring today.
Nonetheless, Gartner anticipates that much higher levels of adoption may occur if VMware and Nvidia tick all requirements boxes with
their next release, ensure it is compatible with both Horizon View and Citrix XenDesktop brokers, and deliver it at a price point that makes
it equitable not just for design teams, but also for the much larger market opportunity composed of power users and knowledge workers.

Recommendations:


Only proceed with shared-GPU architectures if the business case is clear and compelling because those architectures are proprietary to
the software stack (specifically, the vendor hypervisor), thus creating implications for performance, compatibility, scalability and
management.

Ensure that required technical and functional features are included in the selection process.

Select the right GPU profile for your users' workloads. Application vendors may provide guidance for specific design tools, but ensure
that you test these thoroughly before production release.

Low user density and high infrastructure costs make GPU-accelerated desktops too expensive for IT
leaders to justify for most mainstream use cases in the short to medium term
GPU-enabled HVD can help IT leaders to deliver a desktop experience that is equal to or better than what most PC users encounter today.
Because user acceptance and user experience are such critical success factors with a desktop technology transformation, this is a very
welcome addition to the HVD technology stack.

In moving below the designer and power user categories, it is important to question whether GPU hardware acceleration is needed in task
and knowledge worker deployment scenarios. Another way to pose that question is to ask whether a software-based GPU limits deployment due to
underwhelming performance. For modern server-, blade- and fabric-based infrastructure platforms, IT leaders can use CPU core density to
help guide initial estimates of density. This is only a guide, as Gartner has established in other research; workload demands, memory,
network and storage performance have a bearing on viable production platform density. When we incorporate GPU hardware, we
significantly increase hardware capital spend, but does GPU density take over as a less scalable primary sizing unit?

For a modern two-socket server that has 10 cores per socket, we can make a preliminary assessment of 10 VMs per core or, potentially, 200 VMs on the server platform. The production volume may be somewhat lower, but this serves as a starting point for sizing purposes. For a GRID K1 card that uses the VMware vSGA architecture, there is a maximum of 64 users. For two K1 cards, the maximum is 128 users. Both maximums are likely to reduce with production workloads. However, in both cases, the GPU sharing potential becomes the density limiting factor. Is this a fair comparison? Perhaps not, because GPU-enabled desktops are likely to use two virtual CPUs (vCPUs) or more in their VM profile, thereby skewing the CPU density equation, making both CPU and GPU capacity equally important and complicating the calculation as you move to higher-performance VM profiles. Nonetheless, capital costs increase and density decreases, both acting to increase the cost per user.
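To make the comparison concrete, here is a minimal sizing sketch in Python using the illustrative figures above (production densities will be lower; the point is which resource caps the host first):

```python
# CPU-based versus GPU-based sizing for one host.

SOCKETS = 2
CORES_PER_SOCKET = 10
VMS_PER_CORE = 10                  # preliminary planning figure for software-GPU desktops

K1_CARDS = 2
VSGA_MAX_USERS_PER_K1 = 64         # theoretical vSGA maximum per GRID K1 card

cpu_sized_vms = SOCKETS * CORES_PER_SOCKET * VMS_PER_CORE    # 200
gpu_sized_vms = K1_CARDS * VSGA_MAX_USERS_PER_K1             # 128

# Whichever resource runs out first caps the host, so GPU sharing is the limiting factor here.
print(min(cpu_sized_vms, gpu_sized_vms))   # -> 128
```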

There is a danger in the previous example that we are comparing "apples with pears": a high-performance, GPU-accelerated VM (equivalent to a high-end desktop PC) with a lower-performing, software-based GPU VM (equivalent to a low-end PC). However, as a
mainstream PC technology replacement, will an IT decision maker select this technology if capital costs increase, making the TCO harder to
justify and resulting in an ROI that isn't compelling enough to adopt? For an existing HVD deployment that is underperforming, the
answer could involve incremental investment in GPU technology, but this is wholly dependent on the existing platform supporting GPUs.
For many organizations, this may require a platform refresh anyway.

At the performance end, allocating GPUs directly to VMs will result in drastically reduced density; however, this is a separate use case with
a very different cost profile. The design use case is compelling for many reasons apart from cost, but mainstream PC replacement will
continue to be extremely cost-sensitive.

Gartner has highlighted concerns about the capital investment that is needed to build out an HVD infrastructure within the data center (see
"Toolkit: Hosted Virtual Desktop Infrastructure Planner" (http://www.gartner.com/document/code/234536?ref=grbody&refval=2805717)
). The inclusion of another solution component exacerbates the infrastructure cost issue, and Gartner will look to include GPU
considerations within this toolkit in the future. Indeed, in the interim, we expect vendors in this market to develop dedicated TCO tools.
Certainly, falling server and storage costs may counterbalance new cost elements, such as dedicated GPU cards, but it's likely that adoption
will be limited in the short to medium term.

If GPUs on board Intel processors can deliver a richer virtual desktop solution without the need for discrete graphics cards and can provision equal or greater volumes of virtual desktops than CPU-only systems at a reduced price point, it can only be good news for organizations that are looking at mainstream use case adoption. It strengthens the business case and reduces the risk that specific application performance issues will invalidate the solution; however, it is unlikely to change the fact that the technology needs to deliver value and be compelling on multiple fronts, just like today's best-fit use cases.

Recommendations:

Assess whether GPUs are necessary for more cost-sensitive task and knowledge worker desktop replacement use cases.

Ensure that you review the business case for GPU adoption. It is unlikely to be financially viable for mainstream utilization in the short
term, but GPUs may become standard server components in future platform iterations.

Review GPU-based architectures as a potential remediation for existing deployments that fail to deliver adequate graphics performance.

Acronym Key and Glossary Terms

CAD: computer-aided design
CAM: computer-aided manufacturing
CUDA: Compute Unified Device Architecture
DDR: double data rate (RAM)
DRS: Distributed Resource Scheduler
GDDR: graphics double data rate (RAM)
GPU: graphics processing unit
HA: high availability
HVD: hosted virtual desktop
PCIe: Peripheral Component Interconnect Express
POC: proof of concept
RDSH: Remote Desktop Session Host
SBC: server-based computing
SSD: solid-state drive
TCO: total cost of ownership
vCPU: virtual CPU
vDGA: Virtual Dedicated Graphics Acceleration
vGPU: virtual GPU
VM: virtual machine
VRAM: video RAM
vSGA: Virtual Shared Graphics Acceleration

Gartner Recommended Reading
Some documents may not be available as part of your current Gartner subscription.


"Migrating CAD Applications to HVDs Presents Barriers to Use" (http://www.gartner.com/document/code/235893?


ref=ggrec&refval=2805717)

"Toolkit: Hosted Virtual Desktop Infrastructure Planner" (http://www.gartner.com/document/code/234536?ref=ggrec&refval=2805717)

"Seven Stages to a Successful Hosted Virtual Desktop Rollout" (http://www.gartner.com/document/code/229492?


ref=ggrec&refval=2805717)

"Virtual GPUs Power Multimedia for HVD Environments" (http://www.gartner.com/document/code/235784?ref=ggrec&refval=2805717)

© 2014 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may
not be reproduced or distributed in any form without Gartner's prior written permission. If you are authorized to access this publication, your use of
it is subject to the Usage Guidelines for Gartner Services (http://www.gartner.com/technology/about/policies/usage_guidelines.jsp) posted
on gartner.com. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all
warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such
information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The
opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner
does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders
may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior
managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these
firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on
Independence and Objectivity" (http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp).
