Virtualizing NAS
File storage is getting out of hand and most NAS systems can’t scale to accommodate the growth. It’s time to consider virtualizing your NAS storage. P. 13
ALSO INSIDE
5 ILM is back!
8 Gotta have primary storage dedupe
21 Time for a network tune-up
27 Readers rate their midrange arrays
33 Backup for the 21st century
36 Just say no to storage stacks?
38 RAID is still first line of defense
STORAGE september 2010
inside | september 2010
Virtualizing NAS
13 Traditional file storage typically lacks the scalability most
companies require and results in disconnected islands of
storage. File virtualization can pool those strained resources
and provide for future growth. by JACOB GSOEDL
Vendor Resources
39 Useful links from our advertisers.
Cover illustration by ENRICO VARRASSO
editorial | rich castagna

Information lifecycle management faded into oblivion without getting serious notice. But it’s back with a new name and more realistic goals.

Most shops do care and are taking a hard look at where they put their data. You don’t hear a lot of “ILM” chatter but, hey, that’s exactly what it is. When the idea of ILM rolled around to open systems—hijacked from the mainframe world’s hierarchical storage management (HSM)—more people seemed to be hung up on determining the value of the data
the equivalent of storage Siberia if you don’t know its true worth. But to know all that, you would need to get your business units involved, which is about the time ILM gets laid to rest.
But you can’t keep a good idea down, and ILM is back and being taken
more seriously than ever. Saying “ILM” in public is still a no-no, but whatever
you call it—storage tiering or simply smart storage management—it’s back.
What’s different this time is that we’re focused on the problem. We’re looking
Copyright 2010, TechTarget. No part of this publication may be transmitted or reproduced in any form, or by any means, without permission in writing from the publisher. For permissions or reprint information, please contact Mike Kelly, VP and Group Publisher (mkelly@techtarget.com).
at location, the placement of data, much more closely. We’ve essentially stopped
looking for a perfect solution long enough to consider what might be good enough
or at least expedient.
But that explanation is a little too simple; ILM is back because we have more
choices about where to put things than we did before. Solid-state storage might
be the key catalyst for ILM’s renewal. When solid state began to trickle into
enterprise storage systems, the debate was over
how to determine what applications, if any, were
And now that LTO-5 is here, tape is suddenly cool again. LTO-5’s 3 TB capacity
and 240 MBps throughput (both with compression) definitely reinforce tape’s
status as a bona fide storage tier.
If your storage vendor doesn’t offer some form of automated data movement,
ask when it will. Just as thin provisioning is already entrenched in most enterprise storage systems, and the way data reduction is moving along that same
route, automated tiering will become a basic part of a storage vendor’s system
management set. If it isn’t, then you might want to consider another vendor.
StorWars | tony asaro
help users cope with capacity demands; but more drastic
who claimed they would have an exabyte of data in the next three years. Having that much physical storage in the data center is ultimately untenable. So how do we solve the problem? A big part of the answer will be provided through a number of technologies. Hard disk drives will continue to become denser. Higher capacity disk drives have the ability to store more data within the same given physical space. However, fatter disk drives impact application performance. Therefore, intelligent tiering that enables demotion
of capacity being saved. If you have 10 PB, then we’re talking a savings of
3 PB to 5 PB.
These are great leaps, and I submit that another major leap will be data reduction (data deduplication) for primary storage. The math is simple and the value proposition is a no-brainer. Even moderate dedupe is economically attractive. If your data is consuming 100 TB of disk space and you’re able to cut that in half, you would reclaim 50 TB of capacity. That’s a fairly modest 2:1 ratio, which should be easily achievable. If you were able to get a 5:1 ratio, you’re talking approximately 80 TB of reclaimed capacity. If we consider a petabyte data center, you can save 500 TB on the conservative side (a 2:1 reduction ratio), and 800 TB if you’re more optimistic (5:1 ratio). For 10 PB of data, the result could be a capacity savings of up to 8 PB.
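The arithmetic above can be sketched in a few lines (an illustrative calculation only; the function name is ours, the numbers are the column’s examples):

```python
def reclaimed_capacity(consumed_tb: float, dedupe_ratio: float) -> float:
    """Capacity freed when consumed_tb of data is reduced at dedupe_ratio:1."""
    return consumed_tb - consumed_tb / dedupe_ratio

# The column's examples:
print(reclaimed_capacity(100, 2))      # 50.0 TB reclaimed at a modest 2:1 ratio
print(reclaimed_capacity(100, 5))      # 80.0 TB at 5:1
print(reclaimed_capacity(1000, 2))     # 500.0 TB for a petabyte data center
print(reclaimed_capacity(10000, 5))    # 8000.0 TB (8 PB) for 10 PB of data
```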
The savings are staggering when you consider just the capital costs, but it also drives down your maintenance costs. When you factor in the impact on operations and people resources, the value proposition becomes even more compelling. And if you add all of that to power, cooling and floor space savings, primary dedupe can completely change the IT landscape.

You would expect every storage system vendor to have deployed primary deduplication by now, but there are some significant
There are two storage system vendors that provide primary dedupe today. While both vendors have modest adoption, it certainly isn’t extensive. The reason for this is that their deduplication products have distinct limitations in terms of scalability and performance. However, we’re on the threshold of more and better products coming to market. You’ll see announcements later this year and in 2011, and it will grow from there.
COMING IN OCTOBER

Integrating Cloud and Traditional Backup Apps

10 Tips: Managing Storage for Virtual Servers and Desktops

What Storage Managers Are Buying
STORAGE
Vice President of Editorial: Mark Schlack
Site Editors: Ellen O’Brien, Susan Troy
Creative Director: Maureen Joyce
Senior Managing Editor: Kim Hefner
Features Writer: Todd Erickson
Executive Editor and Independent Backup Expert: W. Curtis Preston
Contributing Editors: Tony Asaro, James Damoulakis, Steve Duplessie, Jacob Gsoedl
Associate Site Editor: Megan Kellett
Editorial Assistant: David Schneider
TechTarget Conferences Director of Editorial Events: Lindsay Jeanloz
Editorial Events Associate: Jacquelyn Hinds
Storage magazine, 275 Grove Street, Newton, MA 02466
Subscriptions: www.SearchStorage.com
editor@storagemagazine.com
Virtualizing NAS

has become one of the top challenges for IT departments. Market data
…between back-end file stores and clients, and providing a global namespace is clearly the most promising approach to tackling the unstructured data challenge. It’s akin to block-based storage virtualization…

Clustered NAS is a third way of virtualizing file access. Unlike scale-up NAS and server-based file stores, a clustered NAS is a system that provides scaling by adding nodes to the cluster. Available in N+1 (single redundant node) or N+M (each node has a redundant node) high-availability configurations, they provide a namespace that spans multiple nodes, allowing access to data throughout all nodes in the namespace. Clustered NAS systems typically start with three nodes and scale to petabytes of file storage by simply adding additional nodes. The clustered file system glues the nodes together by presenting a single file system with a single global namespace to clients. Among the vendors offering NAS systems based on clustered file systems are FalconStor Software Inc.’s HyperFS, Hewlett-Packard (HP) Co.’s StorageWorks X9000 Network Storage Systems, IBM’s Scale Out Network Attached Storage (SONAS), Isilon Systems Inc., Oracle Corp.’s Sun Storage 7000 Unified Series, Panasas Inc., Quantum Corp.’s StorNext and Symantec Corp.’s FileStore.
storage, as well as for the research and educational sector, but they’re usually …

… product into a single namespace, more often than not justifies the additional effort and cost.

File virtualization products that aggregate the various file stores into a single global namespace can be viewed as complementary to scale-out and traditional NAS systems, especially during the extended time of transitioning from legacy file stores. “Many customers buy a NAS to get features like replication, archiving and snapshots, but they don’t require these for all files,” said Brian Gladstein, vice president (VP) of marketing at AutoVirt Inc. “We give them the ability
outside the data path until a migration is required and then switches
to in-band operation.
F5 ARX Series: Acquired from Acopia in 2007 and rebranded as F5
ARX, the F5 ARX series is an inline file-system virtualization appliance.
Usually deployed as an active-passive cluster, it’s located between
CIFS/NFS clients and heterogeneous CIFS/NFS file stores, presenting
virtualized CIFS and NFS file systems to clients. Unstructured data is
presented in a global virtualized namespace. Built like a network
2003, and DFS Replication (DFSR) in Server 2003 R2, Server 2008 and later versions.
Microsoft DFS supports only Windows CIFS shares and has no
provision for bringing NFS or NAS shares into the DFS global namespace.
Furthermore, it lacks a policy engine that would enable intelligent data
movements. As part of Windows Server, it’s free and a good option for
companies whose file stores reside mainly on Windows servers.
…network that links servers to disk arrays. These 10 tips will help you find and fix the bottlenecks in your storage network infrastructure. By George Crump
The first few tips that follow have more to do with being prepared than actually tinkering with your storage-area network (SAN), but all of our experts
agreed that trying to fine-tune a SAN without adequate preparation is like
driving down a freeway without headlights. Before you can roll up your
sleeves and get under the hood, you have to do some preparation. The rest
of our tips go into more detail, describing specific steps (often at no cost)
that you can take to improve SAN performance, efficiency and resiliency.
hadn’t taken the time to do an inventory, this obvious mistake may never
have come to light.
This could be a zero-cost tip because the information can be captured and stored in spreadsheets. While manually keeping track of this information is possible, in today’s rapidly changing, dynamic data center it’s becoming a less practical approach. Storage environments change fast and IT staffs are typically stretched thin, so manually maintaining an infrastructure isn’t realistic. Vendors we spoke to, and many others, have software and hardware tools that can capture this information automatically. Of course, those tools aren’t free or as cheap as a spreadsheet. But if you weigh their cost against the cost of manually capturing the data,
These tools can be used for trend analysis and, in some cases, they
lished, the next step is to figure out what network changes will provide
the most benefit to the organization. You may have discovered SAN
features that need to be enabled, or perhaps you have new applications
or an accelerated rollout of current initiatives that need to be planned.
Knowing how activities such as those will impact the rest of the environment and what role the storage infrastructure has to play in those
tasks is critical. Generally, the goals come down to increasing reliability
or performance, but they may also be to reduce costs.
When you feel you’re at the stage where you’re ready to make changes
ISLs (interconnects between switches) are critical areas for tuning, and as a storage-area network grows, they become increasingly important to performance. The art of fine-tuning an ISL is often an area where different vendors will have conflicting opinions on what a good rule of thumb is for switch fan-in configurations and the number of hops between switches. The reality is that the latency between switch connections compared to the latency of mechanical hard drives is dramatically lower, even negligible; however, in high fan-in situations or where there are a lot of hops (servers crossing multiple switches to access data), ISLs play an important role.
The top concern is to ensure that ISLs are configured at the correct
with it, even through virtual machine migrations from host to host. With
NPIV, you can use your switches’ statistics to identify the most active
virtual machines from the point of view of storage and allocate them
appropriately across the hosts in the environment.
If queue depth is set too low, the ports and the SAN infrastructure itself aren’t used efficiently. When a storage system isn’t loaded with enough pending I/Os, it doesn’t get the opportunity to use its cache; if essentially everything expires out of cache before it can be accessed, the majority of accesses will then be coming from disk. Most HBAs set the default queue depth between 32 and 256, but the optimal range is actually
said they found multipathing isn’t working at all or that the load isn’t
balanced across the available paths. For example, if you have one path
carrying 80% of its capacity and the other path only 3%, it can affect
availability if an HBA or its connection fails, or it can impact application
performance. The goal should be to ensure that traffic is balanced fairly
evenly across all available HBA ports and ISLs.
You can use switch reports for multipath verification. To do this, run
a report with the port WWNs, the port name and the MBps sorted by
the port name combined with a filter for an attached device type equal
to “server.” This is a quick way to identify which links have balanced
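The report-driven balance check described above can be sketched as a small script (illustrative only: the record fields, server names like esx01 and the 2x imbalance threshold are our assumptions, not from the article):

```python
# Sketch of the multipath balance check: group server-attached switch ports by
# port name and flag servers whose paths carry wildly different traffic.
def find_unbalanced_servers(report, threshold=2.0):
    """Return port names whose busiest path carries more than `threshold`
    times the traffic of the quietest path (or where one path is idle)."""
    by_server = {}
    for rec in report:
        if rec["device_type"] != "server":     # filter: attached device == "server"
            continue
        by_server.setdefault(rec["port_name"], []).append(rec["mbps"])
    unbalanced = []
    for name, rates in sorted(by_server.items()):
        if len(rates) < 2:
            continue
        if min(rates) == 0 and max(rates) > 0:
            unbalanced.append(name)            # one path idle: multipathing not working
        elif min(rates) > 0 and max(rates) / min(rates) > threshold:
            unbalanced.append(name)            # load heavily skewed to one path
    return unbalanced

report = [
    {"port_name": "esx01", "mbps": 320, "device_type": "server"},
    {"port_name": "esx01", "mbps": 12,  "device_type": "server"},
    {"port_name": "esx02", "mbps": 150, "device_type": "server"},
    {"port_name": "esx02", "mbps": 140, "device_type": "server"},
]
print(find_unbalanced_servers(report))   # ['esx01']
```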
…tions and ever-shrinking backup windows. They’re also the most likely processes to put a continuous load across multiple segments within the SAN infrastructure. The backup server is the most likely candidate to receive data that has to hop across switches or zones to get to it. All of the above tips apply doubly to backup performance. Also consider adding extra HBAs to the backup server and have ports routed to specific switches within the environment to minimize ISL traffic.
QUALITY AWARDS V: Compellent regains top midrange arrays spot
MIDRANGE MIGHT
But even with such an excellent showing, Compellent must still share at least a little of the Quality Awards spotlight. While not seriously
challenging Compellent’s overall score of 7.12, all eight finalists finished
with overall scores higher than 6.00—the first time that has happened
for midrange arrays and a rarity for any Quality Awards survey.
Hewlett-Packard (HP) Co. rode its EVA and P4000 lines to a strong
recent midrange array survey. With tighter budgets and often urgent needs, storage managers expect sales reps and their support teams to be responsive and well-informed. Just a few years ago on our second midrange survey, the overall average for the sales category was a tepid 5.28, indicating that users’ expectations were likely met but rarely exceeded. This time around, the category average is 6.47, suggesting strong—and probably effective—efforts by vendors’ sales forces.
Compellent picked up its first category win with a 6.81, buoyed … place finisher Hitachi (6.59) was high scorer for two of the category statements (“My sales rep is knowledgeable about my industry” and “My sales rep understands my business”).

ABOUT THE SURVEY: … identify the most reliable products on the market regardless of vendor name, reputation or size. Products are rated on a scale of 1.00 to 8.00, where 8.00 is the best score. A total of 315 respondents provided 497 midrange storage array evaluations.
• Pillar Data Systems Axiom 300/500/600*

… product was installed without any
HP (6.68), which in turn nudged out IBM (6.61) by the same margin.
Compellent’s Storage Center scored the highest on all seven category
statements in the product features category, ranging from a 7.07 (“This
product’s snapshot features meet my needs”) to a 7.36 (“This product’s
remote replication features meet my needs”). By delivering these
“bread-and-butter” features along with its signature Fluid Data auto-
mated tiering, Compellent may be raising the bar a bit for all midrange
systems vendors.
That’s not to suggest that any of the product lines are slackers when it comes to features. The overall average for the category was a 6.64, the highest we’ve seen and substantially higher than the previous mark of 6.33. The average scores for key midrange array requirements were high for all eight products, such as a 6.79 for “This product’s capacity scales to meet my needs,” highlighted by Hitachi’s 7.11 (the only other 7.00-plus score in the category) and Compellent’s 7.31.
OVERALL RATINGS

[Charts: bar charts ranking Compellent, HP, HDS, NetApp, Dell, IBM, EMC and Oracle by rating category, based on a 1.00-8.00 scoring scale; a final chart shows “% Yes” buy-again responses on a 65%-100% scale.]

Products surveyed:
• Dell CX Series or Dell EqualLogic PS Series
• EMC Clariion CX Series
• Hewlett-Packard StorageWorks EVA Series and P4000 Series
• Hitachi Data Systems USP VM or AMS Series
• IBM DS4000/DS5000/DS6000
• NetApp FAS200/FAS900/FAS2000
technical support category that hovered around the lows seen in the sales-force competence category. This survey isn’t any different, with support getting the second-lowest overall category average. But the twist here is that the score is still fairly high at 6.59, led once again by Compellent (7.02). Hitachi (6.71) racked up its second second-place finish, with HP (6.69) hard on its heels with another strong performance. HP finished second or third in all five ratings categories.

The only statement in the support category that Compellent didn’t score top marks on was “Vendor’s third-party partners are knowledgeable”; instead, HP and Oracle (Sun) tied for the lead with a score of 6.72.
and 37% of Oracle respondents said they purchased their systems from
VARs.
Midrange vendors are also delivering on their support promises. One of
Compellent’s two 7.29 category scores was for “Vendor supplies support
as contractually specified,” a statement that all vendors scored well on
for a group average of 6.79 (high in the category). Well-trained support
staffs were also recognized on the survey, with Compellent (7.07), HP
(6.87) and Hitachi (6.80) all standing out for the statement “Support
personnel are knowledgeable.”
DO IT AGAIN

In addition to the specific statements in each rating category, we asked survey respondents a more subjective question: All things considered, would you buy the product again? Over our five surveys for midrange arrays, the responses have been generally positive and very steady, with an average of 77% to 79% saying “Yes” across all product lines. This time, the “buy again” numbers jumped, reflecting the higher category ratings and, undoubtedly, greater satisfaction with the entire class of midrange storage products.
Overall, 89% of respondents said they would take the plunge again with the same product, led by Compellent’s eerily perfect 100%, NetApp and Dell both at 94% and the rest of the field ranging from EMC’s 87% to Oracle’s (Sun) 83%. Not too shabby when it comes to satisfied customers.
…help storage managers meet their backup recovery time objectives (RTOs) by making the first steps—data capture and transfer—simpler and more efficient.
THE FOCUS ON backup modernization during the last few years has been squarely on the backup target device: tapes and disks. That’s where the majority of users have made the most changes. But now that so many users and IT shops have become disk friendly, there’s a new focus on the front end of the backup process: the capture and transfer phase.

In 2004, nearly 60% of Enterprise Storage Group (ESG) survey respondents reported backing up directly to tape. By 2010, only 20% were using tape exclusively…
and stored.
CDP
CDP technology continuously captures changes to data at a file, block or
application level, supporting very granular data capture and recovery options.
It time stamps each write and mirrors it to a continuous data protection
retention log. When a recovery is needed, the CDP engine creates an image
of the volume for the point in time requested without disrupting the production
application.
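As a toy illustration of that mechanism (the class and block-array model here are ours; real CDP engines operate on block devices or via filter drivers and appliances):

```python
import time

class CDPEngine:
    """Toy continuous data protection engine: time-stamps every write and
    mirrors it to a retention log, so a volume image can be materialized for
    any requested point in time without disturbing the production volume."""

    def __init__(self, num_blocks):
        self.volume = [b""] * num_blocks              # "production" volume
        self.retention_log = []                       # (timestamp, block_no, data)

    def write(self, block_no, data, ts=None):
        ts = ts if ts is not None else time.time()
        self.volume[block_no] = data                  # apply to production volume
        self.retention_log.append((ts, block_no, data))  # mirror to retention log

    def image_at(self, point_in_time):
        """Replay logged writes up to `point_in_time` into a fresh image."""
        image = [b""] * len(self.volume)
        for ts, block_no, data in self.retention_log:
            if ts <= point_in_time:
                image[block_no] = data
        return image

cdp = CDPEngine(num_blocks=4)
cdp.write(0, b"v1", ts=100)
cdp.write(0, b"v2", ts=200)
cdp.write(1, b"aa", ts=300)
print(cdp.image_at(250))   # [b'v2', b'', b'', b''] -- the volume as of t=250
```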
Block-level CDP operates at the logical volume level and records every write. This type of continuous data protection stands out at transparent data
REPLICATION
Replication is the bedrock of these strategies and it’s increasingly being used for data protection as a standalone process to provide operational and disaster recovery for applications with tight RPOs or RTOs; as a method of consolidating distributed data for centralized file-level backup; or in conjunction with snapshot or CDP to maintain an off-site copy and facilitate
SOURCE-SIDE DEDUPLICATION
Deduplication identifies and eliminates redundancy, storing only unique data and shortcuts to unique data for duplicates. Data deduplication’s role in optimizing backup processes is fairly well documented; however, the focus has mostly been on target-side deduplication solutions. Source-side deduplication ensures that only changed segments are backed up after the initial full copy. That means significantly less data is captured, transferred and stored on disk. This reduces the time needed to perform backups. Because the backup window requirements are minimal, it’s possible to back up more frequently, which increases the number of recovery points on disk
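That flow can be sketched with fixed-size segments and SHA-256 fingerprints (an illustrative model only; commercial products typically use variable-size chunking and a client-server protocol):

```python
import hashlib

SEGMENT_SIZE = 4096   # fixed-size segments for simplicity

def backup(data: bytes, store: dict) -> list:
    """Split `data` into segments; send a segment to `store` only if its
    fingerprint is unseen, otherwise keep just the shortcut (the hash).
    Returns the recipe of hashes that reconstructs this backup."""
    recipe = []
    for i in range(0, len(data), SEGMENT_SIZE):
        seg = data[i:i + SEGMENT_SIZE]
        fp = hashlib.sha256(seg).hexdigest()
        if fp not in store:          # only changed/unique segments travel
            store[fp] = seg
        recipe.append(fp)
    return recipe

store = {}
first = backup(b"A" * 8192 + b"B" * 4096, store)   # initial full copy: 2 unique segments
second = backup(b"A" * 8192 + b"C" * 4096, store)  # only the changed segment is added
print(len(store))   # 3 unique segments held for both backups
```

A restore simply replays a recipe: `b"".join(store[fp] for fp in first)` rebuilds the first backup byte for byte.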
Storage vendors stacking the deck

Storage vendors have been busy creating server-to-application product stacks. It looks like the type of ploy that will give them …

THERE’S A FUNDAMENTAL SHIFT of titanic proportions taking place in IT. No, I don’t
mean the massive shift toward using disk in favor of tape to protect data. I’m also not referring to the fundamental changes occurring in storage architectures to improve their interaction with virtual server technologies, nor the increased usage of solid-state storage or automated storage tiering. What’s causing this big shift is the crazed passion with which the industry seems to be heading into building proprietary stacks from the server all the
virtualization technology.
Even Hitachi Data Systems, seemingly content being a best-of-breed
high-end and midrange storage supplier, felt it needed to do something. It
reached back to its parent company and announced its own vertical stack
using Hitachi servers, which will have a special console to integrate the
stack. NetApp then went on to do its deal with Cisco and VMware as a
counterpoint to EMC’s moves.
Storage vendors are scurrying to line up partners so they aren’t left out.
The question is if any of this craziness is necessary or warranted. My answer
is a flat “No.” I have the advantage of having seen the minicomputer revolution …
OSes, APIs are available for managing devices and printers work with every
system in the market. In addition, TCP/IP opened up a new world. We’ve finally
arrived at an era where choice matters, where best of breed matters. You still
place your bet on a vendor, but not for everything.
Now it seems we’re heading back to the ’70s. It doesn’t matter who started the “stack war” or who’s partnering with whom. What matters is that your choices are about to be taken off the table. For example, keep an eye on Oracle over the next decade; they now control hardware, database, storage and server virtualization.
I can understand how the vertical stack strategy is in the best interest
of Cisco or Oracle. What I don’t see is why vendors such as EMC, Hitachi
Data Systems, NetApp and VMware would want to play this game. Their
success was built on delivering best-of-breed products and being able to
play with everyone. So why limit yourself by choosing partners?
You will be the final arbiter. You’ll either let the big guys dictate what you’ll
buy or you won’t. It might seem innocuous right now, but it does matter.
I like choices. I like that VMware has Microsoft Corp. and Citrix Systems
Inc. to compete with, and that 3PAR, EMC, Hitachi Data Systems, IBM and
NetApp are contenders for high-end storage.
Ultimately, you’ll vote with your dollars. Don’t forget: It was users who threw out the proprietary stacks a few decades ago. You have the same kind of leverage now, but at an earlier stage in the process. It’s up to you.
Arun Taneja is founder and president of the Taneja Group, an analyst and
consulting group focused on storage and storage-centric server technologies.
He can be reached at arunt@tanejagroup.com.
and nearly 20% juggle four different RAID configurations in their shops. But that’s not to suggest users are totally enamored with RAID, as their two biggest gripes are inefficient use of disk capacity (36%) and lengthy rebuild times (32%); however, 10% of respondents didn’t see any particular shortcomings. RAID appears to be doing its job well: 72% had to perform RAID rebuilds at least once in the last year and although rebuilds took a little while (54% said three hours to 12 hours), 93% reported that they didn’t lose any data. To quote one respondent: “RAID rocks!” —Rich Castagna

[Survey chart: “Rate the following data protection technologies in order of their importance to your company” (least important = 1.0, most important = 5.0); replication scored 2.8.]