

Issue 2

Hyperconverged Infrastructure:
The CxO View
Simple, cost-effective infrastructure for today’s
business climate

In this issue

■■ Welcome Fellow CxO

■■ Research From Gartner: How to Determine When Hyperconverged Integrated Systems Can Replace Traditional Storage

■■ 911 Dispatch Center Improves Emergency Response Times with DataCore™ Hyper-converged Virtual SAN

■■ About DataCore Software


Welcome Fellow CxO

Today’s business climate carries a great deal of uncertainty for companies of all sizes and industries. Unpredictable demand makes it difficult to focus on long-term planning. Instead, companies are looking more short-term and shifting their investments into only their most compelling projects.

There is a strong push to simplify and reduce the costs of IT infrastructure. Server virtualization was supposed to consolidate and simplify IT infrastructure in data centers. But that only “sort of happened”. Companies do have fewer servers, but they never hit the consolidation ratios they expected. Why? In one word: performance.

Surveys show that 61% of companies have experienced slow applications after server virtualization, with 77% pointing to I/O problems as the culprit.

Now, with hyperconverged infrastructure, companies have another opportunity to fulfill their vision of consolidating and reducing the complexity of their infrastructure. But this will only happen if their applications get the I/O performance they need.

DataCore Hyper-converged Virtual SAN is a high-performance, easy-to-use hyperconverged solution that enables companies to massively consolidate their virtualized infrastructure. Unlike other hyperconverged vendors, DataCore offers the world’s fastest hyperconverged solution. Compared to All-Flash Arrays (AFAs), DataCore has been proven to deliver, at minimum:

■■ 345% faster response time, enabling faster reports and queries and accelerating decision insights

■■ 220% more IOPS, enabling more powerful workloads to be accomplished in the same timeframe

■■ 230% better value on price-performance for greater savings

What does all this performance get you? Here are some of the benefits:

■■ Faster applications

■■ A consolidated infrastructure, cutting your CAPEX

■■ Lower operational costs (power, cooling and space efficiency)

■■ Less time spent managing infrastructure

This means companies can run all their applications, even enterprise applications and databases, on the fewest nodes of single-tier infrastructure, providing the highest ROI by minimizing both CAPEX and OPEX.

It’s these kinds of results, and the advances in performance and efficiency due to DataCore’s revolutionary Parallel I/O technology within our hyperconverged solution, that have led to over 30,000 customer deployments globally and 96% of CxOs surveyed stating they recommend DataCore.

Sincerely,

George Teixeira
President and CEO, Co-founder

Research From Gartner

How to Determine When

Hyperconverged Integrated Systems
Can Replace Traditional Storage
Hyperconverged integrated systems are all the IT rage these days as vendors tout “data center in a box” benefits. This research will help I&O leaders distinguish the differences between HCIS and traditional storage deployment strategies, and will provide them with selection guidelines.

■■ Hyperconverged integrated systems’ (HCIS’)
current lack of integration with existing traditional
infrastructures causes I&O leaders to position it as
silo deployments within enterprise data centers.

■■ As a result of workloads and economic analyses,

I&O leaders often deploy HCIS alongside existing
server/storage infrastructures, resulting in an
additional data center platform.

■■ Most users do not go through formal HCIS

benchmarking and technical evaluation processes to
uncover differences in storage design and hardware
implementation that result in unquantified HCIS
storage efficiency, performance and ownership costs.

■■ The migration of existing workloads onto HCIS is likely to require I&O leaders to update their existing vendor agreements, SLAs, data center design, backup and disaster recovery strategies, and staffing and organization responsibilities.

■■ Deploy HCIS to either consolidate all of the
midsize data center and remote office/branch
office (ROBO) workloads, or to address the specific
need for self-contained, high-impact workloads
such as VDI or virtual server infrastructure in large
enterprise data centers.

■■ Integrate HCIS as a new platform deployed to support well-defined, well-matched workloads and not as a one-size-fits-all server/storage alternative.

■■ Create HCIS software-defined storage (SDS) evaluation criteria, a test plan, and an analysis tool that assigns heavier weighting to: data reduction ratio; performance and scalability (all for the worst-case scenario); customer support capabilities; and the HCIS vendor’s overall supported ecosystem.

■■ Create impact analyses of switching from traditional storage to HCIS based on vendor proposals and bids in the areas of procurement, facilities, networking, security, backup and disaster recovery, and future technology deficits.

Strategic Planning Assumption

By 2019, more than 50% of the storage capacity installed in enterprise data centers will be deployed with SDS or HCIS architectures based on x86 commodity hardware systems, up from 10% today.

In today’s data-driven economy, more data creation translates immediately into increased storage demands. In order for a business to grow rapidly, storage needs to be able to expand in an on-demand manner. Interest in HCIS is growing as organizations of all sizes and market verticals seek to simplify, speed up delivery, improve manageability and satisfy user demand for more availability, performance and storage capacity on tight IT budgets and with lean staffs.

Today’s hyperconverged systems range from reference architecture software-only products (BYOS) to enterprise-grade hardware appliances, and are targeted at enterprises of all sizes. By taking advantage of the distributed scale-out nature of SDS and the elimination of single points of failure, HCIS is designed for highly available virtualized workloads. Vendors (see Note 1) include late-stage startups, tier-one server and storage OEMs, and enterprise software and hardware vendors.

When deployed correctly, for appropriate workloads and in the right deployment model, hyperconverged infrastructure is a powerful architectural choice that can transform the modern data center. This research will explain the impact of hyperconvergence as an alternative storage platform and how to achieve the best possible outcomes from adopting this technology.

The first order of business is to understand how HCIS addresses current pain points and delivers on simplicity, flexibility, selectivity and economic promises.

HCIS systems have gained mind share and are being considered as alternatives for traditional server and storage systems in midmarket data centers, greenfield opportunities, ROBO, and data center renovation and modernization projects for highly virtualized data center workloads. Table 1 shows the benefits and limitations of HCIS systems.

Table 1. IT Benefits and Limitations of HCIS Versus Traditional Storage and Server Environment

Traditional Servers/Storage
IT Benefits: Ability to select from broad choices of storage and servers (selectivity); scale storage and compute independently as needed (flexibility)
Limitations: Integration and refresh is time- and resource-consuming (economic); scale-out is difficult (simplicity)

HCIS
IT Benefits: Seamless deployment, management and expansion (simplicity); built-in enterprise features such as data reduction, backup and SSD caching (economic)
Limitations: Limited ability to independently scale compute and storage (flexibility); storage and server hardware vendor lock-in (selectivity)

Source: Gartner (January 2016)

Hyperconvergence is a relative newcomer to data center platform modernization. Figure 1 below shows critical differences that I&O leaders must know before making a final decision to move away from the traditional storage/server environment.

Impacts and Recommendations

HCIS’ current lack of integration with existing traditional infrastructures causes I&O leaders to position it as silo deployments within enterprise data centers

While hyperconverged solutions are intended to flatten the IT workspace and reduce the silo effect of different infrastructure components, the majority of vendor implementations are not designed to integrate with existing IT investments such as storage or server farms, but rather to rip and replace them. That is why HCIS is most often targeted and deployed as a greenfield solution for a highly virtualized stack, with wide adoption in the midmarket segment, where integration with outside compute and storage is less of a requirement.

The HCIS data silo effect may derail deployments for large enterprises when, instead of gaining operational efficiency, HCIS ends up adding yet another platform with its own provisioning, management, backup and DR, and capacity planning tools.

In order to avoid the data silo effect, the next generation of HCIS will have to include some integration capabilities with infrastructure outside of the HCIS platform. For example, HCIS products will have to gain the ability to ingest and control storage on traditional storage arrays; present their own pool of SDS for consumption by other servers in the data center; and provision and support hybrid compute and storage in the cloud.

Recommendations:

■■ Deploy HCIS to either consolidate all of the midsize data center and ROBO workloads, or to address the specific need for self-contained, high-impact workloads such as virtual desktop infrastructure (VDI) or virtual server infrastructure in large enterprise data centers.

■■ Prioritize HCIS vendor solutions that have integration capabilities with existing data center investments and that will support hybrid cloud deployments.
Figure 1. Critical Differences Between HCIS and the Traditional Server/Storage Approach

Source: Gartner (January 2016)

As a result of workloads and economic analyses, I&O leaders often deploy HCIS alongside existing server/storage infrastructures, resulting in an additional data center platform

As IT architects expand their design objectives to include staff resources and ownership costs, the appeal of integrated systems, and specifically hyperconverged systems, increases. Various integrated system implementations can include reference architectures, integrated stack systems, integrated infrastructure systems and HCIS. The inherent appeal of these systems rests upon the advantages of single-vendor support, fast time to deployment, tight HW/SW integration, ease of provisioning and daily management, common data services, unified life cycle management and a pay-as-you-scale-out deployment model.

HCIS has the potential to lower acquisition and ownership costs by eliminating the expense of SAN storage and switches, supporting data management features such as compression and deduplication, shrinking infrastructure delivery times, and enabling the use of commodity servers with direct-attached disk and flash.

Figure 2. Impacts and Top Recommendations for Benefiting From HCIS

Source: Gartner (January 2016)

The planned service life of an HCIS system does not have to align with server or storage system service lives, because HCIS will often be deployed as a silo or to support a specific project or workload.

Deploying an HCIS solution as an alternative platform within an enterprise can enable IT to quickly satisfy the needs of a specific business application or workload by minimizing the testing needed to certify its use with a variety of mission- or business-critical workloads. Examples include virtual servers, VDI or development/testing environments. Developing an extensible infrastructure and flexible operating vision will help IT by providing a viable alternative to unwanted shadow IT. While there are many qualitative arguments made in favor of a single storage platform, the architectural efficiencies and benefits of maintaining an HCIS environment might outweigh the operational complexity and additional training it may require.

Differences in application needs and the value maps shown in Figure 3 indicate that cost-optimized infrastructures will align application needs with different technologies. Pursuing a coexistence strategy also has the advantage of keeping competitive pressure on traditional storage and server suppliers to deliver aggressive pricing and effective postsales service and support.

Figure 3. Value Map of Alternative Technologies

Source: Gartner (January 2016)

Recommendations:

■■ Identify high-impact workloads that can utilize HCIS scale-out architectures, and benefit from HCIS low-touch deployment, ease of ongoing management, and data reduction and protection.

■■ Integrate HCIS as a new platform deployed to support well-defined, well-matched workloads and not as a one-size-fits-all server/storage alternative.

Most users do not go through formal HCIS benchmarking and technical evaluation processes to uncover differences in storage design and hardware implementation that result in unquantified HCIS storage efficiency, performance and ownership costs

While HCIS is a relatively new deployment model, it is expected to grow from $372 million in 2014 to more than $5 billion by 2019, a 68% CAGR, while remaining very fluid and fast-evolving, causing rapid change in its product offerings. All providers stress simplicity and flexibility in various ways, but there are subtle differences in exactly what these messages actually translate to.

Each vendor’s HCIS implementation is likely to exhibit unique storage efficiency, scalability and performance profiles based on a specific workload. HCIS decision planners need to be aware of the wide span of HCIS offerings:

■■ Hardware: Wide range of CPU, I/O optimization hardware and SSD for caching or tiering

■■ Hypervisor: Some HCIS solutions support a single hypervisor, while others offer broader options

■■ Data reduction: Some HCIS solutions offer no data reduction, whereas others offer compression and/or deduplication, including global deduplication across the cluster

■■ Data resiliency and efficiency: Some HCIS will only provide data block replication, while others can enable erasure coding; few provide the ability to select between erasure coding and replication, and some offer broader backup/disaster recovery with application integration and file-level recovery

■■ Scalability: Some HCIS clusters scale only up to eight nodes, while others claim to scale into the hundreds

■■ Integration: Some HCIS solutions allow integration with existing data center infrastructure (such as servers, storage or public cloud), while most do not

■■ Data protection and availability: Some HCIS solutions include built-in snapshots, QoS, backup and synchronous remote replication

There are big differences in HCIS performance, depending on hypervisor, software stack, hardware, VM density, workloads, caching and data reduction

Recommendations:

■■ Include the following criteria when evaluating HCIS: redundancy model, support and maintenance procedures, hypervisor support, and method of providing SDS.

■■ Create HCIS software-defined storage evaluation criteria, a test plan and an analysis tool that assigns heavier weighting to: data reduction ratio; performance and scalability (all for the worst-case scenario); customer support capabilities; and the HCIS vendor’s overall supported ecosystem.

■■ Test HCIS solution performance under load as well as data reduction ratios over time and at scale in order to rightsize your cluster and finalize your HCIS configuration.

■■ Create an HCIS workload testing lab and perform head-to-head testing by using real workloads or storage workload generators. One example is HCIbench, a free storage performance testing tool for HCIS.

The migration of existing workloads onto HCIS is likely to require I&O leaders to update their existing vendor agreements, SLAs, data center design, backup and disaster recovery strategies, staffing and organization responsibilities

HCIS performance profiles and mean time between data loss (MTBDL) will differ from existing storage/server infrastructures. Users should identify existing SLAs that have been made obsolete and create new SLAs that align with HCIS capabilities. Revising SLAs also creates an opportunity for users to cost-optimize their operations by better aligning SLAs with application requirements, thereby reducing the number of situations where the infrastructure is overdelivering against application needs. Common measures include guaranteed I/O rates, host-visible bandwidth, response times, availability, MTBDL, recovery point objectives (RPOs), recovery time objectives (RTOs) and $/GB costs.

Disaster recovery schemes that rely on proprietary HCIS-based replication technologies can only work with other HCIS-based systems in the same family and cannot work within existing disaster recovery schemes. If the user has a contract with a disaster recovery provider or colocation company, there will be contracts to review and possibly renegotiate. Possible areas of renegotiation could include bandwidth, power and space requirements, and the need to purchase a new system at the disaster recovery site.

Since HCIS systems are inherently more autonomic in their operation and require less ongoing maintenance, their deployment could create opportunities to revise policies and procedures that have been made obsolete by new technologies.
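To make the head-to-head testing recommendation above concrete, here is a minimal, illustrative latency micro-benchmark in Python. It is not HCIbench; it simply times synchronous block writes against a single file and reports latency statistics, and the file size, block size and operation count are arbitrary placeholders:

```python
import os
import random
import statistics
import tempfile
import time

def run_io_benchmark(path, file_size=16 * 1024 * 1024, block_size=64 * 1024,
                     ops=200, random_access=True):
    """Time individual synced block writes and report latencies in milliseconds."""
    # Pre-allocate the test file so every write lands inside an existing region.
    with open(path, "wb") as f:
        f.truncate(file_size)
    blocks = file_size // block_size
    # Build the offset pattern: random block order or a sequential sweep.
    offsets = [random.randrange(blocks) * block_size if random_access
               else (i % blocks) * block_size for i in range(ops)]
    payload = os.urandom(block_size)
    latencies = []
    with open(path, "r+b") as f:
        for off in offsets:
            start = time.perf_counter()
            f.seek(off)
            f.write(payload)
            f.flush()
            os.fsync(f.fileno())  # force the write to stable storage
            latencies.append((time.perf_counter() - start) * 1000.0)
    return {
        "avg_ms": statistics.mean(latencies),
        "p95_ms": sorted(latencies)[int(0.95 * len(latencies)) - 1],
        "max_ms": max(latencies),
    }

if __name__ == "__main__":
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        path = tmp.name
    try:
        for is_random in (True, False):
            stats = run_io_benchmark(path, random_access=is_random)
            label = "random" if is_random else "sequential"
            print(f"{label}: avg={stats['avg_ms']:.2f} ms "
                  f"p95={stats['p95_ms']:.2f} ms max={stats['max_ms']:.2f} ms")
    finally:
        os.remove(path)
```

A real evaluation would instead run a calibrated generator such as HCIbench or fio against the actual cluster, at scale, with the worst-case data reduction and failure scenarios this research recommends.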

As a result of HCIS implementation, I&O leaders will be able to reorganize operations to improve efficiency and free budget to reskill the organization, making it profitable rather than a cost center.

Recommendations:

■■ Build a cross-functional team that includes all stakeholders to ensure the inclusion of current and future storage and application requirements, senior management support, and the creation of an effective RFP that covers the subtle consequences, detailed in this research, of deploying an SDS HCIS solution in the data center.

■■ Engage with HCIS suppliers to profile candidate storage workloads and create SLAs that align with HCIS SDS capabilities and application needs.

■■ Create impact analyses of switching from traditional storage to HCIS based on vendor proposals and bids in the areas of procurement, facilities, networking, security, backup and disaster recovery, and future technology deficits.

Additional research contribution and review by Arun Chandrasekaran, Mike Cisek, Dave Russell and George Weiss.

Evidence for this research includes more than 200 Gartner client inquiries in 2015; vendor interviews, surveys and product demonstrations in 2014 and 2015; and customer reference surveys in 1H15.

Note 1. Sample HCIS Vendors

■■ Atlantis Computing

■■ Gridstor

■■ Dell

■■ Hitachi

■■ HP

■■ HTBase

■■ Maxta

■■ Nutanix

■■ Pivot3

■■ Scale Computing

■■ SimpliVity

■■ Springpath

■■ StarWind Software

■■ Stratoscale

■■ VMware

Source: Gartner Research, G00292287, Julia Palmer and Stanley Zaffos, 15 January 2016

911 Dispatch Center Improves

Emergency Response Times with
DataCore™ Hyper-converged Virtual SAN 
Speeds up 911 Dispatch Response
Every Millisecond Counts in a 911 Call Center;
DataCore Reduces Latency Times and Makes
Critical SQL Server-based Dispatch Application Run
20X Faster

Located in Medford, Oregon, Emergency Communications of Southern Oregon (ECSO) is a combined emergency dispatch facility and Public Safety Answering Point (PSAP) for the 911 lines in Jackson County, Oregon. Corey Nelson is the IT Manager responsible for IT at the organization. Not only is he responsible for most of the technology in the 911 data center, but also for almost all Fire Department and Police Department vehicle computers deployed in the field.

ECSO is a firm believer in the power of a

hyperconverged solution now that it has implemented
DataCore™ Hyper-converged Virtual SAN. Importantly,
this single decision has enabled ECSO to keep using
a traditional storage array by making virtual storage
part of the hyperconverged infrastructure, as well as
significantly increasing performance and reducing
storage-related downtime.

ECSO first needed to look for a better storage

solution because its dispatch application, based on
Microsoft SQL Server, was experiencing latencies of
200 milliseconds at multiple times throughout the
day. When this application runs slowly, it impacts how
fast Fire and Police can respond to an emergency. In
addition, ECSO wanted a solution to meet its key “must
haves” including better real-time mirroring, replication,
and an overall more robust storage infrastructure –
and the organization was dedicated to finding a better
alternative than its existing NetApp solution.

“This product makes you think differently about storage and ultimately is the next
step in virtualization. DataCore Hyper-converged Virtual SAN gives us the flexibility,
reliability and performance to keep our systems running non-stop. No other products I
looked at were even close to accomplishing this.”

- Corey Nelson, IT Manager, Emergency Communications of Southern Oregon

Fortunately, Nelson attended VMworld and found DataCore. What Nelson was not expecting – even after four intense months of looking at DataCore and alternatives – was that a hyperconverged solution would meet all of his technology and resulting business needs.

DataCore is deployed as hyperconverged infrastructure using DAS or internal storage on a cluster of hosts. DataCore Hyper-converged Virtual SAN enables users to put the internal storage capacity of their servers to work as a shared resource while also serving as an integrated storage architecture. Hyperconverged systems by definition combine the compute, storage and storage networking tiers into a single unified system. From a performance standpoint, much of the traffic that went over the storage network can now be eliminated, and with compute and storage co-located, faster response times are possible.

“At the time I had my first conversation with one of DataCore’s system engineers, I was not thinking about a hyperconverged solution,” explained Nelson. “Rather, I was thinking about a traditional storage solution whereby I had a separate array that handles storage and separate hosts that would rely on that backend.”

Once DataCore came onsite to ECSO and drew up various potential solution scenarios that would meet the organization’s infrastructure needs – focusing specifically on a hyperconverged solution – according to Nelson, “a lightbulb went off” in his head.

“I knew then that hyperconverged was the way to go,” emphasized Nelson. “Following that we were able to come up with a price that suited our budget – and what came next was an excellent, hassle-free installation. I felt extremely good that DataCore Hyper-converged Virtual SAN was the right solution for us, which is not something I can say about the product we had previously installed for storage management.”

The criteria Nelson was using prior to DataCore’s selection consisted specifically of looking for a hybrid storage solution whereby he would incorporate some SSD drives for performance. Nelson built out his “selection” spreadsheet that spanned traditional storage vendors as well as solutions that would enable him to leverage his existing infrastructure – an incredibly important objective since he had purchased the NetApp technology just 18 months earlier.

NetApp was the previous storage vendor, which within 3-4 months of deployment became a huge headache for ECSO. NetApp was paired with Dell – the server vendor that ECSO was using prior to the NetApp purchase. However, with a sizable investment in NetApp, Nelson knew that he wanted to use NetApp in some capacity. DataCore enabled him to do that by extending the DAS capacity from each server.

Customer Snapshot: Real-world Hyperconverged Scenario at ECSO

DataCore Hyper-converged Virtual SAN is perfect for environments that require high availability in a low-cost, small footprint, as well as latency-sensitive environments where the user wants to move data close to database applications but needs to share it across a cluster of servers.

In one instance the entire ECSO building went offline because its Uninterruptible Power Supply (UPS) was being replaced. For most companies, this would mean downtime, but that is unacceptable for a 911 call center. Since Nelson had set up a back-up data center (the DR site) with DataCore, everything failed over and continued to run, despite the power outage at the primary site.

“I failed back after nine hours, and brought everything back online to the primary site,” noted Nelson. “It all worked like it was supposed to. I had zero issues from the technology side. It was great! And we stayed ‘live’ the entire time. We never stopped receiving 911 calls – as that is never an option.”

Performance Surges with DataCore

Prior to DataCore, performance and specifically latency was a huge problem at ECSO – particularly due to the NetApp array, which delivered latency of 200 milliseconds on average throughout the day. DataCore has solved the performance issues and fixed the real-time replication issues Nelson was previously encountering. This is because DataCore Hyper-converged Virtual SAN speeds up response and throughput with its innovative Parallel I/O technology in combination with high-speed caching (using low-latency server RAM) to keep the data close to the applications.

The critical 911 dispatch application must interact nearly instantly with the SQL Server-based database. Therefore, during the evaluation and testing period, understanding response and latency times was a vital criterion. To test this, Nelson ran a SQL Server benchmark against his current environment as well as the DataCore solution. The benchmark used a variety of block sizes as well as a mix of random/sequential and read/write patterns to measure the performance. The results were, quite simply, amazing. The DataCore Hyper-converged Virtual SAN solution was 20X faster than his current environment, despite the fact that the same nodes that generated the I/O load had to fulfill the requests (compared to the current environment, where separate servers generated the I/O load and all the NetApp storage had to do was meet the load, which it did poorly).

“Response times are much faster. The 200 millisecond latency has gone away now with DataCore running,” stated Nelson. “In fact we are down to under 5 milliseconds as far as application response times at peak load. Under normal load, the response times are currently under one millisecond.”

IT Environment At-a-Glance

■■ DataCore Managed Capacity: 60 TBs

■■ Are you using the auto-tiering feature? Yes

■■ Number of Users: 50 internal; 250 external

■■ Number of Virtual Servers and Number of Hosts: 3 hosts; 45 VMs

■■ Primary Server Vendor: Dell
■■ Storage Vendor(s): Dell; NetApp

■■ Server Virtualization Platform: VMware ESXi 6

■■ Desktop Virtualization Platform: NA

■■ Hyperconverged Software: DataCore Hyper-converged Virtual SAN

Unsurpassed Storage Performance and Simplified Management using DataCore Hyper-converged Virtual SAN

Before DataCore, every storage-related task was labor-intensive at ECSO. Nelson was accessing and reviewing documentation continuously to ensure that no essential step concerning storage administration was overlooked. What became clear was that if he went down the path of purchasing a traditional storage SAN, it would be yet another “point” to manage.

“I wanted as few ‘panes of glass’ to manage as possible,” commented Nelson. “Adding yet another storage management solution to manage would just add unnecessary complexity.”

The DataCore Hyper-converged solution was exactly what Nelson was looking for. DataCore has streamlined the storage management process by automating it and enabling IT to gain visibility into the overall health and behavior of the storage infrastructure from a central console.

DataCore Hyper-converged Virtual SAN frees Nelson from the pain of labor-intensive storage management and provides true hardware independence.

“DataCore has radically improved the efficiency, performance and availability of our storage infrastructure,” he said. “I was in the process of purchasing new hosts, and DataCore Hyper-converged Virtual SAN fit perfectly into the budget and plan. This is a very unique product that can be tested in anyone’s environment without purchasing additional hardware.”

As it turned out, Nelson got the “path forward” he wanted with DataCore Hyper-converged Virtual SAN in that he can now rely on one pane of glass (the DataCore management console) to manage the storage residing on NetApp, which he simply serves up to the DataCore servers as an extension to their local disk space.

After DataCore was implemented, NetApp was relegated to being the low-end storage tier for use cases such as storage archiving applications that do not require a lot of throughput or performance. DataCore allowed the investment in NetApp to be preserved.

The “hierarchy” of storage now at ECSO is as follows:

■■ DataCore-managed flash storage comprises Tier 1 storage.

■■ Tier 2 storage consists of the DataCore-managed SAS drives.

■■ Tier 3 storage is represented by the NetApp external storage array.

With DataCore auto-tiering, all this storage is utilized holistically to meet the performance and capacity needs of the workloads. “Hot” data will typically reside on tier 1, “warm” data on tier 2 and “cold” data on tier 3. By automatically moving data on a sub-LUN-level basis to the tier that best matches its performance characteristics, DataCore ensures that each tier is used efficiently and optimally from a performance and capacity perspective.

Delivering Real-time Data Redundancy

According to Nelson, “Now we are synchronously mirroring to the other site. Before I may have been doing some snapshots to the other site – but that was timed, managed and certainly not done in real time. There certainly was no mirroring going on before, and latency was deplorable. Moreover, the old solution would not allow us to fail over to the backup site without migrating the systems, therefore taking them offline during that time. I knew that a special product was needed to keep the systems running all of the time. If our systems fail, it puts not only citizens but first responders at risk.”

Two DataCore nodes reside at the primary site and one DataCore node resides at the DR site, which is two miles away. The DR site is connected by dark fiber – specifically a 10-gig low-latency link. Both primary site nodes mirror to the third node at the DR site. All told, the infrastructure consists of 60 TBs of storage, including 5 TBs of SSD or flash storage.

One of Nelson’s concerns with some of his applications was whether they could use the fiber-linked DR site for snapshots, periodic replication or purely synchronous mirroring.

“With DataCore, all of that works with no problem,” said Nelson. “Protecting your data against server outages simply by adding DataCore Hyper-converged Virtual SAN software is easy. Within seconds, I can migrate my two production CAD systems over to the backup site and Dispatch is not affected. It works great. There is zero downtime. Nobody even knows it occurred.”
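The sub-LUN auto-tiering described above can be sketched with a toy heat-based placement model. This is purely illustrative – the tier names mirror ECSO’s hierarchy, but the extent granularity, capacities and ranking logic are invented for the example and are not DataCore’s actual algorithm:

```python
from collections import defaultdict

# Tiers ordered fastest to slowest, with made-up capacities in "extents".
TIERS = [("tier1-flash", 1), ("tier2-sas", 1), ("tier3-netapp", 16)]

class AutoTierer:
    """Toy heat-based placement: hotter extents migrate to faster tiers."""

    def __init__(self):
        self.heat = defaultdict(int)  # extent id -> access count

    def record_access(self, extent):
        self.heat[extent] += 1

    def placement(self):
        """Assign extents to tiers by descending heat, honoring tier capacity."""
        ranked = sorted(self.heat, key=self.heat.get, reverse=True)
        result, i = {}, 0
        for tier, capacity in TIERS:
            for extent in ranked[i:i + capacity]:
                result[extent] = tier
            i += capacity
        return result

tiering = AutoTierer()
# Simulate hot, warm and cold extents via different access counts.
for extent, hits in [("db-log", 50), ("db-data", 30), ("archive", 2)]:
    for _ in range(hits):
        tiering.record_access(extent)
plan = tiering.placement()
print(plan)
# {'db-log': 'tier1-flash', 'db-data': 'tier2-sas', 'archive': 'tier3-netapp'}
```

The hottest extent lands on flash and the coldest on the NetApp tier; a real implementation would also decay heat over time and move data incrementally rather than recomputing the whole placement.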

Nelson has brought an entire host down while everything was moved over to the backup host, and it was “invisible” according to him. “And then you can bring it back, which often is the problem,” he lauded. “DataCore does it all – from failover to failback, all seamlessly.”

Key applications are all based on SQL Server, Exchange and Active Directory. One application ECSO has is unique: a computer-aided dispatch application for Fire and Police. All of its data is stored in SQL Server, and it runs in a private cloud at ECSO’s data center.

“That is really our critical application where all information is broadcast over the network,” stated Nelson. “This gets all the Tier 1 support and it is what everything revolves around at our site. It must always be up-and-running.”

Better Storage Economics through Flexibility

One of the things that most appealed to Nelson about DataCore was that if he wanted to add another server (as has already been the case), then he could just buy a server and turn it on – because he had already bought enough licenses to cover a new server under DataCore’s license terms.

“Originally I had two DataCore-powered servers deployed and that was working just fine – and then I added a third at a DR site just for some additional redundancy and because I needed some more CPU cycles,” explained Nelson. “At some point I might add a fourth to our DR site.”

Nelson explains that adding the third node was not particularly difficult – although he admits he is glad a wizard exists that he can utilize in the future when configuring additional nodes.

“DataCore did not even know that I added another server because I just did it myself and turned it on,” stated Nelson. “What is more, I did not have to buy any specific hardware. I could have bought a bunch of disk drives and just added those. DataCore gives me the flexibility to build my environment how it needed to be built.”

Summary

For ECSO, a hyperconverged solution from DataCore accelerated their mission-critical applications while providing huge cost savings.

The call dispatch application, utilizing Microsoft SQL Server, has a direct impact on the speed with which Fire and Police respond to emergencies. With DataCore, the tremendous performance seen during the Proof-of-Concept was matched by real-world performance in production, with peak latencies below 5 milliseconds, whereas the application was regularly seeing latencies of 200 milliseconds previously.

In addition, the organization knew that it needed new About Emergency Communications of Southern
hosts, but Nelson was prescient enough to know that Oregon
he did not want to buy new hosts without solving the Beyond serving as a combined emergency dispatch
storage issue. It was during an introductory meeting facility and Public Safety Answering Point (PSAP) for
with DataCore that Nelson began to understand all of the Jackson County Oregon 9-1-1 lines, ECSO is also a
the inherent benefits of embracing a hyperconverged regional “drop point” for emergency information that
infrastructure. When the lightbulb “went off,” Nelson needs to be given to Jackson and Josephine counties.
realized that hyperconverged was a strategy that This may include severe storm warnings or notice of
could be embraced immediately by a solution readily a foreign enemy attack. This information is received
available from DataCore – one wherein the host and through the National Air Warning Alert System
the storage were all in one box. (NAWAS) radio channel that covers the entire United
“It was at that very moment that I thought – it fits our
price range and it gives us a way to use our existing
storage,” said Nelson. “It was a sheer breath of relief Source: DataCore
once I found the solution in DataCore Hyper-converged
Virtual SAN that I had been struggling for months to
find. By implementing DataCore we would be solving
multiple issues with one purchase.”

And because of that one decision, we fixed our

storage performance issues and we upgraded our
entire infrastructure all within the budget we wanted
to spend in just a single year – rather than having to
spread out purchases to multiple years.”
About DataCore
DataCore, the Data Infrastructure Software company, is the leading provider of Software-Defined Storage and Hyper-converged Software – harnessing today’s powerful and cost-efficient server platforms with Parallel I/O technology to overcome the IT industry’s biggest problem, the I/O bottleneck, and deliver unsurpassed performance, hyper-consolidation efficiencies and cost savings. The company’s comprehensive and flexible Software-defined Storage and Hyper-converged Virtual SAN solutions free users from the pain of labor-intensive storage management and provide true independence from solutions that cannot offer a hardware-agnostic architecture.

DataCore’s storage virtualization and Parallel I/O technology revolutionize data infrastructure and serve as the cornerstone of the next-generation, software-defined data center – delivering greater value, industry-best performance, availability and simplicity.

Contact Us
DataCore Software Corporation
Corporate Park
6300 NW 5th Way
Ft. Lauderdale, FL 33309
1 (877) 780-5111

Why DataCore and What We Do
We think differently. We innovate through software and challenge the IT status quo.

We pioneered software-based storage virtualization. Now, we are leading the Software-defined and Parallel Processing revolution. Our Application-adaptive software exploits the full potential of servers and storage to solve data infrastructure challenges and elevate IT to focus on the applications and services that power their business.

DataCore parallel I/O and virtualization technologies deliver the advantages of next-generation enterprise data centers – today – by harnessing the untapped power of multicore servers. DataCore software solutions revolutionize the performance, cost savings, and productivity gains businesses can achieve from their servers and data storage.

For additional information, please visit or email

© 2016 DataCore Software Corporation. All Rights Reserved. DataCore, the DataCore logo and SANsymphony are trademarks or registered trademarks of DataCore Software Corporation. All other products, services and company names mentioned herein may be trademarks of their respective owners.

Hyperconverged Infrastructure: The CxO View is published by DataCore. Editorial content supplied by DataCore is independent of Gartner analysis. All Gartner research is used with Gartner’s permission, and was originally published as part of Gartner’s syndicated research service available to all entitled Gartner clients. © 2016 Gartner, Inc. and/or its affiliates. All rights reserved. The use of Gartner research in this publication does not indicate Gartner’s endorsement of DataCore’s products and/or strategies. Reproduction or distribution of this publication in any form without Gartner’s prior written permission is forbidden. The information contained herein has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner’s Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see “Guiding Principles on Independence and Objectivity” on its website.