
Expert Reference Series of White Papers

Unlock the Power of Microsoft SQL Server 2014

1-800-COURSES

www.globalknowledge.com

Unlock the Power of Microsoft SQL Server 2014
Brian D. Egler, Global Knowledge Course Director, MCITP, MCSE, MCT

Introduction
Are you ready to unlock the power of Microsoft SQL Server 2014? In this white paper, we will examine three key
new features that show how SQL Server 2014 provides high-performance online transaction processing (OLTP),
optimization of business analytics, and migration of data to the cloud.

Microsoft SQL Server 2014 Themes


Three underlying themes categorize the SQL Server 2014 release:
- Mission critical performance
- Faster insights from any data
- Platform for hybrid cloud
Within this white paper, we will be exploring one selected new feature under each theme to get a sense of the
capabilities of the new release. Other features, while too numerous to describe here, are documented on the
Microsoft web site at http://www.microsoft.com/sql. You can also download an evaluation copy of the
software from the same location. Of course, all the functionality of Microsoft SQL Server 2014 is contained in the
Microsoft Official Curriculum (MOC) courses offered by Global Knowledge both in the physical and virtual
classroom platforms. Details are available at http://www.globalknowledge.com.

Mission Critical Performance


Selected Feature: In-Memory OLTP
It has always been true that increasing physical memory improves the performance of a database server. Logical
input/output (I/O) operations served straight from physical memory are much faster than the equivalent physical
I/O from storage. In SQL Server 2014, Microsoft introduces the In-Memory OLTP database engine, which performs
all operations against selected tables entirely in memory. The new architecture includes a lock-free concurrency
model plus support for natively compiled stored procedures, resulting in recorded performance gains of up to 20x
on the same hardware. The In-Memory OLTP option is an enterprise-level feature and, as the name suggests, is
designed specifically for typical OLTP traffic, balancing read/write operations on the data with concurrent access.
It must be enabled at the database level before memory-optimized tables can be created.
To enable a database to host memory-optimized tables, a special FILEGROUP that CONTAINS
MEMORY_OPTIMIZED_DATA must first be added to the database. This filegroup creates a folder to hold the
memory-optimized data before it is loaded into memory.
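As a minimal sketch, the filegroup can be added with ALTER DATABASE statements along the following lines; the database name and file path here are hypothetical, for illustration only:

```sql
-- Hypothetical database name and path, for illustration only
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;

-- The "file" is actually a folder that holds the memory-optimized data
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_InMemory_Data',
              FILENAME = 'C:\Data\SalesDB_InMemory_Data')
    TO FILEGROUP SalesDB_InMemory;
```

Only one MEMORY_OPTIMIZED_DATA filegroup is allowed per database.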
There are various restrictions that apply to the new In-Memory OLTP feature. For instance, FOREIGN KEY
constraints, DEFAULT constraints, CHECK constraints, IDENTITY columns, and triggers are not allowed on
memory-optimized tables. However, if any of the restricted functionality is required, it can usually be coded in a
natively compiled stored procedure. To assist with the migration to memory-optimized tables, SQL Server
Management Studio includes a Table Memory Optimization Advisor, which gives expert advice on the restrictions
and how to work around them, if necessary.
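As a hedged sketch of how restricted functionality can be emulated, the following natively compiled stored procedure enforces a CHECK-style rule in code; the table and column names are hypothetical:

```sql
-- Sketch only: assumes a memory-optimized table dbo.Orders (OrderID, Amount)
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderID INT,
    @Amount  MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = N'us_english')
    -- Emulate a CHECK constraint, which is not allowed on the table itself
    IF @Amount <= 0
        THROW 50001, N'Amount must be positive', 1;
    INSERT INTO dbo.Orders (OrderID, Amount)
    VALUES (@OrderID, @Amount);
END;
```

Note that SQL Server 2014 requires natively compiled procedures to specify NATIVE_COMPILATION, SCHEMABINDING, an EXECUTE AS clause, and a BEGIN ATOMIC block.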
Copyright 2014 Global Knowledge Training LLC. All rights reserved.

Figure 1: Table Memory Optimization Advisor

An example is displayed in Figure 2 below, showing a table that currently has DEFAULT constraints that will need
to be removed if it is to be migrated to a memory-optimized table. If a stored procedure for inserts is created,
equivalent processing can be coded and the performance advantage of In-Memory OLTP can be realized.
Example code is shown if you click the "Show me how" hyperlink.

Figure 2: Memory Optimization Checklist


Looking at a valid CREATE TABLE statement for a memory-optimized table in Figure 3, the
MEMORY_OPTIMIZED = ON clause is a requirement. The DURABILITY = SCHEMA_AND_DATA clause makes sure
that updates are written to the transaction log on disk, even though all access and updates to the table are made
fully in memory. With the absence of locks in the new architecture, this enables great performance gains while
protecting the transactions in case of failure. In rare circumstances where the latest transactions do not need to
be recoverable, DURABILITY = SCHEMA_ONLY can be used, but this is not recommended for obvious reasons.
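A minimal CREATE TABLE sketch along the lines of Figure 3 might look as follows; the table name and bucket count are illustrative assumptions:

```sql
-- Memory-optimized tables in SQL Server 2014 require at least one index;
-- a NONCLUSTERED HASH index needs an estimated BUCKET_COUNT
CREATE TABLE dbo.Orders
(
    OrderID INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Amount  MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```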

Figure 3: Memory-optimized Tables

I downloaded a sample from Microsoft's www.codeplex.com code-sharing website to test In-Memory OLTP versus
the classic on-disk model with SQL Server 2014. I set up a Windows Azure VM (see later in this white paper) using
Windows Server 2012 R2 and SQL Server 2014 CTP2 with a predefined image from the Windows Azure Image
Gallery. The sample tests heavy-duty updates using the stress-testing tool OSTRESS against both memory-optimized
and disk-based tables. In my limited tests, the memory-optimized tables handled 1,000 transactions
repeated across 100 threads, yielding a 6x improvement over the equivalent disk-based tables. The disk I/O was
dramatically reduced but not completely removed, as the transaction log still needed to be updated on disk for
durability reasons. See Figure 4 for a screenshot of an In-Memory test. This test was made against the upgraded
AdventureWorks2012 database as a proof of concept.


Figure 4: Test results using in-Memory OLTP

Codeplex sample available at: http://msftdbprodsamples.codeplex.com/releases/view/114491

100,000 order inserts using 100 threads, with 1,000 inserts each:

Disk-based tables:        1:37
Memory-optimized tables:  0:16
Performance difference:   6x

I ran my tests on a lightweight VM with a dual-core CPU and 3.5 GB of memory as a simple proof of concept.
The author of the sample used a machine with 24 logical cores and separate SSD drives for data and log files to
yield an impressive 50x improvement.
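For reference, the sample drives the workload with the OSTRESS command-line tool from Microsoft's RML Utilities. A sketch of the kind of invocation involved is shown below; the stored procedure name is a placeholder and the exact command in the sample may differ:

```
REM -n = number of threads, -r = repeats per thread, -E = Windows authentication
ostress.exe -S. -E -dAdventureWorks2012 -n100 -r1000 ^
    -Q"EXEC Demo.usp_InsertSalesOrder"
```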

Faster Insights from Any Data


Selected Feature: Clustered Columnstore Index
In SQL Server 2012, Microsoft introduced the Non-Clustered Columnstore index, which was perfectly designed for
large fact tables in a data warehouse and provided an order of magnitude performance gain based on massive
column compression and optimized star queries typical in such a database. The major restriction of this type of
index was that a table could operate only as a read-only table while the Columnstore index was in place. Of
course, in a typical data warehouse, the data is read-only during the day while changes are applied at off-peak
times. Indexes are typically dropped and recreated before and after the incremental updates, so this was not a
major restriction for most installations. Workarounds included the concept of a separate Delta table of the
same structure, where changes could be made and consolidated with the main table at a later time. A similar
solution used partition switching to achieve the same result. In SQL Server 2014, you can now create a Clustered
Columnstore Index, which is updateable.
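A sketch of creating the new index, assuming a hypothetical fact table whose constraints and other indexes have already been dropped:

```sql
-- All other indexes and PK/FK constraints must be removed first;
-- the CCI must be the only index on the table and covers all columns
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;
```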

Figure 5: Clustered Columnstore Index (CCI)

In testing this out, the Clustered Columnstore Index (CCI) did yield similar performance gains (over 10x) to its
non-clustered cousin and did accept updates as designed. However, as with all features, there is a trade-off of
some sort. In this case, the CCI must be the only index on the table, and it does not support PRIMARY KEY or
FOREIGN KEY constraints. So, in order to test this out on an existing table, I had to remove all PK and FK
constraints and any other indexes first. As with all clustered indexes, all columns are contained in the CCI.
Internally, the CCI actually manages a Delta table of its own, merging the data into the table as appropriate.
Again, the transaction log makes sure all updates are durable and recoverable. As with the In-Memory OLTP
solution, validation provided by PK and FK constraints can be emulated through stored procedure code, if
necessary. There are other restrictions, which are fully documented in Books Online. The Clustered Columnstore
Index is an enterprise-level feature.


Platform for Hybrid Cloud


Selected Feature: New Cloud Migration Wizard
Cloud-based data provides increased flexibility and elastic scalability, ideal for rapidly growing organizations.
There are still many unanswered security-based questions, but some databases may be appropriate to host in the
cloud and synchronize with sensitive on-premises databases. This distributed approach to our data is a growing
trend in the industry. Microsoft is at the forefront of this technology with its Windows Azure platform, supporting
SQL Azure for databases in the cloud. Windows Azure also supports the setting up of hosted Virtual Machines
(VMs), which can run the full version of SQL Server, if needed. Once again, ease of use and lower costs are
Microsoft's value propositions. SQL Azure is now compatible with SQL Server 2014 and supports a new Cloud
Migration Wizard.

Figure 6: Cloud Migration Wizard

Windows Azure and SQL Azure can be accessed directly at http://windowsazure.com. A 30-day free trial is
available; after that, a monthly fee is charged based on the storage and resources used. SQL Server databases
can be created and populated using the web interface, or remotely using SQL Server Management Studio,
accessing the database server via the fully qualified domain name (FQDN) provided by the Windows Azure
platform. SSIS packages can also extract information from SQL Azure databases using the same FQDN. SQL
Azure databases must use SQL Server authentication but can be accessed using any of the SSIS tasks provided in
the SSIS Toolbox. In this way, accessing data in the cloud is essentially the same as accessing any other data source.
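For instance, a remote connection from the sqlcmd utility uses the FQDN and SQL Server authentication; the server, database, and login names below are placeholders:

```
sqlcmd -S myserver.database.windows.net -d SalesDB -U myadmin@myserver -P <password>
```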

To migrate a database to the cloud, you can now use the new Cloud Migration Wizard, which can be launched
via SQL Server Management Studio 2014 (see Figure 7). You can choose to migrate an on-premises database to
SQL Azure using the task "Deploy Database to Windows Azure SQL Database" against your database. The
wizard will report on any restrictions, such as the requirement that every table must have a clustered index on
SQL Azure, guiding you through the process of porting your database to the cloud.

Figure 7: Requesting the Cloud Migration Wizard

If you prefer, you can set up a VM on Windows Azure to host the full version of SQL Server and use the task
"Deploy Database to a Windows Azure VM". In this case, you are not restricted to the limitations of SQL Azure,
as you are effectively dealing with a normal instance of SQL Server that happens to be running on a Windows
Azure VM using Cloud Services. I tested this out using a Windows Azure VM with Windows Server 2012 R2
running SQL Server 2014 CTP2 and successfully migrated a SQL Server 2014 database to a SQL Azure database.
Then I connected to a SQL Azure database using SSMS 2014 and was able to deploy using the "Deploy Database
to a Windows Azure VM" option.
With its impressive technology, Microsoft has successfully incorporated the cloud as an extension of its
architecture rather than as a completely separate platform, and as such is allowing customers to leverage existing
technologies in what is being called the Hybrid Cloud.


Figure 8: Windows Azure VM and Cloud Service support

Conclusion
Microsoft SQL Server 2014 has some great new features that will allow you to develop higher-performing, more
scalable, next-generation applications using the hybrid cloud. The fact that the features are largely incremental in
nature should reassure users that Microsoft is building on the established foundation of SQL Server 2008 and
2012. Using similar architecture and management tools, customers will be able to smoothly upgrade their systems
and skills based on the need for the new features and according to their own schedule.

Learn More
Learn more about how you can improve productivity, enhance efficiency, and sharpen your competitive edge
through training.
Updating Your SQL Server Skills to Microsoft SQL Server 2014 (M10977)
MCSA: SQL Server 2012 Boot Camp
MCSE: Data Platform Boot Camp
MCSE: Business Intelligence Boot Camp
Visit www.globalknowledge.com or call 1-800-COURSES (1-800-268-7737) to speak with a Global Knowledge
training advisor.

About the Author


Brian Egler is a Global Knowledge instructor and course director specializing in Microsoft SQL Server technologies.
He currently resides in Highlands, North Carolina.

