Introduction
Are you ready to unlock the power of Microsoft SQL Server 2014? In this white paper, we will examine three key
new features that show how SQL Server 2014 provides high-performance online transaction processing (OLTP),
optimization of business analytics, and migration of data to the cloud.
An example is displayed in Figure 2 below, showing a table that currently has DEFAULT constraints that will need to be removed before it can be migrated to a Memory-optimized table. If a Stored Procedure for inserts is created, equivalent processing can be coded and the performance advantage of In-Memory OLTP can still be realized. Example code is shown when you click the Show me how hyperlink.
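As a rough sketch of that approach, a natively compiled Stored Procedure can apply the missing default in code. The table, column, and procedure names below are illustrative assumptions, not the actual Figure 2 example:

```sql
-- Hypothetical sketch: emulate a removed DEFAULT constraint inside a
-- natively compiled insert procedure for a Memory-optimized table.
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderID   INT,
    @OrderDate DATETIME2 = NULL
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- A DEFAULT (SYSDATETIME()) constraint is not supported on the
    -- Memory-optimized table, so supply the value here instead.
    IF @OrderDate IS NULL
        SET @OrderDate = SYSDATETIME();

    INSERT INTO dbo.OrderHeader (OrderID, OrderDate)
    VALUES (@OrderID, @OrderDate);
END;
```

Because the procedure is natively compiled along with the table, the defaulting logic runs in memory at the same speed as the insert itself.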
Looking at a valid CREATE TABLE statement for a Memory-optimized table in Figure 3, the MEMORY_OPTIMIZED = ON clause is a requirement. The DURABILITY = SCHEMA_AND_DATA clause makes sure the updates are written to the Transaction Log on disk even though access to and updates of the table are made entirely in memory. Combined with the absence of locks in the new architecture, this enables great performance gains while protecting the transactions in case of failure. In the rare circumstances where the latest transactions do not need to be recoverable, DURABILITY = SCHEMA_ONLY can be used, but since such tables lose their data when the server restarts, it is not generally recommended.
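A minimal statement along the lines of Figure 3 might look like the following. The table name and bucket count are illustrative assumptions, not the figure's actual code:

```sql
CREATE TABLE dbo.OrderHeader
(
    OrderID   INT       NOT NULL,
    OrderDate DATETIME2 NOT NULL,
    -- Memory-optimized tables require a NONCLUSTERED or HASH index;
    -- the bucket count should roughly match the expected row count.
    CONSTRAINT PK_OrderHeader PRIMARY KEY NONCLUSTERED HASH (OrderID)
        WITH (BUCKET_COUNT = 1048576)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

Changing DURABILITY to SCHEMA_ONLY in the final clause would skip transaction logging entirely, trading recoverability for speed.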
I downloaded a sample from Microsoft's www.codeplex.com code-sharing website to test In-Memory OLTP against the classic on-disk model in SQL Server 2014. I set up a Windows Azure VM (see later in this white paper) running Windows Server 2012 R2 and SQL Server 2014 CTP2, using a predefined image from the Windows Azure Image Gallery. The sample exercises heavy update workloads with the OSTRESS stress-testing tool against both memory-optimized and disk-based tables. In my limited tests, the memory-optimized tables handled 1,000 transactions repeated across 100 threads, yielding a 6x improvement over the equivalent disk-based tables. Disk I/O was dramatically reduced but not completely eliminated, as the transaction log still needed to be updated on disk for durability reasons. See Figure 6 for a screenshot of an In-Memory test. This test was made against the upgraded AdventureWorks2012 database as a proof of concept.
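An OSTRESS invocation for a run like this takes roughly the following shape. The stored procedure name is a placeholder, not the sample's actual procedure:

```
ostress.exe -S. -E -dAdventureWorks2012 -Q"EXEC dbo.usp_InsertSalesOrder" -n100 -r1000 -q
```

Here -n100 opens 100 concurrent connections (threads), -r1000 repeats the query 1,000 times on each, and -q suppresses per-query result output so the tool reports only the elapsed time.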
Disk-based tables:        1:37
Memory-optimized tables:  0:16
Performance difference:   6x
I ran my tests with a lightweight VM using a dual-core CPU and 3.5GB of memory as a simple proof-of-concept.
The author of the sample used a machine with 24 logical cores and separate SSD drives for data and log files to
yield an impressive 50x improvement.
In SQL Server 2012, a table with a Nonclustered Columnstore Index became read-only, a major restriction for most installations. Workarounds included the concept of a separate Delta table of the same structure where changes could be made and consolidated with the main table at a later time. A similar solution used Partition switching to achieve the same result. Now, in SQL Server 2014, you can create a Clustered Columnstore Index, which is updateable.
In testing this out, the Clustered Columnstore Index (CCI) did yield performance gains (over 10x) similar to its Non-Clustered cousin and did accept updates as designed. However, as with all features, there is a trade-off. In this case, the CCI must be the only index on the table, and it does not support Primary or Foreign Key constraints. So, in order to test this on an existing table, I had to remove all PK and FK constraints and any other indexes first. As with all Clustered Indexes, all columns are contained in the CCI. Internally, the CCI manages a Delta store of its own, merging the data into the compressed segments as appropriate. Again, the transaction log makes sure all updates are durable and recoverable. As in the In-Memory OLTP solution, the validation provided by PK and FK constraints can be emulated through Stored Procedure code, if necessary. Other restrictions are fully documented in Books Online. The Clustered Columnstore Index is an Enterprise edition feature.
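Against a hypothetical fact table, those preparation and creation steps can be sketched as follows; the table, constraint, and index names are placeholders for whatever exists on your own table:

```sql
-- The CCI must be the only index, so drop FK and PK constraints
-- and any remaining indexes first.
ALTER TABLE dbo.FactSales DROP CONSTRAINT FK_FactSales_DimDate;
ALTER TABLE dbo.FactSales DROP CONSTRAINT PK_FactSales;
DROP INDEX IX_FactSales_DateKey ON dbo.FactSales;

-- No column list is given: all columns of the table are
-- included in a Clustered Columnstore Index automatically.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;
```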
Windows Azure and SQL Azure can be accessed directly at http://windowsazure.com. A 30-day free trial is available; after that, a monthly fee based on storage requirements and resources used is charged. SQL Server databases can be created and populated using the web interface, or remotely using SQL Server Management Studio by accessing the database server through the fully qualified domain name (FQDN) provided by the Windows Azure platform. SSIS packages can also extract information from SQL Azure databases using the same FQDN. SQL Azure databases must use SQL Server authentication but can be accessed using any of the SSIS Tasks provided in the SSIS Toolbox. In this way, accessing data in the cloud is essentially the same as accessing any other data source.
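A typical ADO.NET-style connection string for a SQL Azure database, combining the server's FQDN with SQL Server authentication, looks roughly like this; the server, database, and login names are placeholders:

```
Server=tcp:myserver.database.windows.net,1433;Database=MyDatabase;
User ID=mylogin@myserver;Password=<password>;Encrypt=True;
```

The same FQDN and credentials work whether the client is SSMS, an SSIS Connection Manager, or application code, which is what makes the cloud database look like any other data source.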
Copyright 2014 Global Knowledge Training LLC. All rights reserved.
To migrate a database to the cloud, you can now use the new Cloud Migration Wizard, which can be launched via SQL Server Management Studio 2014 (see Figure 7). You can migrate an on-premises database to SQL Azure by running the task Deploy Database to Windows Azure SQL Database against your database. The wizard reports any restrictions, such as the SQL Azure requirement that every table have a Clustered Index, guiding you through the process of porting your database to the cloud.
If you prefer, you can set up a VM on Windows Azure to host the full version of SQL Server and use the task Deploy Database to a Windows Azure VM. In this case, you are not restricted by the limitations of SQL Azure, as you are effectively dealing with a normal instance of SQL Server that happens to be running on a Windows Azure VM using Cloud Services. I tested this using a Windows Azure VM with Windows Server 2012 R2 running SQL Server 2014 CTP2 and successfully migrated a SQL Server 2014 database to a SQL Azure database. I then connected to the SQL Azure database using SSMS 2014 and was able to deploy it using the Deploy Database to a Windows Azure VM option.
With this impressive technology, Microsoft has successfully incorporated the cloud as an extension of its architecture rather than as a completely separate platform, allowing customers to leverage existing technologies in what is being called the Hybrid Cloud.
Conclusion
Microsoft SQL Server 2014 has some great new features that will allow you to develop higher-performing, more scalable, next-generation applications using the hybrid cloud. The fact that the features are largely incremental in nature should reassure users that Microsoft is building on the established foundation of SQL Server 2008 and 2012. Using similar architecture and management tools, customers will be able to upgrade their systems and skills smoothly, based on their need for the new features and according to their own schedule.
Learn More
Learn more about how you can improve productivity, enhance efficiency, and sharpen your competitive edge
through training.
Updating Your SQL Server Skills to Microsoft SQL Server 2014 (M10977)
MCSA: SQL Server 2012 Boot Camp
MCSE: Data Platform Boot Camp
MCSE: Business Intelligence Boot Camp
Visit www.globalknowledge.com or call 1-800-COURSES (1-800-268-7737) to speak with a Global Knowledge
training advisor.