
Designing the Data Tier for Microsoft® SQL Server™ 2005
Workbook
Course Number: 2783A

MCT USE ONLY. STUDENT USE PROHIBITED


Beta
Information in this document, including URL and other Internet Web site references, is subject to
change without notice. Unless otherwise noted, the example companies, organizations, products,
domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious,
and no association with any real company, organization, product, domain name, e-mail address,
logo, person, place or event is intended or should be inferred. Complying with all applicable
copyright laws is the responsibility of the user. Without limiting the rights under copyright, no
part of this document may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or
otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual
property rights covering subject matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual property.

© 2005 Microsoft Corporation. All rights reserved.

Microsoft, MS-DOS, Windows, Windows NT, <plus other appropriate product names or titles.
The publications specialist replaces this example list with the list of trademarks provided by the
copy editor. Microsoft, MS-DOS, Windows, and Windows NT are listed first, followed by all
other Microsoft trademarks listed in alphabetical order.> are either registered trademarks or
trademarks of Microsoft Corporation in the U.S.A. and/or other countries.

<The publications specialist inserts mention of specific, contractually obligated to, third-party
trademarks, provided by the copy editor>

The names of actual companies and products mentioned herein may be the trademarks of their
respective owners.




Session 0: Introduction

Contents
Introduction 1
Clinic Materials 2
Microsoft Learning Product Types 5
How to Get the Most Out of a Clinic 6
Microsoft Learning 7
Microsoft Certification Program 9
Facilities 12
About This Clinic 13
Prerequisites 15
Clinic Outline 16
Introduction to the Workshop Business Scenario 17



Information in this document, including URL and other Internet Web site references, is subject to
change without notice. Unless otherwise noted, the example companies, organizations, products,
domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious,
and no association with any real company, organization, product, domain name, e-mail address,
logo, person, place or event is intended or should be inferred. Complying with all applicable
copyright laws is the responsibility of the user. Without limiting the rights under copyright, no
part of this document may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or
otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

The names of manufacturers, products, or URLs are provided for informational purposes only, and Microsoft makes no representations or warranties, either express, implied, or statutory, regarding these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links are provided to third-party sites. Such sites are not under the control of Microsoft, and Microsoft is not responsible for the contents of any linked site, any link contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission received from any linked site. Microsoft provides these links to you only as a convenience, and the inclusion of any link does not imply endorsement by Microsoft of the site or the products contained therein.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual
property rights covering subject matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual property.

© 2006 Microsoft Corporation. All rights reserved.

Microsoft, Active Directory, ActiveX, BizTalk, Excel, IntelliSense, Microsoft Press, MSDN, MS-DOS, Outlook, PowerPoint, SharePoint, Visio, Visual Basic, Visual C++, Visual C#, Visual Studio, Windows, Windows NT, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

All other trademarks are property of their respective owners.



Session 0: Introduction 1

Introduction


The instructor will ask you to introduce yourself by providing the information on the slide to the
other students in the clinic.
Briefly describe your job role in your organization and your prior experience developing databases
in an enterprise environment, including any notable challenges you might have encountered. In
particular, describe your experiences developing database solutions using previous versions of
Microsoft® SQL Server™ in conjunction with the Microsoft .NET Framework.
Let your instructor know what you hope to learn in this clinic so that he or she can best meet your
expectations. Additionally, your instructor can recommend other Microsoft learning products that
can help you to acquire the knowledge and skills you need to meet your professional goals.



2 Session 0: Introduction

Clinic Materials


The following materials are included with your kit:


■ Name card. Write your name on both sides of the name card.
■ Student workbook. The student workbook contains the material covered in class, in addition to the hands-on lab exercises.
■ Student Materials compact disc. The Student Materials compact disc contains a Web page that provides you with links to resources pertaining to this clinic, including additional readings, lab files, multimedia presentations, and clinic-related Web sites.

Note To open the Web page, insert the Student Materials compact disc into the CD-ROM drive, and then in the root directory of the compact disc, double-click StartCD.exe.

■ Clinic evaluation. Near the end of the clinic, you will have the opportunity to complete an online evaluation to provide feedback on the clinic, training facility, and instructor.

To provide additional comments or feedback on the clinic, send e-mail to support@mscourseware.com. To inquire about the Microsoft Certified Professional program, send e-mail to mcphelp@microsoft.com.



Session 0: Introduction 3

Student Materials Compact Disc Contents


The Student Materials compact disc contains the following files and folders:
■ Autorun.inf. When the compact disc is inserted into the compact disc drive, this file opens Autorun.exe.
■ Default.htm. This file opens the Student Materials Web page. It provides you with resources pertaining to this clinic, including additional reading, review and lab answers, lab files, multimedia presentations, and clinic-related Web sites.
■ Readme.txt. This file explains how to install the software for viewing the Student Materials compact disc and its contents and how to open the Student Materials Web page.
■ StartCD.exe. When the compact disc is inserted into the compact disc drive, or when you double-click the StartCD.exe file, this file opens the compact disc and allows you to browse the Student Materials compact disc.
■ StartCD.ini. This file contains the instructions used to launch StartCD.exe.
■ Flash. This folder contains the installer for the Macromedia Flash 5.0 browser plug-in.
■ Fonts. This folder contains fonts that may be required to view the Microsoft Word documents that are included with this clinic.
■ Toolkit. This folder contains the files for the Resource Toolkit.
■ Webfiles. This folder contains the files that are required to view the clinic Web page. To open the Web page, open Windows Explorer, and then in the root directory of the compact disc, double-click StartCD.exe.
■ Wordview. This folder contains the Word Viewer that is used to view any Word document (.doc) files that are included on the compact disc.



4 Session 0: Introduction

Document Conventions
The following conventions are used in clinic materials to distinguish elements of the text.
Convention Use

Represents resources available by launching the Resource Toolkit shortcut on the desktop.

Bold Represents commands, command options, and syntax that must be typed exactly as
shown. It also indicates commands on menus and buttons, dialog box titles and options,
and icon and menu names.
Italic In syntax statements or descriptive text, indicates argument names or placeholders for
variable information. Italic is also used for introducing new terms, for book titles, and for
emphasis in the text.
Title Capitals Indicate domain names, user names, computer names, directory names, and folder and file
names, except when specifically referring to case-sensitive names. Unless otherwise
indicated, you can use lowercase letters when you type a directory name or file name in a
dialog box or at a command prompt.
ALL CAPITALS Indicate the names of keys, key sequences, and key combinations —for example,
ALT+SPACEBAR.
monospace Represents code samples or examples of screen text.
[] In syntax statements, enclose optional items. For example, [filename] in command syntax
indicates that you can choose to type a file name with the command. Type only the
information within the brackets, not the brackets themselves.
{} In syntax statements, enclose required items. Type only the information within the braces,
not the braces themselves.
| In syntax statements, separates an either/or choice.
► Indicates a procedure with sequential steps.
... In syntax statements, specifies that the preceding item may be repeated.
⋮ (vertical ellipsis) Represents an omitted portion of a code sample.



Session 0: Introduction 5

Microsoft Learning Product Types


Microsoft Learning offers four types of instructor-led products. Each is specific to a particular
audience type and level of experience. The different product types also tend to suit different
learning styles. These types are as follows:
■ Courses are for information technology (IT) professionals and developers who are new to a particular product or technology, and for experienced individuals who prefer to learn in a traditional classroom format. Courses provide a relevant and guided learning experience that combines lecture and practice to deliver thorough coverage of a Microsoft product or technology. Courses are designed to address the needs of learners engaged in the planning, design, implementation, management, and support phases of the technology adoption life cycle. They provide detailed information by focusing on concepts and principles, reference content, and in-depth hands-on lab activities to ensure knowledge transfer. Typically, the content of a course is broad, addressing a wide range of tasks necessary for the job role.
■ Workshops are for knowledgeable IT professionals and developers who learn best by doing and exploring. Workshops provide a hands-on learning experience in which participants use Microsoft products in a safe and collaborative environment based on real-world scenarios. In a workshop, students learn by doing, through scenario-based and troubleshooting hands-on labs, targeted reviews, information resources, and best practices, with instructor facilitation.
■ Clinics are for IT professionals, developers, and technical decision makers. Clinics offer a detailed "how to" presentation that describes the features and functionality of an existing or new Microsoft product or technology, and that showcases product demonstrations and solutions. Clinics focus on how specific features will solve business problems.
■ Hands-On Labs provide IT professionals and developers with hands-on experience with an existing or new Microsoft product or technology. Hands-on labs provide a realistic and safe environment that encourages knowledge transfer by learning through doing. The labs provided are completely prescriptive, so no lab answer keys are required. Hands-on labs contain very little lecture or text content, aside from lab introductions, context setting, and lab reviews.



6 Session 0: Introduction

How to Get the Most Out of a Clinic


Clinics are intended to provide knowledge transfer, not skills transfer. The primary purpose of a clinic is to give customers a first look at the benefits and features of the latest Microsoft technologies so that they can make informed decisions and plan ahead.
The clinic is a fast-paced learning format that focuses on instructor-led demonstrations rather than
lecture. In a clinic, lecture time is kept to a minimum so that students have the opportunity to focus
on the demonstrations. The clinic format enables students to reinforce their learning by seeing how
tasks are performed and how problems are solved.



Session 0: Introduction 7

Microsoft Learning


Microsoft Learning develops Official Microsoft Learning Product (OMLP) courseware for
computer professionals who design, develop, support, implement, or manage solutions by using
Microsoft products and technologies. These learning products provide comprehensive, skills-based
training in instructor-led and online formats.

Microsoft Learning Products for Experienced Professional Database Developers and Database Administrators Using Microsoft SQL Server 2005
This clinic is a part of the Microsoft Learning Products for Professional Database Developers and
Database Administrators Using Microsoft SQL Server 2005 portfolio of learning products. Other
product titles in this portfolio include:
■ Premium (instructor-led training [ILT] and e-learning) learning products:
• Course 2781A, Designing Microsoft SQL Server 2005 Server-Side Solutions (3 days)
• Course 2782A, Designing Microsoft SQL Server 2005 Databases (2 days)
• Workshop 2784A, Tuning and Optimizing Queries Using Microsoft SQL Server 2005 (3 days)
■ Certification exams:
• Exam 70-441: PRO: Designing Database Solutions by Using Microsoft SQL Server 2005
• Exam 70-442: PRO: Designing and Optimizing Data Access by Using Microsoft SQL Server 2005



8 Session 0: Introduction

■ Assessments:
• One job-role-based assessment: Introduction to Microsoft SQL Server 2005 for Database Developers

Each learning product relates in some way to other learning products. A related product may be a prerequisite, a follow-up course, clinic, or workshop in a recommended series, or a learning product that offers additional training.
It is recommended that you take the following learning products in this order:
■ Clinic 2783A, Designing the Data Tier for Microsoft SQL Server 2005
■ Workshop 2784A, Tuning and Optimizing Queries Using Microsoft SQL Server 2005

Other related learning products may become available in the future, so for up-to-date information
about recommended learning products, visit the Microsoft Learning Web site.

Microsoft Learning Information


For more information, visit the Microsoft Learning Web site at
http://www.microsoft.com/learning/.



Session 0: Introduction 9

Microsoft Certification Program


Microsoft Learning offers a variety of certification credentials for developers and IT professionals.
The Microsoft certification program is the leading certification program for validating your
experience and skills, keeping you competitive in today’s changing business environment.

Related Certification Exam


This clinic helps students to prepare for Exam 70-442: PRO: Designing and Optimizing Data Access by Using Microsoft SQL Server 2005.
Exam 70-442 is a core exam for the MCITP: Database Developer certification.

MCP Certifications
The MCP program includes the following certifications.
■ MCITP: Database Developer

Microsoft Certified IT Professional: Database Developer (MCITP: Database Developer) is the premier certification for database designers and developers. This credential demonstrates that you can design a secure, stable, enterprise database solution using Microsoft SQL Server 2005.
■ MCDST on Microsoft Windows

The Microsoft Certified Desktop Support Technician (MCDST) certification is designed for
professionals who successfully support and educate end users and troubleshoot operating system
and application issues on desktop computers running the Windows® operating system.
■ MCSA on Microsoft Windows Server™ 2003

The Microsoft Certified Systems Administrator (MCSA) certification is designed for professionals
who implement, manage, and troubleshoot existing network and system environments based on
the Windows Server 2003 platform. Implementation responsibilities include installing and
configuring parts of systems. Management responsibilities include administering and supporting
systems.
10 Session 0: Introduction

■ MCSE on Windows Server 2003

The Microsoft Certified Systems Engineer (MCSE) credential is the premier certification for
professionals who analyze business requirements and design and implement infrastructure for
business solutions based on the Windows Server 2003 platform. Implementation responsibilities
include installing, configuring, and troubleshooting network systems.
■ MCAD

The Microsoft Certified Application Developer (MCAD) for Microsoft .NET credential is
appropriate for professionals who use Microsoft technologies to develop and maintain department-
level applications, components, Web or desktop clients, or back-end data services, or who work in
teams developing enterprise applications. The credential covers job tasks ranging from developing
to deploying and maintaining these solutions.
■ MCSD

The Microsoft Certified Solution Developer (MCSD) credential is the premier certification for
professionals who design and develop leading-edge business solutions with Microsoft
development tools, technologies, platforms, and the Microsoft Windows DNA architecture. The
types of applications MCSDs can develop include desktop applications and multi-user, Web-
based, N-tier, and transaction-based applications. The credential covers job tasks ranging from
analyzing business requirements to maintaining solutions.
■ MCDBA on Microsoft SQL Server 2000

The Microsoft Certified Database Administrator (MCDBA) credential is the premier certification
for professionals who implement and administer SQL Server databases. The certification is
appropriate for individuals who derive physical database designs, develop logical data models,
create physical databases, use Transact-SQL to create data services, manage and maintain
databases, configure and manage security, monitor and optimize databases, and install and
configure SQL Server.
■ MCP

The Microsoft Certified Professional (MCP) credential is for individuals who have the skills to
successfully implement a Microsoft product or technology as part of a business solution in an
organization. Hands-on experience with the product is necessary to successfully achieve
certification.
■ MCT

Microsoft Certified Trainers (MCTs) demonstrate the instructional and technical skills that qualify
them to deliver Official Microsoft Learning Products through a Microsoft Certified Partner for
Learning Solutions.



Session 0: Introduction 11

Certification Requirements
Requirements differ for each certification category and are specific to the products and job
functions addressed by the certification. To become a Microsoft Certified Professional, you must
pass rigorous certification exams that provide a valid and reliable measure of technical proficiency
and expertise.

Additional Information: See the Microsoft Learning Web site at http://www.microsoft.com/learning/. You can also send e-mail to mcphelp@microsoft.com if you have specific certification questions.

Acquiring the Skills Tested by an MCP Exam


Official Microsoft Learning Products can help you develop the skills that you need to do your job.
They also complement the experience that you gain while working with Microsoft products and
technologies. However, no one-to-one correlation exists between Official Microsoft Learning
Products and MCP exams. Microsoft does not expect or intend for a course or clinic to be the sole
preparation method for passing MCP exams. Practical product knowledge and experience are also
necessary to pass MCP exams.
To help prepare for MCP exams, use the preparation guides available for each exam. Each Exam
Preparation Guide contains exam-specific information such as a list of topics on which you will be
tested. These guides are available on the Microsoft Learning Web site at
http://www.microsoft.com/learning/.



12 Session 0: Introduction

Facilities


Inform the students of class logistics, including class start and end times, break
times, and building hours. Also inform the students of any classroom policies
you might have, such as limitations on cellular telephone usage.
Point out the locations of parking, restrooms, dining facilities, telephones, and
areas where smoking is permitted.
If your facility participates in a recycling program, be sure to encourage
students to recycle accordingly.



Session 0: Introduction 13

About This Clinic


This section provides you with a brief description of the clinic, objectives, and target audience.

Description
The purpose of this one-day clinic is to teach database developers working in enterprise environments how to understand and anticipate the ways in which application developers will access and consume their data. A poor understanding of data access patterns is one of the major reasons that database solutions fail today.

Clinic Objectives
After completing this clinic, you will be able to explain how to:
■ Choose data access technologies and an object model to support an organization's business needs.
■ Design an exception handling strategy.
■ Choose a cursor strategy.
■ Design query strategies using Multiple Active Result Sets (MARS).
■ Design caching strategies for database applications.
■ Design a scalable data tier for database applications.



14 Session 0: Introduction

Audience
The target audience for this clinic is experienced professional database developers who are already proficient in database technologies and in applying them at a professional job-role level. These database developers have experience using previous versions of Microsoft SQL Server or other database technologies, but have not worked with SQL Server 2005.
Most audience members will work for enterprise-level organizations consisting of at least 500
personal computers (PCs) and 100 servers. Typically, their job roles require them to address and
create solutions for all types of enterprise issues, including:
■ Writing Transact-SQL queries.
■ Designing and implementing programming objects.
■ Troubleshooting programming objects.
■ Performing database performance tuning and optimization.
■ Designing databases, at both the conceptual and logical levels.
■ Implementing databases at the physical level.
■ In some cases, designing and troubleshooting the data access layer of the application.
■ Gathering business requirements.



Session 0: Introduction 15

Prerequisites


Clinic Prerequisites
This clinic requires that you meet the following prerequisites:
■ Experience reading user requirements and business-need documents, such as development project vision/mission statements or business analysis reports.
■ Basic knowledge of the Microsoft .NET Framework, .NET concepts, Microsoft ADO.NET, and service-oriented architecture (SOA).
■ Familiarity with the tasks that application developers typically perform.
■ Understanding of Transact-SQL syntax and programming logic.
■ Experience with professional-level database design.
• Specifically, the ability to design a normalized database and knowledge of the trade-offs involved in denormalization and in designing for performance and business requirements.
■ Basic monitoring and troubleshooting skills.
• Specifically, how to use SQL Server Profiler and dynamic management views.
■ Basic knowledge of the operating system and platform. That is, how the operating system integrates with the database, what the platform or operating system can do, and how interaction between the operating system and the database works.
■ Basic knowledge of application architecture. That is, how applications can be designed based on SOA, what applications can do, how interaction between the application and the database works, and how the interaction between the database and the platform or operating system works.



16 Session 0: Introduction

Clinic Outline


Session 1, “Choosing Data Access Technologies and an Object Model,” explains how to choose
data access technologies and an object model to support an organization’s business needs. This
session focuses on methods of accessing data, building a data access layer, designing a data access
layer with SQLCLR, and using data object models for administering SQL Server 2005.
Session 2, “Designing an Exception Handling Strategy,” covers the various types of exceptions that
can occur in a database system, how to capture them, and how to manage them appropriately. In
addition, the session explains how to design strategies for detecting exceptions at the appropriate
layer, and how to log and communicate exceptions according to your business requirements.
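The clinic's own examples use Transact-SQL and ADO.NET; as a language-neutral sketch of the layering idea described above (all names here, such as run_query and DataAccessError, are hypothetical stand-ins), the data access layer can detect a low-level error, log the details for administrators, and rethrow a sanitized exception to the layers above:

```python
import logging

class DataAccessError(Exception):
    """Sanitized exception surfaced to layers above the data tier."""

logger = logging.getLogger("data_tier")

def run_query(sql):
    # Hypothetical stand-in for a real database call that fails.
    raise ConnectionError("login failed for user 'app'")

def get_customer(customer_id):
    # Detect the low-level error at the data access layer, log the
    # details, and rethrow a sanitized exception so that callers never
    # see connection internals such as login names or server addresses.
    try:
        return run_query("SELECT name FROM customers WHERE id = ?")
    except ConnectionError as exc:
        logger.error("Query failed for customer %s: %s", customer_id, exc)
        raise DataAccessError("Customer lookup failed") from exc
```

The design choice illustrated is that each layer catches only the exceptions it can meaningfully handle, and translates the rest into exceptions appropriate to its own level of abstraction.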
Session 3, “Choosing a Cursor Strategy,” explains when cursors are appropriate, and how to use
them to optimize the use of system resources. The main purpose of this session is to discover the
adequate application scope for cursors. The session explains the scenarios in which cursors are
appropriate, considerations for selecting server-side and client-side cursors, and how to use cursors
to optimize the use of system resources.
Session 4, “Designing Query Strategies Using Multiple Active Result Sets,” explains how Multiple
Active Result Sets (MARS) can improve application response time and user satisfaction. The
session describes scenarios in which it might be beneficial to use MARS to combine write and read
operations. The session also covers the locking implications of using MARS and how these locks
affect other transactions.
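As a concrete illustration of how MARS is switched on: in ADO.NET 2.0 it is enabled per connection through the MultipleActiveResultSets connection-string keyword (a sketch; the server and database names below are placeholders):

```
Server=MyServer;Database=AdventureWorks;Trusted_Connection=True;MultipleActiveResultSets=True
```

Without this keyword, a SQL Server 2005 connection permits only one pending request at a time, and attempting to execute a second command while a reader is still open raises an error.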
Session 5, “Designing Caching Strategies for Database Applications,” focuses on how to optimize
system resources by caching data and objects in the appropriate layers. This session explains how
correctly optimizing applications by implementing caching will result in reduced resource
utilization and consequently better system performance. The session also describes how resources
such as memory, physical I/O, and network bandwidth can be optimized by using caching
methodologies.
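The cache-aside idea underlying these strategies can be sketched in a language-neutral way; in this minimal Python sketch (load_from_db is a hypothetical stand-in for a database query), reads are served from an in-process cache with a time-to-live, so repeated lookups avoid database round trips at the cost of potentially stale data:

```python
import time

class TtlCache:
    """A minimal cache-aside store: entries expire after ttl_seconds."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get_or_load(self, key, loader):
        """Return a cached value, or call loader(key) and cache the result."""
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]          # cache hit: no database round trip
        value = loader(key)          # cache miss: fetch from the data source
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

# Hypothetical loader standing in for a database query; it records
# each invocation so the saved round trips are visible.
calls = []
def load_from_db(key):
    calls.append(key)
    return key.upper()

cache = TtlCache(ttl_seconds=60)
cache.get_or_load("product", load_from_db)   # miss: hits the "database"
cache.get_or_load("product", load_from_db)   # hit: served from memory
```

Choosing the time-to-live is the central trade-off: a longer TTL saves more memory, I/O, and network bandwidth, while a shorter TTL reduces the window in which stale data can be returned.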
Session 6, “Designing a Scalable Data Tier for Database Applications,” describes how to assess
scalability needs and design the best architecture to scale the system to meet the needs of your
users. The session explains how to identify when to scale database applications and what layer to
scale, as well as how to select the appropriate technology to avoid concurrency problems and

improve application performance. In addition, the session covers how to evaluate whether scale-out
or scale-up is appropriate for the scalability requirements of your database system.
Session 0: Introduction 17

Introduction to the Workshop Business Scenario


This topic introduces a fictitious business scenario, your role in that scenario, and one potential
solution to the business problem presented in the scenario. Your instructor will demonstrate the
solution, which you will be able to create upon completion of this workshop.

Introduction to Adventure Works Cycles


Adventure Works Cycles is a large multinational manufacturing company that is subject to the Sarbanes-Oxley Act. The company manufactures and sells metal and composite bicycles to
North American, European, and Asian commercial markets. Although its base operation is located in
Bothell, Washington, with 290 employees, several regional sales teams are located throughout the
company’s market base. Branch sales offices are located in Barcelona and Hong Kong, and
manufacturing is located in a wholly owned subsidiary in Mexico.

Your role in Adventure Works Cycles


Throughout this workshop, you will play the role of a lead database designer at Adventure Works Cycles. You will perform database designer tasks based on the instructions and specifications given to you by the company's management team. Your assignment is to design the data tier of the company's SQL Server 2005 implementation for maximum reliability and security.

Demonstration
Your instructor will demonstrate the solution to the business problem in this workshop. The
solution that you ultimately create should be similar to the solution that you see in the
demonstration.



THIS PAGE INTENTIONALLY LEFT BLANK



Session 1: Choosing Data Access
Technologies and an Object Model

Contents
Session Overview 1
Section 1: Introduction to Data Access
Technologies 2
Section 2: Choosing Technologies for
Accessing Data 8
Section 3: Building a Data Access Layer 32
Section 4: Designing Data Access from
SQLCLR Objects 44
Section 5: Available Data Object Models for
Administering SQL Server 56
Next Steps 68
Discussion: Session Summary 69





Session Overview


Database applications can be used to access data stored in a database or to provide management
access to database systems. With knowledge of various data access technologies, you can select the
appropriate technologies and best features to develop efficient and manageable database
applications.
Developers often continue using the same data access technologies that they have used to develop
database applications in the past, because they are unaware of newer technologies that could better
serve their development needs. Using inappropriate data access technologies results in inefficient
database applications.
This session focuses on data access technologies and explains methods of accessing data, building a
data access layer, designing a data access layer with SQLCLR, and using data object models for
administering Microsoft® SQL Server™ 2005.

Session Objectives
„ Describe a typical database system and the role that data access technologies play in that system.
„ Select appropriate technologies for accessing data stored in SQL Server 2005.
„ Explain how to build a data access layer.
„ Explain how to design SQL Server objects that use the in-process data provider.
„ Describe the data object models for administering SQL Server 2005 components and objects.


Section 1: Introduction to Data Access Technologies


Section Overview
There are two types of database applications: applications that obtain data from and store data in the
database by using data access components, and administrative tools, based on well-defined object
models, that provide the functionality required to administer a database system. Your approach
toward each type of database application will affect how these applications are used, their
performance, and their maintenance.
This section introduces you to the available data access technologies, discusses where each
technology is appropriately used, and lists sources of information on these technologies.
In this section, you will learn about the two types of database applications. You will also learn
about the various components of the data access system and the architecture of the data access
components and libraries. Additionally, you will learn about the sources where you can find
information and the available documentation describing the functionality of data access
technologies.

Section Objectives
„ Explain the various types of database applications.
„ Explain how the various components of a data access system interact with each other.
„ Explain where you can find information and documentation about data access technology.


Discussion: Database Application Types


Introduction
Database applications serve many purposes, such as managing data and providing administrative
control to the database system. In many cases, database applications are clearly oriented towards
either data management or system administration. However, it is common for database applications
to provide a mixture of these two completely different types of tasks.

Discussion Questions
1. What is a database application?
2. Is SQL Server Management Studio a database application?
3. Is Windows Explorer a database application?
4. Is a database application confined to a database server?
5. Should database applications be developed exclusively by client-side developers?
6. Should database applications be server-independent?


Data Access System Components


Introduction
To interact with a data source, every application requires a data access system. All applications,
from the simplest to a complex distributed enterprise application, require the same type of data
access components.
A simple database application can be a query tool that runs commands directly against a database
server. Such an application is typically a client application. An example of a simple database
application is the sqlcmd utility running on a desktop computer, connected to the database server
through a specific network library, such as SQL Native Client (SQLNCLI).
A complex distributed application can be a standard business application, such as a customer
relationship management (CRM) application or an accounting application. These applications can
be developed using data access components. You can create a client application that uses a data
access interface, such as ADO.NET, with a data source provider such as SQLNCLI to execute
queries against a remote database server.
In this topic, you will learn about the various components of the data access system.

Data Access System Components


Data access system components can be classified as one of the following:
„ Server-side components, which run on the server and manage requests from other computers.
Examples of server-side components include network libraries installed with SQL Server and T-
SQL Endpoints, which are assigned in the server to listen to requests from client applications.
„ Client-side components, which send requests from the database application to the server
components, and from the server components to the database applications. Examples of client-side
components include a presentation layer with user interface components, user interface process
components, and a data access layer that is composed of data access components and runs on the
client computer.


How Database Components Interact


The following steps explain how the database components interact with each other.
1. The client application implements a data access layer for manageability purposes.
2. The data access layer uses a generic data access application programming interface (API), such
as ADO.NET, to interact with the remote data source.
3. The API uses a specific data access provider component to interact with the programming API of
the remote data source.
4. The data access provider interacts with the physical network and the network’s communication,
transactional, and security protocols, and data serialization formats to communicate with the remote
data source.
5. The data source interprets the request; executes an action, which might be a data retrieval,
creation, update, or deletion; and returns the necessary results through the same channels and
objects through which the input message was sent.
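The steps above can be sketched as a single layered call. The sketch below uses Python's standard-library sqlite3 module purely as a stand-in for the provider and data source (SQL Server and ADO.NET are not available in a self-contained example); the table and query names are illustrative, not from the course.

```python
import sqlite3

# Illustrative stand-in: sqlite3 plays the role of the data access
# provider and data source; the function plays the data access layer.
def data_access_layer(query, params=()):
    # Steps 1-2: the application calls the data access layer, which uses
    # a generic API (here, Python's DB-API) to reach the data source.
    conn = sqlite3.connect(":memory:")  # steps 3-4: provider opens a channel
    try:
        conn.execute("CREATE TABLE Product (ID INTEGER, Name TEXT)")
        conn.execute("INSERT INTO Product VALUES (1, 'Road Bike')")
        # Step 5: the data source interprets the request and returns results
        # through the same channel the request arrived on.
        return conn.execute(query, params).fetchall()
    finally:
        conn.close()

rows = data_access_layer("SELECT Name FROM Product WHERE ID = ?", (1,))
print(rows)  # [('Road Bike',)]
```

The point of the sketch is that the application sees only the data access layer's interface; the provider, network, and data source details stay hidden behind it.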

How Database Applications Are Built


In addition to learning data access system components, you must also know how a database
application is built. Building a database application as a multi-tiered application with the following
layers is recommended:
„ Presentation tier that interacts with the end-user through a user interface
„ Business tier that enforces business processing rules and business workflows
„ Data access tier that translates the logical data representation into physical schemas and enforces
data integrity

Additional Information T-SQL Endpoints, which replace the legacy open data services
(ODS) interface, are TCP endpoints that SQL Server creates to listen to requests from client
applications. Rather than using the standard endpoint created by default, you can create
multiple T-SQL Endpoints to which applications can connect, and these endpoints can be
assigned specific permissions.
For more information on T-SQL Endpoints, refer to the following resources:
• “Hardware and Software Requirements for Installing SQL Server 2005” from
Books Online
• “Net-Libraries and Network Protocols” from Books Online, the online product
documentation for SQL Server 2005.


Demonstration: Searching for Data Access Technology Information and Documentation


Introduction
During this demonstration, your instructor will point out important sources of information about
SQL Server 2005, including Books Online, the online product documentation for SQL Server 2005.
These sources of information are dynamically updated to provide up-to-date support for SQL
Server 2005.

Demonstration Overview
In this demonstration, your instructor will illustrate how to navigate to various sources of
information about data access technologies.
It is important to understand that Books Online (BOL) is not the only source of information about
SQL Server 2005. In fact, this demonstration will show other important, dynamically updated
sources of information. These sources provide up-to-date support for SQL Server 2005 and provide
more information than BOL can offer.

Task 1: Navigating to Various Sources of Information about Data Access Technology
Task Overview
This task illustrates how to navigate to various sources of information about data access
technologies.
To navigate to various sources of information about data access technology, perform the following
steps.


1. To access official SQL Server documentation, click Start, point to All Programs, point to
Microsoft SQL Server 2005 CTP, point to Documentation and Tutorials, and click SQL Server
Books Online.
The SQL Server Books Online window appears.
2. Click Start, point to All Programs, point to Microsoft SQL Server 2005 CTP, point to
Documentation and Tutorials, point to Tutorials, and then click SQL Server Tutorials.
3. In Windows Explorer, navigate to C:\Program Files\Microsoft SQL Server\90\Samples.
4. Click Start, point to Programs, point to Microsoft Visual Studio 2005 Beta 2, and then click
Microsoft Visual Studio 2005 Documentation.
5. Start Microsoft Internet Explorer.
6. Browse to http://msdn.microsoft.com.
7. Browse to http://msdn.microsoft.com/sql.
8. Browse to http://msdn.microsoft.com/practices.
9. Browse to http://technet.microsoft.com.
10. Browse to http://www.microsoft.com/sql/default.mspx.
11. Browse to http://blogs.msdn.com/.
12. Browse to http://msdn.microsoft.com/newsgroups/.


Section 2: Choosing Technologies for Accessing Data


Section Overview
Database applications can be designed to access data by using various available data access
technologies. To make an informed decision about selecting the technology that best matches your
design, development, cost, and performance expectations, you should be aware of these
technologies. This knowledge also enables you to design and develop robust and efficient database
applications.
In this section, you will learn about various data access technologies, considerations for using
legacy and new data access technologies, and how to improve database application maintenance.
You will also learn how to use Hypertext Transfer Protocol (HTTP) Endpoints and SOAP to access
data, and the considerations and methods for connecting SQL Server to other data stores.

Section Objectives
„ Explain the various data access technologies.
„ Describe scenarios in which using earlier technologies to access data is appropriate.
„ Describe scenarios in which using SQL Native Client (SNAC) to access data is appropriate.
„ Explain how to migrate an existing C++ component that uses Open Database Connectivity
(ODBC) to a component that uses SNAC.
„ Apply the guidelines for accessing data by using ADO.NET.
„ Explain how to improve the maintenance of database applications by managing connection strings
appropriately.
„ Apply the guidelines for accessing data by using HTTP Endpoints and SOAP.
„ Evaluate the considerations for connecting SQL Server to other data stores.
„ Explain how to connect SQL Server to other data stores and the various ways to connect.


Data Access Technologies


Introduction
Data access technologies enable database applications to connect to and obtain data from databases.
Database developers can use many different data access technologies to enable this functionality.
Understanding the architecture of data access technologies and how their components communicate
with each other will help you both select the data access technology that meets your application
development requirements and develop efficient client-side database applications.
This topic explains the architecture of data access technologies. It also discusses how server and
client components of data access technologies communicate. In addition, this topic describes the
evolution of these technologies from proprietary protocols to open client libraries.

Architecture of Data Access Technologies


The architecture of data access technologies consists of three main sets of components:
„ Databases, which store data and use server-side components to provide connectivity to database
applications and components.
„ Providers, which establish communication between a database server and client components.
„ Client components, a set of objects that enable a client application to connect to and interact with a
database. Client components act as data transformers and local repositories. To create client-side
database applications, you can use various data access technologies. Unmanaged client
technologies include Jet, ActiveX® Data Objects (ADO), ODBC, and OLE DB.
Database access applications that are based on managed code use .NET Framework data access
providers.

SQL Native Client (SQLNCLI), a new technology implemented by SQL Server 2005, is a library
providing full access to SQL Server databases, including the new functionalities of SQL Server
2005, through the same programming interfaces as ODBC and OLE DB. Because SQLNCLI is

specific to SQL Server, there is no need to use an OLE DB provider or an ODBC driver to add
another layer. The communication is from the SQLNCLI directly to the SQL Server database.
The network client library connects to a database server through a server network library, which is
accessed internally from a T-SQL Endpoint.
A client always communicates with a database through the data access providers, and the database
application does not need to be involved in the implementation details of this communication.


Considerations for Using Earlier Technologies to Access Data


Introduction
Depending on their level of complexity and their feature set, it might be appropriate to use legacy
data access technologies for specific applications. To select the best legacy data access technology
and make proper design decisions, you must be aware of the capabilities, limitations, and
performance levels of each technology.

DBLibrary
Legacy SQL Server database applications use DBLibrary to connect to SQL Server. DBLibrary, a
client interface that Microsoft inherited from Sybase, interacts with SQL Server databases.
DBLibrary exposes a set of commands and functions that can be called by C code to perform
operations on and extract results from a database.
Following are considerations for using DBLibrary to access data:
„ Deprecated feature: DBLibrary was designed to work with SQL Server 6.5 and earlier versions. It
is considered a deprecated feature and will be removed in future releases. Therefore, it should not
be used for new projects.
„ No support for SQL Server versions 7.0 and above: DBLibrary does not expose new
functionalities provided by SQL Server versions 7.0 and above.


ODBC
Following are considerations for using ODBC to access data:
„ Well-established industry standard: Many database applications use ODBC to connect to relational
database systems. In fact, many high performance database components that require extreme
connectivity performance to SQL Server 2000 are still developed in the C language using ODBC
natively.
„ Availability of drivers: Developers implement the functions in the ODBC API through DBMS-
specific drivers. Applications call the functions in these drivers to access the data stored in any
database management system (DBMS). A driver manager manages communication between
applications and drivers. Microsoft provides driver managers for computers running Microsoft®
Windows® 95 and later versions, as well as ODBC drivers for its own DBMSs. However, most
available ODBC applications and drivers are developed by other vendors, such as IBM and
Oracle.
„ ODBC Control Panel Data Source Names (DSNs) and DSN-less connections: The driver manager
exposes itself through an extension for the ODBC Control Panel to define different connection
definition names or DSNs to a user or an administrator. However, applications using ODBC can
create DSN-less connections as necessary.

Additional Information The connection definitions can be stored in the following locations:
„ The registry:
• System DSNs. Stored in the HKLM portion of the registry; available to all users of the
computer.
• User DSNs. Stored in the private per-user HKU portion of the registry; available only to
the user who created them.
„ The file system:
• File DSNs. Files that contain the ODBC connection information; these can be copied
from one computer to another or stored in a central location available to all computers.


OLE DB
Following are considerations for using OLE DB to access data:
„ Additional functionality beyond ODBC: Many database applications developed with Microsoft
Visual Basic® 6.0 or later versions use ADO and OLE DB to connect to SQL Server and other
database systems. OLE DB has been designed to access data stores that are beyond the typical
relational store. This makes it more suitable for database applications accessing other data sources
such as e-mail stores.
„ Data providers: OLE DB is a set of COM-based interfaces that expose data from a variety of
sources called OLE DB data providers. These data providers include Microsoft DBMSs and other
DBMSs such as Oracle OLE DB provider and Interbase OLE DB provider. OLE DB interfaces
provide applications with uniform access to data stored in diverse information sources or data
stores, giving developers better and standard programming interfaces. Because OLE DB is based
on a Component Object Model (COM) implementation, it exposes all its functionality as a
collection of objects, thereby exposing properties and simple methods for each required action
instead of the C-based, call-level interface exposed by ODBC drivers.
„ Universal Data Link (UDL): To simplify the creation of connection strings used by OLE DB, you
can create file data sources called UDLs directly in Windows Explorer. However, as with ODBC,
you can create connections as necessary without using these UDL files. Most applications using
OLE DB do so using ADO, because ADO exposes a simpler object model than OLE DB does.

SQLXML
Following are considerations for using SQLXML to access data:
„ URL-based access to SQL Server: By using SQLXML, formerly known as SQL Server Web
Release, you can obtain URL access to SQL Server through a virtual server based on a specific
Internet Server Application Programming Interface (ISAPI) extension. This extension handles
communication with SQL Server.
„ SQLXML templates: There is a risk of structured query language (SQL) injection during URL-
based access to SQL Server. To avoid this security threat, SQLXML enables creation of
SQLXML templates containing the T-SQL syntax for queries, which are often parameterized and
restrict the type of queries that users can send through the URL.
„ Deprecated feature: In SQL Server 2005, SQLXML is replaced by HTTP Endpoints. SQL Server
also provides new XML support.

Following is an example of an XML template containing a simple query.


<ROOT xmlns:sql="urn:schemas-microsoft-com:xml-sql">
  <sql:query>
    SELECT TOP 2 CustomerID, CompanyName
    FROM Customers
    FOR XML AUTO
  </sql:query>
</ROOT>
This template, saved as File1.xml in a virtual directory called nwind defined in the server running
Internet Information Services (IIS), can be executed using the following URL:
http://IISServer/nwind/template/File1.xml
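The injection risk that templates mitigate comes from concatenating user input directly into the query text; a parameterized template treats the input strictly as a value. The contrast can be shown with Python's standard-library sqlite3 module standing in for the database (the table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID TEXT, CompanyName TEXT)")
conn.execute("INSERT INTO Customers VALUES ('ALFKI', 'Alfreds Futterkiste')")

user_input = "ALFKI' OR '1'='1"   # a typical injection attempt

# Unsafe: concatenating the input changes the meaning of the query,
# because the OR clause becomes part of the SQL text.
unsafe = conn.execute(
    "SELECT CompanyName FROM Customers WHERE CustomerID = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query (the idea behind parameterized templates)
# binds the input as a single literal value.
safe = conn.execute(
    "SELECT CompanyName FROM Customers WHERE CustomerID = ?", (user_input,)
).fetchall()

print(unsafe)  # the OR clause matched every row
print(safe)    # no customer has that literal ID
```

This is why restricting URL access to predefined, parameterized templates closes the door that free-form URL queries leave open.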


Considerations for Using SQL Native Client to Access Data


Introduction
Some applications require database operations to be executed with minimum overhead. These
applications can achieve this by using ADO.NET and OLE DB, but these data access APIs
introduce extra programming layers that can be optimized only to a certain point. Such applications
are developed in the C language using ODBC drivers or OLE DB providers natively. However, the
current ODBC drivers cannot access the new functionality of SQL Server 2005.
To use the new functionality provided by SQL Server 2005 and provide fast access to data, you can
use SNAC through the familiar ODBC or OLE DB programming interfaces.

Accessing Data Using SNAC


SNAC is an API included with SQL Server 2005. It is specifically designed to simultaneously
expose the new features of SQL Server 2005 and maintain backward compatibility with earlier
versions. Understanding the features of SNAC will help you decide which of its special features
you require, when you will need backward compatibility, and how to use this technology
efficiently.

Differences with Microsoft Data Access Components


Microsoft Data Access Components (MDAC) is part of the operating system and is updated through
operating system service packs.
MDAC contains components for using OLE DB, ODBC, and ADO. SNAC implements only OLE
DB and ODBC, although ADO can access the functionality of SNAC.
SNAC is not part of MDAC, but it exposes most of the same functionality and will provide full
compatibility with the new version of MDAC to be released after Windows XP Service Pack 2
(SP2) and Microsoft Windows Server™ 2003 Service Pack 1 (SP1).


Co-Existence Issues with MDAC


There are no known co-existence issues between MDAC and SNAC if they are installed and run on
the same computer. They are designed to be fully compatible.

Functionality Added to ADO.NET by SNAC


The following list details the functionality added to ADO.NET by SNAC.
„ User-Defined Types: Extend the SQL type system by enabling you to store objects and custom
data structures in a SQL Server 2005 database.
„ XML Data Type: A SQL Server 2005 XML-based data type that can be used as a column type,
variable type, parameter type, or function return type.
„ Large Value Types: Large object (LOB) data types are supported in SQL Server 2005 with the
introduction of the max modifier for the varchar, nvarchar, and varbinary data types, enabling
storage of values as large as 2^31-1 bytes.
„ Snapshot Isolation: Support for snapshot isolation in SQL Server 2005 enhances concurrency
for Online Transaction Processing (OLTP) applications by avoiding read/write blocking
scenarios.
„ Multiple Active Result Sets (MARS): Enables the execution of multiple active result sets using
a single connection to a SQL Server 2005 database.
„ Password Expiration: Enhances the handling of expired passwords so that passwords can be
changed on the client without administrator involvement.
„ Asynchronous Operations: Enables methods to return immediately without blocking the calling
thread, providing much of the power and flexibility of multithreading.

Benefits of Using SNAC Instead of ADO.NET


The benefits provided by SNAC can be used by applications that need to:
„ Use the special features offered by SNAC.
„ Store user-defined data types for special information.
„ Use XML data types.
„ Use MARS and asynchronous operations.

Note SQL Server 2005 uses SNAC to expose its new functionalities and administrative tools.


Using SQLXML
SQLXML 4.0, included with SQL Server 2005 and Microsoft Visual Studio® 2005, enables you to
use the new capabilities of SNAC. Using SQLXML, you can implement XML formatting in the
middle tier instead of overloading the computer running SQL Server to perform the task.

Including the SNAC Header File


To include the SNAC header file, you must use an #include directive in the C or C++ programming
code. This header file can be compiled only with Microsoft Visual C++® 7.0 or later versions.
OLE DB
To include the SNAC header file in an OLE DB application, use the following line of code.
#include "sqlncli.h"

When creating a connection to a data source through SNAC, use SQLNCLI as the provider name.
ODBC
To include the SNAC header file in an ODBC application, use the same line of programming
code.
#include "sqlncli.h"

When creating a connection to a data source through SNAC, use SQL Native Client as the driver
name string.

Support for UDT and XML


SNAC has full support for user-defined types (UDTs), enabling you to obtain and insert custom
common language runtime (CLR) objects and structures directly inside records in a database. This
level of integration can be very useful, but it can also present performance and compatibility issues.
However, XML formatting is handled by SQLXML in the middle tier rather than by SNAC itself.


Demonstration: Migrating a C++ Component from ODBC to SNAC


Introduction
You can retrieve information from a database through a standard C++ application that uses ODBC
natively. To retrieve information, implement an ODBC connection string component in the
application.
Using a SNAC connection string component in a C++ application is an alternative and efficient
method of retrieving information from a database. To migrate from using an ODBC component to a
SNAC component, you only need to modify some lines of C++ code.

Demonstration Overview
In this demonstration, your instructor will illustrate how to migrate an existing C++ component that
uses ODBC to a component that uses SNAC.

Task 1: Migrating a C++ Component from ODBC to SNAC


Task Overview
This task illustrates how to migrate an existing C++ component that uses ODBC to a component
that uses SNAC.
To migrate a C++ component from ODBC to SNAC, perform the following steps.
1. Browse to Democode\Section02\compute, open the compute.sln file, and show the code of the
compute.cpp module.
2. To go to line 22, press CTRL+G, and then type 22 in the Go to Line dialog box.
3. To run the application, press F5, and then show the results in the console window.
4. Go to line 22, and comment it out by typing the / character twice (//) at the beginning of the line.


5. Remove the comment in line 23 by removing the // characters.


6. To start Solution Explorer, press CTRL+ALT+L. Expand the project tree and the headers files
under it to show the sqlncli.h file.
7. Press F5 again to show the application running.
8. Close the Microsoft Visual Studio 2005 Beta 2 environment.


Considerations for Using ADO.NET to Access Data


Introduction
ADO.NET introduced a new way of working with data by splitting connected and disconnected
operations. This is done by defining two distinct models: the connected model, which is provider-
dependent, and the disconnected model, which is not linked to any particular data provider.
To design .NET Framework data applications, you must know about these models and the features
that they provide.

Connected versus Disconnected Models


ADO.NET uses the disconnected model as the primary way to manage data. ADO.NET uses the
connected model only when you use the DataReader object to obtain information or when
ADO.NET uses a SqlCommand object to execute queries in SQL Server.
When you use the connected model, you have direct access to a database. This enables you to
obtain the most up-to-date data because the queries retrieve the latest version of the rows, unless
you determine other isolation mechanisms. However, this consumes resources on three levels:
„ Client level: To keep the connection open and alive.
„ Network bandwidth level: To send and receive messages from the server.
„ Server level: SQL Server uses some amount of memory for each open connection.

Using the disconnected model, you request the database server to retrieve a set of information. SQL
Server sends a stream of data to ADO.NET, which fills a DataSet or a DataTable, and then closes
the connection, freeing the server resources. This reduces the network traffic and resource
utilization on the server side but needs more client resources, mostly memory resources, to keep a
copy of the data inside the application. You must also keep track of the data version when you
update it.
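The contrast between the two models can be sketched as follows; the instance, database, and table names are illustrative:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class Models
{
    static void Main()
    {
        string connStr = @"Data Source=MIA-SQL\SQLINST1;" +
            "Initial Catalog=AdventureWorks;Integrated Security=SSPI";

        // Connected model: the connection stays open while the
        // DataReader streams rows from the server.
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT TOP 5 Name FROM Production.Product", conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }

        // Disconnected model: Fill opens the connection, copies the
        // rows into a DataTable, and closes the connection again.
        DataTable products = new DataTable();
        SqlDataAdapter da = new SqlDataAdapter(
            "SELECT Name FROM Production.Product", connStr);
        da.Fill(products);
        Console.WriteLine("{0} rows cached locally", products.Rows.Count);
    }
}
```

After Fill returns, the application works entirely against the local DataTable; no server resources are held until changes are sent back.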


ADO.NET and SQLCLR Objects


In SQL Server 2005, you can create CLR objects inside the database. CLR objects can use
ADO.NET to access information in different databases. This provides you with a valid way to
combine information from different servers. To accomplish this, you must make sure that the
connection established from a SQLCLR object has enough permissions to perform the required
actions in the remote databases.

The DataAdapter Object


The DataAdapter, included in the ADO.NET object model, encapsulates in a single object all the
functionality required to manipulate information in a database.
The DataAdapter object uses explicit or implicit SqlCommand objects to perform SELECT,
INSERT, UPDATE, and DELETE operations on a database as queries, or through specific stored
procedures through a connection to the computer running SQL Server.
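As a sketch of this behavior, the following fragment supplies an explicit SELECT command and lets a SqlCommandBuilder derive the implicit INSERT, UPDATE, and DELETE commands (the instance and table names are illustrative):

```csharp
using System.Data;
using System.Data.SqlClient;

SqlConnection conn = new SqlConnection(
    @"Data Source=MIA-SQL\SQLINST1;Initial Catalog=AdventureWorks;" +
    "Integrated Security=SSPI");

SqlDataAdapter da = new SqlDataAdapter(
    "SELECT ContactID, FirstName, LastName FROM Person.Contact", conn);

// Derives the implicit INSERT/UPDATE/DELETE commands
// from the SELECT statement above.
SqlCommandBuilder builder = new SqlCommandBuilder(da);

DataTable contacts = new DataTable();
da.Fill(contacts);                       // retrieve
contacts.Rows[0]["LastName"] = "Smith";  // modify locally
da.Update(contacts);                     // send the change back
```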

ADOMD.NET
ADOMD.NET is a standard .NET data provider designed to communicate with multidimensional
data sources, such as SQL Server 2005 Analysis Services.
ADOMD.NET uses XML for Analysis version 1.1 to communicate with multidimensional data
sources and can also use Transmission Control Protocol/Internet Protocol (TCP/IP) HTTP streams
to transmit and receive XML-compliant SOAP requests and responses for analysis specification.
ADOMD.NET exposes objects similar to those in ADO.NET, such as AdomdConnection,
AdomdCommand, AdomdDataReader, and AdomdDataAdapter.
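A minimal ADOMD.NET query might look like the following sketch; the server, cube, and measure names are assumptions, not part of the course files:

```csharp
using System;
using Microsoft.AnalysisServices.AdomdClient;

class Mdx
{
    static void Main()
    {
        using (AdomdConnection conn = new AdomdConnection(
            "Data Source=MIA-SQL;Catalog=Adventure Works DW"))
        {
            conn.Open();
            // MDX query sent through an AdomdCommand, mirroring
            // the SqlCommand pattern in ADO.NET.
            AdomdCommand cmd = new AdomdCommand(
                "SELECT [Measures].[Internet Sales Amount] ON COLUMNS " +
                "FROM [Adventure Works]", conn);
            using (AdomdDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetValue(0));
            }
        }
    }
}
```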

Compact Framework Data Providers


The .NET Compact Framework provides two data providers for accessing SQL Server databases:
System.Data.SqlClient and System.Data.SqlServerCe. Each exposes similar functionality, but they
differ in the destination of the calls.
System.Data.SqlClient in the Compact Framework requires a network connection to a computer
running SQL Server and provides the same functionality as its counterpart in the full .NET
Framework.
System.Data.SqlServerCe is used for programming applications that are semi-connected to, or
disconnected from, databases managed by SQL Server 2005 Mobile Edition on the device. This
provider ensures that any call to one of the objects of this namespace uses as few resources as
possible, because limited memory is available to mobile device applications.
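For comparison, a local Mobile Edition database is opened through System.Data.SqlServerCe like this; the .sdf path and table name are illustrative:

```csharp
using System.Data.SqlServerCe;

// Local database file deployed with the mobile application.
using (SqlCeConnection conn = new SqlCeConnection(
    @"Data Source=\My Documents\Orders.sdf"))
{
    conn.Open();
    SqlCeCommand cmd = new SqlCeCommand(
        "SELECT COUNT(*) FROM Orders", conn);
    int pending = (int)cmd.ExecuteScalar();
}
```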


Demonstration: Managing Connection Strings


Introduction
Creating and managing connection strings is an important issue for any database application. When
this seemingly simple task is not performed appropriately, it becomes difficult to update
applications when the target database needs to be changed.

Demonstration Overview
In this demonstration, your instructor will illustrate different ways of managing the database
connection strings and how to improve the availability and security of a connection string.

Task 1: Exploring ODBC Administrator


Task Overview
This task illustrates how to use ODBC Administrator and how to create a File DSN connection.
To explore ODBC Administrator and create a file DSN connection, perform the following steps.
1. To start ODBC Data Source Administrator, click Start, point to Programs, point to
Administrative Tools, and then click Data Sources (ODBC).
2. Show the two first tabs in the ODBC Data Source Administrator dialog box.
3. Click the File DSN tab, and then click Add.
4. Select SQL Server as the selected driver, and then click Next.
5. Specify a name for the connection, and then click Next.
6. To save the connection, click the Finish button.
7. From the Server list, select MIA-SQL\SQLINST1, and then click Next.


8. On the Create a New Data Source to SQL Server screen, accept the default setting Windows
NT Authentication, and then click Next.
9. Select the Change the default database to check box, choose AdventureWorks from the
database list, and then click Next.
10. Click Finish, and then click the Test Data Source button. Notice the message Tests Completed
Successfully! Click OK.
11. Click OK to close the window.
12. Open Windows Explorer, and then browse to C:\Program Files\Common
Files\ODBC\Data Sources.
13. Open the file created with the specified name for the DSN connection. Right-click the file, click
Open With, and then click Select the program from a list. In the Open With dialog box, select
Notepad, and then click OK to show the content of the file.
14. Close Windows Explorer.
15. Close Notepad.

Task 2: Setting up a Connection String from the .NET Framework


Task Overview
This task illustrates how to set up a connection string from the .NET Framework.
To set up a connection string from the .NET Framework, perform the following steps.
1. Browse to D:\Democode\Section02, and open the 2783M1D2 solution.
2. On the Data menu, click Add New Data Source.
3. In the Data Source Configuration Wizard, select Database, and then click Next.
4. Click the New Connection button.
5. In the Server name box, type MIA-SQL\SQLINST1, or the instance to be used.
6. Accept the default settings on this screen, and then select AdventureWorks as the primary
database.
7. To check the connection, click the Test Connection button. The message “Test connection
succeeded” will appear. Click OK.
8. Accept the default settings on this screen, and then click Next.
9. In the Choose your Database objects dialog box, click the Finish button. In the confirmation
message box, click Yes.
10. In Solution Explorer, right-click the project, and then click Properties.
11. To show the connection string, click the Settings tab on the left.
12. To edit the app.config file, in the Solution Explorer window, double-click the app.config file.
13. Show that the connection string is stored as clear text inside the ConnectionStrings section.
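At run time, the application can read that stored string through ConfigurationManager; the setting name below stands in for whatever name the wizard generated:

```csharp
using System.Configuration;  // requires a reference to System.Configuration.dll

// "AppConnectionString" is a placeholder for the name the wizard
// wrote into the connectionStrings section of app.config.
string connStr = ConfigurationManager
    .ConnectionStrings["AppConnectionString"].ConnectionString;
```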


Task 3: Managing Connection Strings to Be Used Centrally and Securely by Different Applications and Computers
Task Overview
This task shows how to manage connection strings that will be used centrally and securely by
different applications and computers.
To manage connection strings that will be used centrally and securely by different applications and
computers, perform the following steps.
1. Start the registry editor.
2. In Windows Explorer, browse to C:\Windows\Microsoft.NET\Framework\v2.0.50215\CONFIG.
3. Double-click the machine.config file to show its contents.

Task 4: Building Connection Strings That Support Failover Servers


Task Overview
This task illustrates how to build connection strings that support failover servers.
To build connection strings that support failover servers, perform the following steps.
1. View the 2783M1D2 solution.
2. View the connection string in the app.config file.
3. At the end of the connection string, add the following setting:
Failover Partner=<server_name>;
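Assembled in full, a connection string that names a mirroring failover partner might look like the following; MIA-SQL\SQLINST2 stands in for whatever instance hosts the mirror:

```
Data Source=MIA-SQL\SQLINST1;Failover Partner=MIA-SQL\SQLINST2;Initial Catalog=AdventureWorks;Integrated Security=SSPI;
```

If the principal instance becomes unavailable, the SqlClient provider retries the connection against the failover partner automatically.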

Task 5: Determining When to Use Connection Objects and Connection Strings in ADO.NET Overloaded Methods
Task Overview
This task illustrates when to use connection objects and when to use connection strings in
ADO.NET overloaded methods.
To determine when to use connection objects and connection strings in ADO.NET overloaded
methods, perform the following steps.
1. In the Visual Studio project used in Task 2, 2783M1D2, double-click the form to access the form
code in the form_Load method.
2. Type the declaration Dim da as New System.Data.SqlClient.SqlDataAdapter, and show the
students the overloaded methods of the DataAdapter’s constructor.
3. Observe the third overload, which requires a select command string and a connection string, and
compare it to the fourth overload, which accepts a connection object as the second argument.
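The difference between those two overloads can be sketched as follows; passing a SqlConnection object lets several adapters share one connection, while passing a string makes the adapter manage its own:

```csharp
using System.Data.SqlClient;

string query = "SELECT Name FROM Production.Product";
string connStr = @"Data Source=MIA-SQL\SQLINST1;" +
    "Initial Catalog=AdventureWorks;Integrated Security=SSPI";

// Third overload: SELECT text plus a connection string.
// The adapter creates and manages its own connection.
SqlDataAdapter da1 = new SqlDataAdapter(query, connStr);

// Fourth overload: SELECT text plus an existing connection
// object that other commands and adapters can share.
SqlConnection conn = new SqlConnection(connStr);
SqlDataAdapter da2 = new SqlDataAdapter(query, conn);
```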


Considerations for Using HTTP Endpoints and SOAP to Access Data


Introduction
The Internet has become an important medium for accessing information. Many important features
are available to help you create database applications based on Internet protocols and standards.
This topic discusses the possibility of using HTTP Endpoints and SOAP to access data through the
Internet, and the advantages and disadvantages of using each.

Using HTTP Endpoints to Expose a SQL Server Front End Server


To expose a SQL Server as a Web service, you must define an HTTP Endpoint. The CREATE
ENDPOINT statement is used to create HTTP and TCP endpoints that will be used by:
„ SOAP
„ T-SQL
„ Service Broker
„ Database Mirroring
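As a sketch, a SOAP endpoint that exposes one stored procedure might be created as follows; the endpoint, site, and procedure names are illustrative:

```sql
CREATE ENDPOINT sql_soap_endpoint
    STATE = STARTED
AS HTTP (
    PATH = '/sql',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'MIA-SQL'
)
FOR SOAP (
    -- Expose one stored procedure as a Web method
    WEBMETHOD 'GetProducts'
        (NAME = 'AdventureWorks.dbo.uspGetProducts'),
    WSDL = DEFAULT,
    DATABASE = 'AdventureWorks',
    BATCHES = DISABLED
);
```

With WSDL = DEFAULT, SQL Server generates the WSDL document that client proxy generators consume.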


Differences in Functionality from XML Web Services


From the client’s point of view, there is no difference between calling an HTTP Endpoint and
calling a Web service. However, Web services require some additional overhead to collect data
from a server to a client because they are generic services external to SQL Server. In contrast to
Web services, native SQL Server HTTP Endpoints are already defined inside SQL Server to
address data requests specifically.

Following are guidelines for using SQL Server Endpoints:


„ When using SQL Server HTTP Endpoints, you should not enable basic or digest authentication
unless you must impose access limitations on a database.
„ For access to sensitive data, it is preferable to adopt a Secure Sockets Layer (SSL) communication
channel.
„ Do not use HTTP Endpoints if you are building an intensive online transaction processing (OLTP)
application or must manage large data values such as binary large objects (BLOBs).

Differences in Performance from XML Web Services


Using HTTP Endpoints to return XML data is often more efficient than using SQLXML in the
middle tier.
If an important part of the functionality of a Web service is to query a database, it is best to
transform the Web service into HTTP Endpoints. However, you should consider the security
implications of exposing a database server natively through HTTP.

Writing Code That Uses HTTP Endpoints


Using the Visual Studio 2003 or Visual Studio 2005 development environment is the easiest way to
write code that uses HTTP Endpoints. From this development environment, you only need to add a
Web reference to the endpoint. This creates a wrapper class, which contains one member with the
function signature for each WEBMETHOD that is defined in the endpoint. Using this class, you can
access the endpoint in the same way that you would with any other Web service.

Maintaining and Securing Endpoints


Sometimes you need to modify an HTTP Endpoint to change its definition, to disable the
service temporarily for database maintenance purposes, or to change the authentication type for the
endpoint. The ALTER ENDPOINT statement enables you to make these changes to the endpoint
definition.
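For example, assuming an endpoint named sql_soap_endpoint, maintenance might look like this:

```sql
-- Stop serving requests during database maintenance
ALTER ENDPOINT sql_soap_endpoint STATE = STOPPED;

-- Change the authentication type of the HTTP transport
ALTER ENDPOINT sql_soap_endpoint
AS HTTP (AUTHENTICATION = (INTEGRATED));

-- Bring the endpoint back online
ALTER ENDPOINT sql_soap_endpoint STATE = STARTED;
```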

Using Catalog Views to Get Information about Endpoints


SQL Server 2005 exposes the following set of catalog views to provide information about the
defined endpoints:
„ sys.soap_endpoints. This view returns a list of enabled HTTP SOAP endpoints.
„ sys.endpoint_webmethods. This view returns a row for each method that is exposed in each
endpoint.
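These views can be queried like any other catalog view, for example:

```sql
-- Enabled SOAP endpoints and their current state
SELECT name, state_desc
FROM sys.soap_endpoints;

-- The Web methods each endpoint exposes
SELECT e.name AS endpoint_name, w.method_alias, w.object_name
FROM sys.endpoint_webmethods AS w
JOIN sys.soap_endpoints AS e
    ON e.endpoint_id = w.endpoint_id;
```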


Considerations for Connecting SQL Server to Other Data Stores


Introduction
Some database applications require SQL Server to connect to other data stores by using stored
procedures, triggers, or user defined functions and to execute distributed queries combining data
from remote data stores.
This topic discusses the primary issues to be considered when connecting to remote data stores and
obtaining information about remote stores.

Linked servers provide access to different data stores


SQL Server 2005 enables you to define other computers running SQL Server, as well as other data
sources that have OLE DB providers or ODBC drivers, as linked servers. Using this feature, you can
run distributed queries and updates in one T-SQL statement or stored procedure. The data stores
include third-party databases, Microsoft Office Access databases, full-text indexes, text files, and
comma-separated value (CSV) files.

Note You can use computers running SQL Server, as well as any other data source that has an
OLE DB provider or an ODBC driver, as linked servers. However, the functionality of an application
is limited to the functionality implemented by the OLE DB provider or the ODBC driver.
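Linked servers are typically created with the sp_addlinkedserver system stored procedure; in the second example below, the linked server name and Access database path are illustrative:

```sql
-- Another SQL Server instance: with @srvproduct = N'SQL Server',
-- @server must be the actual instance name.
EXEC sp_addlinkedserver
    @server     = N'MIA-SQL\SQLINST2',
    @srvproduct = N'SQL Server';

-- An Access database through its OLE DB provider.
EXEC sp_addlinkedserver
    @server     = N'AccessSales',
    @srvproduct = N'Access',
    @provider   = N'Microsoft.Jet.OLEDB.4.0',
    @datasrc    = N'C:\Data\Sales.mdb';
```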

Delegation issues with linked servers


When a user connects to SQL Server to execute queries requesting data from other servers, SQL
Server needs to delegate or impersonate the user’s credentials to ensure that the user has the
necessary permissions to obtain the required data. However, it is possible for SQL Server to obtain
data only if some specific conditions are met.


Considerations for local and remote processing


When the query processor receives a query, it creates a query plan that is based on available
information about tables, indexes, and statistics.
If the query includes remote objects, the local query processor tries to split the execution of the
original query. The portions of the query that are to be executed remotely will be sent to the remote
server as a remote query, and the results will be evaluated as if they had come from a worktable.
This technique can improve performance because the remote server can filter the data to be
obtained, minimizing unnecessary data flow, and improving overall query performance.
In some cases, this is not possible because the local query processor does not have enough
information to pass through these subqueries to the remote server.
There are also limitations to this technique, such as dialect version and specific linked server
settings. When SQL Server cannot rely on remote processing, the local server will request that all
data be sent by all remote servers, and the entire query will be evaluated locally.
Local processing can provide better consistency but is more expensive than remote execution; the
cost depends on the amount of data to be retrieved from remote servers.
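The two execution styles correspond to two query forms; OPENQUERY guarantees that the query text runs remotely, while a four-part name leaves the split to the local optimizer (the instance and table names are illustrative):

```sql
-- Four-part name: the local query processor decides which
-- portion of the work is remoted to the linked server.
SELECT ContactID, LastName
FROM [MIA-SQL\SQLINST2].AdventureWorks.Person.Contact
WHERE LastName LIKE 'A%';

-- OPENQUERY: the inner query text is shipped verbatim, so all
-- filtering happens on the remote server before rows return.
SELECT *
FROM OPENQUERY([MIA-SQL\SQLINST2],
    'SELECT ContactID, LastName
     FROM AdventureWorks.Person.Contact
     WHERE LastName LIKE ''A%''');
```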

Considerations for loopback linked servers


You can define a linked server to connect to the same instance where the linked server is defined,
creating a loopback linked server. The most common scenario involving loopback linked servers is
for testing and debugging purposes. This is to make sure that specific processes work against linked
servers.

Considerations about linked servers defined over clustered servers


When you define linked servers in a clustered server, you might experience problems if the same
version of the provider has not been installed in all the nodes.
You might also experience loss of connectivity if the linked server has been defined to be
connected to a specific node, because the node might not be available at all times.
You can resolve this problem by defining the linked server by using the network name of the
cluster instead of using node identification to establish the linked server connection.

Note You can get more information about linked servers in SQL Server by using system
catalog views.


Considerations for data access to external sources from SQLCLR


You can create a SQLCLR stored procedure or function and enable it with the
EXTERNAL_ACCESS permission when you need to perform the following tasks:
„ Deal with third-party data sources.
„ Access data sources that are not fully compliant with linked server implementation.
„ Process the information before using it in combination with local data.

Inside a SQLCLR procedure, you can define how to connect to an external database, get the
information according to the data source standards, transform it in a compatible format, and process
it before using it with other information inside the original database. This gives you full control of
the external database behavior and enables you to cover any special needs that might be difficult to
meet using standard T-SQL techniques.
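A sketch of such a procedure follows; the remote server, database, and table names are placeholders, and the containing assembly must be cataloged WITH PERMISSION_SET = EXTERNAL_ACCESS:

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public class RemoteData
{
    [SqlProcedure]
    public static void GetRemoteOrders()
    {
        // A real network connection to another server, not the
        // in-process context connection.
        using (SqlConnection conn = new SqlConnection(
            "Data Source=OTHERSERVER;Initial Catalog=Sales;" +
            "Integrated Security=SSPI"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT OrderID, TotalDue FROM dbo.Orders", conn);
            // Stream the remote result set back to the caller.
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}
```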


Demonstration: Connecting SQL Server to Other Data Stores


Introduction
SQL Server can connect directly to other data stores such as text files, other SQL Server databases,
and databases managed by other relational database management systems (RDBMSs). You can
automate this process by defining a linked server by using T-SQL. T-SQL also offers additional
functionality, such as OPENQUERY, for connecting to remote data stores.

Demonstration Overview
In this demonstration, your instructor will illustrate the various ways of connecting SQL Server to
other data stores.

Key points
„ Define SQL Servers as Linked Servers.
„ Define other data sources as linked servers.
„ Define linked servers using T-SQL.
„ Access information from linked servers.


Task 1: Connecting SQL Server to another SQL Server Instance


Task Overview
This task illustrates how to connect with other computers running SQL Server by using a linked
server definition.
To connect with other SQL Servers using a linked server definition, perform the following steps.
1. Start SQL Server Management Studio.
If prompted, connect to MIA-SQL\SQLINST1 using Windows Authentication.
2. Expand the Server objects node, and show the Linked Servers node.
3. Right-click the Linked Servers node, and click New Linked Server.
4. In the Server type section, click the SQL Server option, and then in the text box, type MIA-
SQL\SQLINST2 as the second instance name.
5. In the left pane, click the Security node.
6. In the left pane, click the Server Options node.
7. Click OK to close the New Linked Server dialog box.

Task 2: Connecting SQL Server to a Text File (CSV File)


Task Overview
This task illustrates how to connect SQL Server to a CSV format text file.
To connect SQL Server to a CSV format text file, perform the following steps.
1. Expand the Linked Servers node. Right-click MIA-SQL\SQLINST2, and then click Delete.
Click OK to delete MIA-SQL\SQLINST2. In the message box, click Yes to continue with the
deletion.
2. Right-click the Linked Servers node and then click New Linked Server.
3. In the Linked server box, type MIA-SQL\SQLINST2 as the second instance name. In the
Server type section, click the Other Data Source option, and then select the Microsoft OLE DB
Provider for ODBC Drivers in the drop-down list. In the Data Source box, type
D:\Democode\Section02\Linked Server\FlatSample.csv.
4. Click OK.

Task 3: Connecting to a computer running SQL Server by Using T-SQL


Task Overview
This task illustrates how to connect to a computer running SQL Server using T-SQL.
To connect to a SQL Server using T-SQL, perform the following steps.
1. Expand the Linked Servers node. Right-click MIA-SQL\SQLINST2, and then click Delete.
Click OK to delete MIA-SQL\SQLINST2. In the message box, click Yes to continue with the
deletion.
2. To open the D:\Democode\Section02\Linked Server\CreateLinkedServer.sql file, in SQL Server
Management Studio, on the File menu, click Open File.

3. In the Connect to Server dialog box, select MIA-SQL\SQLINST1, click Windows


Authentication, and then click Connect.
4. Press F5 to execute the script.
5. Right-click the Linked Servers node, and then click Refresh. Notice that the linked server
MIA-SQL\SQLINST2 has been created.

Task 4: Using OPENQUERY with a Linked Server


Task Overview
This task illustrates how to use OPENQUERY with a linked server.
To use OPENQUERY with a linked server, perform the following steps.
1. To open the D:\Democode\Section02\LinkedServer\OpenQuery.sql file, in SQL Server
Management Studio, on the File menu, click Open File.
2. Highlight the arguments of the OPENQUERY statement, and explain each one.
3. Click the Execute button.
4. Close SQL Server Management Studio.


Section 3: Building a Data Access Layer


Section Overview
The data access layer is an abstraction layer between the business logic processing code and the
physical data sources. The data access layer performs important tasks such as maintaining
transactional integrity, enforcing security, enabling communication, and transforming data. Every
system that stores and accesses data benefits from a well-designed data access layer.
A database application without a well-designed data access layer will be more difficult to develop
and maintain. The application will also suffer from performance problems because not all data
access processes will have the benefit of a centralized and common approach to performing data
access tasks.
In this section, you will learn about the benefits of building a data access layer. You will also learn
about some recommended techniques for pooling data access objects and about the guidelines for
passing data through application tiers.

Section Objectives
„ Explain the benefits of building a data access layer.
„ Explain the available application blocks relevant to data access technologies.
„ Explain the different techniques for pooling data access objects to improve application
performance.
„ Explain how to monitor ADO.NET connection pooling and how to create custom counters that
monitor the way other objects are pooled.
„ Apply the guidelines for passing data access objects through application tiers.


Discussion: Benefits of Building a Data Access Layer


Introduction
Building a data access layer to group data access functions and methods facilitates their
maintenance. Building a data access layer also enhances the performance of the code in the data
access layer. Code performance affects the entire application.

Discussion Questions
1. Where do you place T-SQL code?
2. Do you use stored procedures or dynamic T-SQL code?
3. Do you design applications to deal with different SQL dialects?
4. Do you design applications to deal with different providers?
5. Does your application code access data interfaces or data objects directly?
6. What do you do to improve maintenance of database applications?
7. What do you do to improve data access layer scalability?


Demonstration: Viewing Available Application Blocks Relevant to Data Access Technologies


Introduction
The Patterns and Practices team at Microsoft provides scenario-specific recommendations for how
to design, develop, deploy, and operate applications that are architecturally sound for the Microsoft
.NET Framework. Some of these recommendations apply to the construction of data access layers,
specifically the Data Access Application Block and the Enterprise Library.

Demonstration Overview
In this demonstration, your instructor will illustrate how to navigate to application blocks created
and published on the Patterns and Practices team Web site.

Note To download application blocks, visit the Patterns and Practices team Web site at
http://msdn.microsoft.com/practices/.


Task 1: Navigating to Available Application Blocks Relevant to Data Access Technologies
Task Overview
This task illustrates how to navigate to application blocks created and published on the Patterns and
Practices team Web site.
To navigate to application blocks created and published on the Patterns and Practices team Web
site, perform the following steps.
1. Start Internet Explorer.
2. In the Address bar, type http://msdn.microsoft.com/practices, and then click Go.
3. To open the Application Blocks and Libraries page, in the left pane, click the Application
Blocks and Libraries link.
4. To obtain information on the Data Access Application Block, scroll down the Web page, and
then click the Data Access Application Block link.
5. To return to the previous Web page, click the Back button on the Standard toolbar.
6. To obtain information on the Enterprise Library Application Block, click the Enterprise
Library link.


Techniques for Pooling Data Access Objects


Introduction
When an object, such as a command or a data set, is created and later destroyed, it passes through the following stages:
1. Memory allocation
2. Code initialization
3. Variable creation
4. Allocation of resources, such as connections, on the server side
5. Garbage collection
6. Memory de-allocation
Object pooling is a technique in which you keep a pool of available objects in the memory for
recycling and reuse instead of creating and destroying them repeatedly. An existing object instance
that is recycled and reused by multiple processes improves the performance of an application.
Object pooling is useful when creating an object is costly or when you need to pool access to scarce
resources.


Techniques for Pooling Data Access Objects


To minimize the need to create and destroy objects repeatedly, you can:
„ Create a pool of pre-created objects, recycling instances for several processes. Connection pooling
also works in this manner.
„ Create an object to be reused as a template so that the instance is already initialized with instance
data when it is referenced. For example, Data Access Application Block (DAAB) uses this
technique to manage ParameterSets.
„ Serialize objects locally to a durable source, such as the file system, to reduce the dependencies on
memory resources.
„ Save objects in a data store.
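The first technique in the list can be sketched as a small generic pool; this is an illustration of the pattern, not the implementation ADO.NET uses for connection pooling:

```csharp
using System.Collections.Generic;

// Pre-creates instances and recycles them across callers instead
// of constructing and garbage-collecting objects repeatedly.
public class ObjectPool<T> where T : new()
{
    private readonly Stack<T> _items = new Stack<T>();
    private readonly object _sync = new object();

    public ObjectPool(int initialSize)
    {
        for (int i = 0; i < initialSize; i++)
            _items.Push(new T());   // pre-create the pool
    }

    public T Acquire()
    {
        lock (_sync)
            return _items.Count > 0 ? _items.Pop() : new T();
    }

    public void Release(T item)
    {
        lock (_sync)
            _items.Push(item);      // recycle for reuse
    }
}
```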


Demonstration: Monitoring ADO.NET Pooling


Introduction
There are several tools that you can use to monitor and trace ADO.NET connection pooling and
custom objects. Pooling techniques must find a balance between the number of available pools and
the maximum and minimum number of object instances per pool that must be loaded when a pool is
created.

Demonstration Overview
In this demonstration, your instructor will illustrate how to monitor ADO.NET connection pooling
and create custom counters that monitor how other objects are pooled.

Task 1: Creating a Web Application That Can Use Connection Pooling


Task Overview
This task illustrates how to create a Web application that can use connection pooling.
To create a Web application that can use connection pooling, perform the following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment.
2. Browse to the folder MOC2783L3Demonstrations, and open the solution
MOC2783L3Demonstrations.sln.
3. Select the project MOC2783L3DemonstrationsWS, and open the file, web.config.
4. Configure the connection string to use connection pooling by using the parameters
Pooling=true;Min Pool Size=2;Max Pool Size=5.
5. Open the file Default.aspx.cs.


6. Demonstrate that the code opens and closes the ADO.NET SqlConnection object several times
on the LoadVendors, LoadProducts, LoadStoreContacts, and LoadSalesTerritories routines.
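The connection string configured in step 4 would appear in web.config in a fragment like the following; the setting name is illustrative:

```xml
<connectionStrings>
  <add name="AppConnectionString"
       connectionString="Data Source=MIA-SQL\SQLINST1;Initial Catalog=AdventureWorks;Integrated Security=SSPI;Pooling=true;Min Pool Size=2;Max Pool Size=5" />
</connectionStrings>
```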

Task 2: Monitoring Connection Pooling by Configuring System Monitor to Show the Objects and Counters
Task Overview
This task illustrates how to monitor connection pooling by configuring System Monitor to show the
objects and counters.
To configure System Monitor to show objects and counters, perform the following steps.
1. To start System Monitor, click Start, point to Programs, point to Administrative Tools, and
then click Performance.
2. Place the mouse pointer on any portion of the graph, right-click, and on the shortcut menu, click
Add Counters.
3. In the Performance object list, click SQL Server: General Statistics.
4. From the counter list, click User Connections, and then click Add.
5. From the Performance object list, click .NET CLR Data.
6. From the Select Instances list, click {}
7. In the counter list displayed, show the following counters:
a. NumberOfActiveConnectionPools
b. NumberOfActiveConnections
c. NumberOfPooledConnections
8. Add all the counters to the Performance window.

Task 3: Configuring SQL Server Profiler to Start Profiling


Task Overview
This task illustrates how to configure SQL Server Profiler to start profiling.
To configure SQL Server Profiler to start profiling, perform the following steps.
1. Start Microsoft SQL Server 2005 Profiler.
2. On the File menu, point to New, and then click New Trace.
3. In the Connect to Server dialog box, select MIA-SQL\SQLINST1, click Windows
Authentication, and then click Connect.
4. In the Trace Properties dialog box, click the Events Selection tab.
5. In the Selected event class list, clear all the events except Audit Login and Audit Logout.
6. To start tracing, click Run.


Task 4: Creating Different Instances of Internet Explorer Running an Application
Task Overview
This task illustrates how to create different instances of Internet Explorer running an application.
To create different instances of Internet Explorer running an application, perform the following
steps.
1. Start five instances of Internet Explorer.
2. In each Internet Explorer window, run CustomCounter.aspx.
3. Return to System Monitor, and explain the graph.
4. Return to SQL Server Profiler and show all instances of when a connection was opened and
closed.

Task 5: Editing the Application Configuration File to Switch Off Connection Pooling and Show the Counters Again
Task Overview
This task illustrates how to edit the application configuration file to switch off connection pooling
and show the counters again.
To edit the application configuration file to switch off connection pooling and show the counters
again, perform the following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment.
2. Open the folder MOC2783L3Demonstrations, and locate MOC2783L3Demonstrations.sln.
3. Select MOC2783L3DemonstrationsWS, and open the file, web.config.
4. To switch off connection pooling for the connection string, in the file, add the code statement
Pooling=false;
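To illustrate, a connection string entry in web.config with pooling disabled might look like the following sketch. The connection string name is an assumption; only the Pooling=false; keyword is the point here.

```xml
<connectionStrings>
  <!-- Pooling=false disables ADO.NET connection pooling for this connection string -->
  <add name="AdventureWorksConnectionString"
       connectionString="Data Source=MIA-SQL\SQLINST1;Initial Catalog=AdventureWorks;Integrated Security=True;Pooling=false;" />
</connectionStrings>
```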

Task 6: Restarting and Running the Application and Monitoring Tools


Task Overview
This task illustrates how to restart and run the application and monitoring tools.
Now run the application again and use the monitoring tools to test the same performance counters that
were used in the previous tasks.
Notice the difference in the connection count between when pooling was switched on and when it
was switched off.


Task 7: Using Custom Objects and Counters to Facilitate Monitoring of Other Pooled Objects
Task Overview
This task illustrates how to use custom objects and counters to facilitate monitoring of other pooled
objects.
To use custom objects and counters to facilitate monitoring of other pooled objects, perform the
following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment.
2. Open the folder MOC2783L3Demonstrations, and open the solution
MOC2783L3Demonstrations.sln.
3. Open the file App_Code\MyCache.cs.
4. Demonstrate that the code implements custom pooling on a SqlCommand object.
5. View that the code declares and implements a custom performance counter.
6. To open System Monitor and add a different set of counters, perform the steps specified in Task
2.
7. In the Performance object list, click MyCategory. If the category is not visible, return to Visual
Studio, right-click CustomCounterInstall, select Debug, and then select Start New Instance. This
will execute an installer application that will create the counter and the category in System Monitor.
8. In the counter list, select MyCounter.
9. To add the selected counter to the graph, click the Add button.
10. In the Add Counters dialog box, click Close.
11. Right-click CustomCounter.aspx.cs.
12. To start the ASP.NET application, select Set as Start Page, and use a browser to access the
instrumented page.
13. Note how System Monitor displays the counter value.
14. From the list of departments displayed, select a department.
15. The page will post back and display the list of employees for that department.
16. To select an employee, click the link, and then select the row.
17. The page will post back and display the reporting structure of the selected employee.
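The category and counter that the installer application creates can be sketched with the System.Diagnostics API as follows. The category and counter names come from the task above; the help texts and class name are assumptions, not the course code.

```csharp
using System.Diagnostics;

public class CustomCounterSetup
{
    public static PerformanceCounter EnsureCounter()
    {
        // Create the category and counter once (requires administrative rights)
        if (!PerformanceCounterCategory.Exists("MyCategory"))
        {
            PerformanceCounterCategory.Create(
                "MyCategory", "Counters for custom object pooling",
                PerformanceCounterCategoryType.SingleInstance,
                "MyCounter", "Number of objects currently pooled");
        }

        // Open the counter in read-write mode so the application can update it
        return new PerformanceCounter("MyCategory", "MyCounter", false);
    }
}
```

The instrumented page would then call Increment() or Decrement() on the returned counter as objects enter and leave the pool, and System Monitor displays the value.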


Considerations for Passing Data through Application Tiers


Introduction
In a multi-tier application, data is passed from one tier to another for processing, displaying, or
combining with other data. There are many techniques for performing these actions, and each
technique offers you a different degree of flexibility and performance. The following considerations
help you evaluate the different techniques.

Data That Can Be Passed Between Application Tiers


There are various ways to represent data as it passes through the tiers of an application, ranging from
a data-centric model to a more object-oriented representation. Data that can be passed between
application tiers includes:
• XML
• Generic DataSet
• Typed DataSet
• Custom business entity components
• Custom business entity components with create, read, update, and delete (CRUD) behaviors
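As a minimal sketch of the last two options, a custom business entity is simply a class whose properties mirror the columns of a row. The Employee name and its properties below are illustrative assumptions, not from the course files.

```csharp
using System;

// Illustrative custom business entity; property names are assumptions
public class Employee
{
    private int employeeId;
    private string name;
    private DateTime birthDate;

    public int EmployeeId
    {
        get { return employeeId; }
        set { employeeId = value; }
    }

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public DateTime BirthDate
    {
        get { return birthDate; }
        set { birthDate = value; }
    }
}
```

A CRUD-capable variant would add Load, Save, and Delete methods that call the data access layer.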

Performance Implications When Passing Data Through Application Tiers
• If an application mainly works with sets and requires functionality such as sorting, searching, and
data binding, then DataSets are recommended.
• If an application works with instance data, then scalar values perform better.
• If an application works with instance data, then custom business entity components might be the
best choice, because they avoid the overhead incurred when a DataSet represents only one row.


Best Practices Design an application to use a data-centric format such as XML documents or
DataSets.

Specific Issues
Several types of objects are not serializable because they have a dependency on, or an affinity with,
the computer on which they are running. Such types of objects are:
• Connections
• Transactions
• DataReaders


Section 4: Designing Data Access from SQLCLR Objects


Section Overview
SQLCLR integrates the SQL Server 2005 engine and the common language runtime (CLR). It
provides the ability to extend the database and create new database elements that were not possible
in previous versions of SQL Server.
Coding for SQLCLR is similar to writing client-side code with ADO.NET, but because the code is
executed in-process inside the database server, there are some differences in the behavior of the
code.
In this section, you will learn about using ADO.NET as an in-process provider with SQLCLR.

Section Objectives
• Explain the differences in the behavior of ADO.NET when accessing data from SQLCLR objects
and standard data access components.
• Describe the various ADO.NET objects specific to the in-process data provider.
• Explain how to use the in-process data provider to access data from SQLCLR objects.
• Explain best practices for designing data access from SQLCLR objects.


Behavior of ADO.NET vs. Behavior of SQLCLR Objects


Introduction
When you are writing code for SQLCLR, the coding and object models are the same as those for
regular ADO.NET code. But when running inside the server, the code has access to more
contextual information, which can be consumed through new available classes.

How ADO.NET Behaves Differently in SQLCLR Objects as Opposed to Standard .NET Framework Components

ADO.NET in-process (SQLCLR):
• Communicates directly with the local server as an in-process provider
• Does not need to go through the network protocol and transport layer
• Does not need to follow the authentication process, because it runs in-process from an already
authenticated connection
• Does not need to open a connection to the database, although it uses a SqlConnection object
• Runs in the same transaction space as the calling client does

ADO.NET out-of-process (standard .NET Framework components):
• Communicates remotely with the database server
• Depends on the network and transport protocol layer to reach the correct server
• Needs to be authenticated by the server to exchange information
• Needs to explicitly open a connection to the database server
• Needs to explicitly open a transaction if needed

Available Namespaces
A new namespace and many new classes are available for using ADO.NET as an in-process
provider:
• Microsoft.SqlServer.Server
  • SqlContext
  • SqlPipe
  • SqlDataRecord
  • SqlTriggerContext

Transaction Management
When using ADO.NET as an in-process provider, there are several options for transaction
management. ADO.NET can:
• Use T-SQL statements, such as BEGIN TRAN, COMMIT TRAN, and ROLLBACK TRAN, for
local transactions.
• Use a SqlTransaction object, obtained by calling BeginTransaction on the SqlConnection object.
• Create distributed transactions with the System.Transactions.Transaction class or with the
System.Transactions.TransactionScope class.
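As a sketch of the second option, the following code starts a local transaction on the context connection. The UPDATE statement and table name are placeholders for illustration, not objects from the course database.

```csharp
using System.Data.SqlClient;

public class InProcTransactionExample
{
    public static void DoWork()
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlTransaction tx = conn.BeginTransaction();
            try
            {
                // Hypothetical statement; the command is enlisted in the transaction
                SqlCommand cmd = new SqlCommand(
                    "UPDATE dbo.SampleTable SET Processed = 1", conn, tx);
                cmd.ExecuteNonQuery();
                tx.Commit();
            }
            catch
            {
                tx.Rollback();
                throw;
            }
        }
    }
}
```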

How to Use ADO.NET as an In-Process Provider


If you want to use ADO.NET as an in-process provider, the connection string must be configured
as a context connection.
C#
SqlConnection c = new SqlConnection("context connection=true");

Visual Basic 2005
Dim c As New SqlConnection("context connection=true")
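Put together, a minimal SQLCLR stored procedure using the context connection might look like the following sketch. The query against the AdventureWorks Employee table is an assumption for illustration.

```csharp
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetEmployeeCount()
    {
        // The context connection reuses the caller's already authenticated connection
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM HumanResources.Employee", conn);

            // Send the result set directly back to the calling client
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}
```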

Limitations When Using ADO.NET as an In-Process Provider


Following are the limitations when using ADO.NET as an in-process provider:
• A SQLCLR object using the ADO.NET in-process provider can have only one connection open to
the local server at any given time.
• MARS is not supported by the local server connection.
• The SqlBulkCopy class and update batching do not operate on the local server connection.
• SqlNotificationRequest cannot be used with commands that execute against the local server
connection.
• There is no support for canceling commands that are running against the local server connection;
SqlCommand.Cancel will ignore the request.
• No other connection string keywords can be used when you use "context connection=true".


Important Executing heavy data-access routines from inside SQLCLR objects is not
recommended. T-SQL is more appropriate for this type of task.

Note For more information about SQLCLR, refer to the article “Managed Data Access Inside
SQL Server with ADO.NET and SQLCLR” on the MSDN Web site at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnsql90/html/mandataaccess.asp. You can also refer to Course 2782: Designing Microsoft
SQL Server 2005 Databases to learn more about SQLCLR objects.


ADO.NET Extensions for the In-Process Data Provider


Introduction
When using ADO.NET as an in-process provider, several classes provide contextual information
and the ability to interact with the calling client application.

About the Specific Objects


The following list summarizes the ADO.NET objects that are specific to the in-process data
provider.

SqlContext
A top-level object that provides direct access to properties of the caller's context (SqlPipe) and to
properties of task-specific contexts (SqlTriggerContext).

SqlPipe
Represents the connection, or pipe, to the client executing commands. Results and messages can be
sent to the calling application through this object.

SqlTriggerContext
Provides context information about a trigger. It is available from the SqlContext class
(SqlContext.TriggerContext) and includes information about the action that caused the trigger to
fire.

SqlDataRecord
Represents a single row of data and its related metadata (SqlMetaData). Enables SQLCLR stored
procedures to send custom result sets to a client.
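As a brief sketch of SqlDataRecord and SqlPipe working together, the following procedure sends a custom one-row result set to the client. The column names and values are illustrative assumptions.

```csharp
using System.Data;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void SendCustomRow()
    {
        // Describe the shape of the custom result set
        SqlMetaData[] columns = new SqlMetaData[]
        {
            new SqlMetaData("Id", SqlDbType.Int),
            new SqlMetaData("Name", SqlDbType.NVarChar, 50)
        };

        SqlDataRecord record = new SqlDataRecord(columns);
        record.SetInt32(0, 1);
        record.SetString(1, "example");

        // Send the record to the calling client as a one-row result set
        SqlContext.Pipe.Send(record);
    }
}
```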


Demonstration: Using the In-Process Data Provider to Access Data from SQLCLR Objects


Introduction
There are various techniques of accessing data by using out-of-process data providers and stored
procedures in T-SQL code. Data can also be accessed from a SQLCLR stored procedure utilizing
the in-process data provider. This demonstration will allow developers to view data retrieved by
each methodology.

Demonstration Overview
In this demonstration, your instructor will explain how to access data from SQLCLR objects by
using the in-process data provider.

Task 1: Creating an Application to Access Data


Task Overview
This task shows how to create an application to access data.
To create an application to access data, perform the following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment. Browse to
D:\Democode\Section04, and open the Demonstration_S4T3 solution. In the ClientApp project,
open the DataLayer.cs file.
2. To view the code in line 15, press CTRL+G, and then type 15 in the Go to Line dialog box. The
code in line 15 executes T-SQL statements by using ADO.NET as an out-of-process data
provider.
3. Notice the connection string connecting to a remote server in line 13.


4. In the same file, see the code in line 37. This code uses ADO.NET as an out-of-process data
provider to execute a T-SQL stored procedure.
5. In the Solution Explorer window, navigate to the TSQL_SP project, click the Create Scripts folder,
and then open the Employee_Birthday_TSQL.sql file.
6. In the TSQL_SP project, in the Create Scripts folder, right-click
Employee_Birthday_TSQL.sql.
7. From the context menu, select Run On.
8. If you are prompted to select a database reference, choose the AdventureWorks database on the
MIA-SQL\SQLINST1 server from the list of servers, or create a new reference.
The code executes the same T-SQL statements as it did in line 15 of the DataLayer.cs file of the
ClientApp project, but it is written as a stored procedure and is stored inside the database.
9. In the DataLayer.cs file, go to the code in line 53. This code uses ADO.NET as an out-of-process
data provider to execute a SQLCLR stored procedure.
10. In Solution Explorer, in the InProc_SP project, open the Employee_Birthday_SQLCLR.cs
file.
11. The code executes the same T-SQL statements as it did in line 15 of the DataLayer.cs file of the
ClientApp project, but it is written as a stored procedure and is stored inside the database.

Task 2: Configuring SQL Server Profiler to Start Profiling


Task Overview
This task shows how to configure SQL Server Profiler to start profiling.
To configure SQL Server Profiler to start profiling, perform the following steps.
1. Start SQL Server Profiler.
2. On the File menu, click New Trace.
3. In the Connect to Server dialog box, select MIA-SQL\SQLINST1, click Windows
Authentication, and then click Connect.
4. To specify connection details, in the Trace Properties window, on the General tab, in the Use the
template section, select the Standard (default) template.
5. On the Events Selection tab, verify that all events except SQL:BatchStarting are selected.
6. To start tracing, in the Trace Properties window, click Run.


Task 3: Running the Examples


Task Overview
This task shows how to run the examples.
To run the examples, perform the following steps.
1. Switch to the Microsoft Visual Studio 2005 Beta 2 development environment, and build the solution.
2. In Solution Explorer, right-click the ClientApp project, select Set as Start Up Project, and then
press F5.
When you run the application, a Windows Form is displayed with a ListBox control and a DataGrid
control. The ListBox control displays a list of possible access methods to execute. When you select
an access method from the ListBox control, the DataGrid control will automatically refresh and
display the resulting data from the execution method.
3. From the ListBox control, select an access method to execute. Wait for the DataGrid control to
refresh with the data.
4. Verify that the form now displays a list of employees’ birthdates.

Task 4: Checking the SQL Server Profiler Trace


Task Overview
This task shows how to check the SQL Server Profiler trace.
To check the SQL Server Profiler trace, perform the following steps.
1. Switch to the Microsoft SQL Server 2005 Profiler.
2. To stop the executing trace, on the File menu, select Stop Trace.
3. Check the results and identify the Audit Login and Audit Logout events.
4. Verify that there are only six connection events logged in the Trace results window.
5. In the TextData column, identify the SELECT and exec statements.
6. Close the SQL Server 2005 Profiler window.


Task 5: Identifying the Limitations of TVFs That Access Data with an In-Process Provider
Task Overview
This task illustrates the limitations of Table-Valued Functions (TVFs) that access data with an
in-process provider.
To illustrate the limitations of TVFs that access data with an in-process provider, perform the
following steps.
1. Switch to the Microsoft Visual Studio 2005 Beta 2 development environment.
2. In the InProc_SP project, open the ImpossibleTVF.cs file.
3. Show that the code declares a SQLCLR TVF, which returns a result set read from the database
with the in-process data provider.
4. To run the code in the Test.sql file in the Test Scripts folder, in the Solution Explorer window,
right-click the InProc_SP project, select Set as Startup Project, and then press F5.
5. On the View menu, click Other Windows, and then click Output.
View the error message that is displayed. It should read: “System.InvalidOperationException: Data
access is not allowed in this context. Either the context is a function or method not marked with
DataAccessKind.Read or SystemDataAccessKind.Read, is a callback to obtain data from FillRow
method of a Table Valued Function, or is a UDT validation method.”
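By contrast, a scalar SQLCLR function whose attribute declares data access may use the context connection, as in the sketch below. The query is an assumption for illustration; only the DataAccess setting is the point.

```csharp
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class UserDefinedFunctions
{
    // DataAccessKind.Read declares that this function reads data,
    // which makes the context connection available inside it
    [SqlFunction(DataAccess = DataAccessKind.Read)]
    public static SqlInt32 CountEmployees()
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT COUNT(*) FROM HumanResources.Employee", conn);
            return new SqlInt32((int)cmd.ExecuteScalar());
        }
    }
}
```

Note that, as the error message states, this attribute does not lift the restriction for the FillRow callback of a TVF.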


Best Practices for Designing Data Access from SQLCLR Objects


Introduction
SQLCLR provides new ways of programming tasks that could be programmed with different
technologies, but it is not a substitute for T-SQL or any other API that extends the capabilities of
SQL Server 2005.
When and how you use SQLCLR to implement data access to SQL Server 2005 should depend on
the recommended practices for each of the scenarios in which these technologies were designed to
be used.

Designing Data Access from SQLCLR Objects


The following list summarizes some best practices for designing data access from SQLCLR
objects.

• Use T-SQL for data access–intensive operations. T-SQL was specifically designed for direct data
access and manipulation in the database, and it is the appropriate language for handling large sets
of data.

• Use SQLCLR if there is significant procedural logic to execute. SQLCLR is useful for
calculations and complicated execution logic. It provides extensive support for many complex
tasks, including string handling and regular expressions, advanced math operations, file access,
and cryptography. SQLCLR also provides access to many pre-built classes and routines in the
Base Class Libraries.

• Split a long procedure into several smaller procedures, using the most appropriate language for
each subprocedure. Modular programming techniques can be applied when programming for
SQL Server 2005, so you can choose the best programming language for each job.

• Do not perform data access operations from inside a TVF. SQLCLR TVFs return their results in
a streaming manner, but only after materializing them: the TVF first loads all the results in
memory, and then streams them back to the client. This process leads to a significant increase in
memory usage, depending on the size of the result set.

• Minimize data access operations inside triggers. A trigger executes in the same transaction as the
client code that modified the table that fired the trigger. Triggers increase lock contention if they
do not execute quickly enough; accessing data from inside a trigger might increase execution
time, and therefore locks will be held longer.

• Consider offloading some code to client-side processing. The database server should execute
only the data-related code. Business logic processing and workflow management should be
moved to other physical layers.

• Consider offloading some processing to external components. You can use SQL Service Broker
to communicate asynchronously with remote servers and to promote distributed execution and
processing.


Section 5: Available Data Object Models for Administering SQL Server


Section Overview
Creating user applications can require programmatic administration of SQL Server 2005, which
involves administering its various services. You can administer SQL Server programmatically by
using data object models.
This section analyzes the changes in the management API of SQL Server 2005. This section also
discusses techniques for administering various SQL Server services programmatically.

Section Objective
In this section, you will learn about the data object models for administering SQL Server 2005
components and objects.


Demonstration: How SQL Server Was Managed Before SQL Server 2005


Introduction
In SQL Server 2000, SQL Server Enterprise Manager used the SQL-DMO API to administer and
communicate with SQL Server. Administering SQL Server 2000 involves administering server
objects; creating, updating, and deleting databases and their artifacts; executing administrative
tasks; and managing SQL Server 2000 services.
Using SQL-DMO, you can create applications that administer SQL Server 2000 in the same way as
SQL Server Enterprise Manager does.
SQL-DMO is also available with SQL Server 2005.

Demonstration Overview
In this demonstration, your instructor will illustrate how SQL Server Enterprise Manager uses
SQL-DMO to administer SQL Server and how a database application can use SQL-DMO directly.


Task 1: Exploring the DMO Object Model


Task Overview
This task illustrates the structure of the DMO object model.
To illustrate the structure of the DMO Object model, perform the following steps.
1. Open SQL Server 2000 Books Online.
2. On the Search tab, search for the phrase SQL-DMO Object Tree.
3. Double-click the topic titled SQL-DMO Object Tree.
4. To synchronize with the Table of Contents tree, on the Standard toolbar, click Sync with
Table of Contents.
5. Review the complete object model declared by DMO.
6. Pay special attention to the Databases node; see the elements such as Database, Filegroups, and
Tables.

Task 2: How Does Enterprise Manager Use DMO?


Task Overview
This task illustrates how Enterprise Manager uses the DMO object model.
To illustrate how Enterprise Manager uses the DMO object model, perform the following steps.
1. On the Content tab, open the SQL-DMO Reference node.
2. On the Objects node, open D.
3. Select the Database Object node.
Review the lists of Properties and Methods for this object. To review an object, expand these nodes.
Repeat to review the other dependent objects in the tree structure, such as Tables and
StoredProcedures.
4. Open SQL Server Enterprise Manager, and connect to SQL Server 2000.
5. Open the object tree on the left panel. Open the SQL Server Group node, and expand all the
child nodes up to the database level.
6. Right-click a database—for example, Model—and from the shortcut menu, click Properties.
A new window will open, showing the properties for the selected database. These properties
correspond to methods and properties for the objects in DMO.
7. Open SQL Server 2000 Profiler.
8. Click File, point to New, click Trace, and then select SQL Profiler Standard as the template
name to track the statements executed by SQL Server 2000 Enterprise Manager.
9. Click Run to start the trace.
10. Return to SQL Server Enterprise Manager, right-click the database used in step 6, and then click
Properties.
11. Return to the SQL Server 2000 Profiler window.

12. In the TextData column in the Trace results window, locate the value use [model].

The recorded trace shows the events that were produced in SQL Server Profiler while you
navigated the object tree in SQL Server Enterprise Manager.

Task 3: Running an Application That Uses DMO


Task Overview
This task illustrates how to create scripts of database objects using DMO.
To illustrate how to create scripts of database objects using DMO, perform the following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment. Browse and open the
Section5.sln solution. Go to the SQLDMO project, and then open the Form1.vb file.
2. Set this project as the startup project.
3. To execute the code, press F5.
The application uses DMO to navigate to the same objects and collections as seen in SQL Server
2000 Enterprise Manager.
4. Close the solution.


Demonstration: Using SMO to Administer SQL Server 2005 Programmatically


Introduction
SQL Server Management Objects (SMO), the successor to SQL-DMO, is a managed API in SQL
Server 2005. SMO exposes an object model to help you develop applications that execute
administrative tasks on a SQL Server instance. You can also use SMO to administer a server
instance of SQL Server 2000.

Demonstration Overview
In this demonstration, your instructor will illustrate how to administer SQL Server 2005
programmatically by using SQL Server Management Objects (SMO).

Task 1: Exploring the SMO Object Model


Task Overview
This task illustrates the structure of the SMO object model.
To illustrate the structure of the SMO Object Model, perform the following steps.
1. Open SQL Server 2005 Books Online.
2. On the toolbar, click Search, and search for the phrase SMO Object Model Diagram.
3. Select the SMO Object Model Diagram topic.
4. To synchronize with the Table of Contents tree, on the Standard toolbar, click the Sync with
Table of Contents button.
Review the complete object model declared by SMO.


Pay special attention to the DatabaseCollection branch. Notice the child elements such as Database,
Filegroup, and LogFile.

Task 2: Understanding How SQL Server Management Studio Uses SMO


Task Overview
This task illustrates how SQL Server Management Studio uses the SMO object model.
To illustrate how SQL Server Management Studio uses the SMO object model, perform the
following steps.
1. On the Content tab, browse to and expand the SMO Managed Programming Reference
Documentation node.
2. Open the SMO Class Library node, and then expand the
Microsoft.SqlServer.Management.Smo node.
3. Select and expand the Database Class node.
4. To review the lists of Properties and Methods for this object, click the Database Members link.
5. Review the list of Public Properties for the other dependent objects such as Tables,
StoredProcedures, and Views.
6. Open the SQL Server Management Studio tool.
7. If the Object Explorer window is not already open, press F8 to open it.
8. In the object browser, expand the Databases folder.
9. Right-click a database—for example AdventureWorks—and from the shortcut menu, click
Properties.
The properties shown in the Properties window correspond to the methods and properties for the
object in SMO.
10. Click OK to close the Properties window.
11. Open SQL Server Profiler.
12. On the File menu, click New Trace, select the Standard template, and then click Run to start a
standard trace that shows the statements executed by SQL Server Management Studio.
13. Return to SQL Server Management Studio, right-click the database used in step 9, and from the
shortcut menu, select Properties.
14. Return to the SQL Server Profiler window. In the Trace Results window, in the TextData
column, locate the value use [AdventureWorks].
The recorded trace shows the events produced in SQL Server Profiler. They represent the calls
made from SQL Server Management Studio to SMO and from SMO to SQL Server.


Task 3: Running an Application That Uses SMO


Task Overview
This task illustrates how to use SMO to navigate through some of the database objects.
To illustrate how to navigate through some of the database objects by using SMO, perform the
following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment. Browse to and open
the Section5 solution. Go to the SMOBrowser project, and then open the SMOBrowser.cs file.
Configure this project to be the startup project.
2. In the Connect to SQL Server dialog box, select MIA-SQL\SQLINST1, select the
authentication mode as Windows Authentication, and then click Connect.
3. To execute the code, press F5.
The application uses SMO to navigate the same objects and collections seen in SQL Server
Management Studio.
Notice the correspondence between the Database class used in SMO and the Databases collection
and Database object used in the previous example with DMO.
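The navigation that the sample application performs can be sketched in a few lines of SMO code. The server name comes from the demonstration; the properties printed are assumptions about what the sample shows.

```csharp
using System;
using Microsoft.SqlServer.Management.Smo;

public class SmoBrowserSketch
{
    public static void Main()
    {
        // Connect to the demonstration instance using Windows Authentication
        Server server = new Server(@"MIA-SQL\SQLINST1");

        // Enumerate databases and their tables, as Management Studio does
        foreach (Database db in server.Databases)
        {
            Console.WriteLine(db.Name);
            foreach (Table table in db.Tables)
            {
                Console.WriteLine("  " + table.Schema + "." + table.Name);
            }
        }
    }
}
```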

Task 4: Showing SMO-Specific Features


Task Overview
This task shows some SMO-specific features such as the ability to install a SQLCLR assembly
programmatically in SQL Server 2005.
To show some SMO-specific features, such as the ability to install a SQLCLR assembly
programmatically in SQL Server 2005, perform the following steps.
1. In the sample application SMO Browser, which you opened in the previous task, from the File
menu, select Add Assembly.
2. A dialog box prompts you for an assembly file to add. Select SqlHelloWorld.dll from
the default folder.
The application displays a message with the results of the creation of the assembly.
3. Switch to the code window, and locate line 336, where the new SqlAssembly class is being used
to create a new assembly inside the database.
4. Open SQL Server Management Studio, and in the Object Explorer window, browse to the
AdventureWorks database.
5. In the AdventureWorks database, browse to the Programmability\Assemblies folder.
Notice that HelloWorldAssembly has been installed by the sample application.


Demonstration: Using RMO to Administer SQL Server 2005 Replication Programmatically


Introduction
Replication Management Objects (RMO) is a managed API that automates administrative tasks
related to SQL Server Replication Services.
RMO and SMO specialize in different administrative tasks but work together for the programmatic
administration of a SQL Server instance. In contrast, SQL-DMO is a single API that handles all
administrative tasks.

Demonstration Overview
In this demonstration, your instructor will illustrate how to administer SQL Server 2005 replication
programmatically by using Replication Management Objects (RMO).

Task 1: Exploring the RMO Object Model


Task Overview
This task illustrates the structure of the RMO object model.
To illustrate the structure of the RMO object model, perform the following steps.
1. Open SQL 2005 Server Books Online.
2. On the toolbar, click Search, and then search for the phrase “XXXXXX.”
3. Select the XXXXXX topic.
4. To synchronize with the Table of Contents tree, on the Standard toolbar, click Sync with
Table of Contents.



5. Review the complete object model declared by RMO.

Pay special attention to the “” branch. Observe all the child elements such as “”.

Task 2: Show How SQL Server Management Studio Uses RMO


Task Overview
This task shows how SQL Server Management Studio uses the RMO object model.
To show how SQL Server Management Studio uses the RMO object model, perform the following
steps.
1. Open SQL Server Management Studio.
2. To open the Object Explorer window, press F8.
3. Navigate to and open the Replication folder.
4. Right-click the Local Publications folder, and add a new publication. The New Publication
Wizard will guide you through the steps of creating a new publication. Name the publication
MyPublication.
5. After the publication is created, right-click MyPublication, and notice that the actions that you
can execute on the graphical user interface (GUI) correspond to methods and properties for the
same objects in RMO.
6. Open SQL Server Profiler and start a standard trace to track the statements executed by SQL
Server Management Studio.
7. Show the events produced in the SQL Server Profiler while navigating the object tree in SQL
Server Management Studio.

Task 3: Viewing and Modifying Publication Properties by Using RMO


Task Overview
This task illustrates how to view and modify the properties for a single publication using RMO.
To illustrate how to use RMO to view and modify the properties for a single publication, perform
the following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment. Navigate to and open
the Section_5 solution. Go to the RMODemo project, and then open the Program.cs file.
2. Right-click the RMODemo project, and from the shortcut menu, select Set as Start Up Project.
3. To execute the code in debug mode, press F11.
4. Press F11 repeatedly to step through the code.
The application uses RMO to display and modify the properties for a single publication. It uses the
same objects and collections as seen in SQL Server Management Studio.
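The view-and-modify pattern that the application uses can be sketched as follows; the property being changed (Description) and the database name are illustrative assumptions, not necessarily what the sample code does.

```csharp
using System;
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Replication;

class RmoPublicationSketch
{
    static void Main()
    {
        ServerConnection connection = new ServerConnection(@"MIA-SQL\SQLINST1");
        connection.Connect();

        // Bind a TransPublication object to the publication created in Task 2.
        TransPublication publication = new TransPublication();
        publication.ConnectionContext = connection;
        publication.Name = "MyPublication";
        publication.DatabaseName = "AdventureWorks";

        // LoadProperties returns true only when the publication exists.
        if (publication.LoadProperties())
        {
            Console.WriteLine(publication.Description);
            publication.Description = "Modified through RMO";
            publication.CommitPropertyChanges();
        }

        connection.Disconnect();
    }
}
```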

Discussion Question
1. How does RMO relate to DMO and SMO?
Your instructor will explain the relationship and similarities among RMO, DMO, and SMO.


Demonstration: Using AMO and ASSL to Administer SQL Server 2005 Analysis Services Programmatically

Introduction
Analysis Management Objects (AMO) and Analysis Services Scripting Language (ASSL) are two
different tools for communicating with Analysis Services on a SQL Server 2005 instance. AMO is
a managed API used in managed applications to administer Analysis Services. ASSL is an XML
dialect that is used to execute Data Definition Language (DDL) statements to create, modify, or
delete objects on an instance of Analysis Services. ASSL is also used as a command language for
sending action commands.

Demonstration Overview
In this demonstration, your instructor will illustrate how to administer SQL Server 2005 Analysis
Services programmatically by using AMO and ASSL.

Task 1: Exploring the AMO Object Model and ASSL


Task Overview
This task illustrates the structure of the AMO object model and ASSL.
To illustrate the structure of the AMO object model and ASSL, perform the following steps.
1. Open SQL 2005 Server Books Online.
2. On the toolbar, click Search, and then search for the phrase XXXXXX.
3. Select the XXXXXX topic.
4. To synchronize with the Table of Contents tree, on the Standard toolbar, click Sync with
Table of Contents.



5. Review the complete object model declared by AMO.

6. Pay special attention to the “” branch. Observe all the child elements such as “”.
7. On the toolbar, click Search, and then search for the phrase “Analysis Services Scripting
Language XML Element Hierarchy (ASSL).”
8. Select the topic titled Analysis Services Scripting Language XML Element Hierarchy
(ASSL).
9. Review the list of XML elements declared by ASSL.
Pay special attention to the root object “Server”; then see all the child objects, such as Databases,
Dimensions, and Cubes.

Task 2: Show How the Business Intelligence Development Studio Uses ASSL
Task Overview
This task illustrates how the Business Intelligence Development Studio uses ASSL to communicate
with SQL Server 2005 Analysis Services.
To illustrate how the Business Intelligence Development Studio uses ASSL to communicate with
SQL Server 2005 Analysis Services, perform the following steps.
1. Open Business Intelligence Development Studio.
2. On the File menu, point to Open, and then click Analysis Services Database.
3. Provide the correct parameters to connect to the Localhost server and to the
AdventureWorksDW database.
4. Press CTRL+ALT+L to open the Solution Explorer window. You can also open Solution
Explorer from the View menu.
5. Browse to the Cubes folder.
6. Explore any of the declared cubes.
Notice all the properties and the actions that you can execute on each cube and its dimensions.


Task 3: Run an Application That Uses AMO


Task Overview
This task illustrates how to navigate through some of the cubes and dimensions in an Analysis
Services database that uses AMO.
To illustrate how to navigate through some of the cubes and dimensions in an Analysis Services
database that uses AMO, perform the following steps.
1. Open the Microsoft Visual Studio 2005 Beta 2 development environment. Browse to and open
the solution Section5.sln. In the project AMOBrowser, open the file AMOBrowser.cs.
2. Right-click the AMOBrowser project, and from the shortcut menu, select Set as Start Up
Project.
3. To execute the code, press F5.
4. Provide the server name MIA-SQL\SQLINST1 to connect to the Analysis Services server.
The application shows a window with the tree view control on the left.
5. Navigate through the tree view to see the properties of each node in the right pane.
6. Navigate to and expand the database node to see the installed databases on the Analysis Services
Server.
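The tree the application builds follows the AMO hierarchy described in Task 1. A minimal sketch of that walk, assuming the AMO assembly (Microsoft.AnalysisServices) is referenced and the server is reachable:

```csharp
using System;
using Microsoft.AnalysisServices;

class AmoBrowseSketch
{
    static void Main()
    {
        Server server = new Server();
        server.Connect(@"Data Source=MIA-SQL\SQLINST1");

        // Walk the same hierarchy the tree view displays:
        // server, databases, cubes, and cube dimensions.
        foreach (Database database in server.Databases)
        {
            Console.WriteLine(database.Name);
            foreach (Cube cube in database.Cubes)
            {
                Console.WriteLine("  Cube: " + cube.Name);
                foreach (CubeDimension dimension in cube.Dimensions)
                {
                    Console.WriteLine("    Dimension: " + dimension.Name);
                }
            }
        }

        server.Disconnect();
    }
}
```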

Discussion Question
■ How does AMO relate to DMO and SMO?
Your instructor will explain how AMO relates to DMO and SMO.


Next Steps


Introduction
The information in this section supplements the information provided in Session 1.
■ “Data Points: ADO.NET and System.Transactions” -- MSDN Magazine, February 2005
• http://msdn.microsoft.com/msdnmag/issues/05/02/DataPoints/default.aspx
■ Managed Data Access Inside SQL Server with ADO.NET and SQLCLR
• http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/mandataaccess.asp
■ MSDN TV: Introducing System.Transactions in .NET Framework 2.0
• http://msdn.microsoft.com/msdntv/episode.aspx?xml=episodes/en/20050203NETMC/manifest.xml
■ Using CLR Integration in SQL Server 2005
• http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/sqlclrguidance.asp


Discussion: Session Summary


Discussion Questions
1. What was most valuable to you in this session?
2. Have you changed your mind about anything based on this session?
3. Are you planning to do anything differently on the job based on what you learned in this session?
If so, what?




Session 2: Designing an Exception Handling
Strategy

Contents
Session Overview 1
Section 1: Exception Types and Their
Purposes 3
Section 2: Detecting Exceptions 17
Section 3: Managing Exceptions 33
Next Steps 47
Discussion: Session Summary 48




The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations and warranties, either expressed, implied, or statutory,
regarding these manufacturers or the use of the products with any Microsoft technologies. The
inclusion of a manufacturer or product does not imply endorsement of Microsoft of the
manufacturer or product. Links are provided to third party sites. Such sites are not under the
control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link
contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for
webcasting or any other form of transmission received from any linked site. Microsoft is providing
these links to you only as a convenience, and the inclusion of any link does not imply endorsement
of Microsoft of the site or the products contained therein.


© 2006 Microsoft Corporation. All rights reserved.

Microsoft, <The publications specialist places the list of trademarks provided by the copy editor
here. Microsoft is listed first, followed by all other Microsoft trademarks in alphabetical order.>
are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or
other countries.


All other trademarks are property of their respective owners.


Session Overview

Exceptions are unexpected behaviors caused by user interactions or database systems. Exceptions
occur because the code of a database application cannot predict all the potential actions of users or
database systems.
A well-designed exception handling strategy shields an application from unexpected events and
enhances the user experience. A well-designed exception handling strategy functions in the
following sequence:
1. If an application is unable to detect an exception, the exception is sent directly to the caller,
which can be a user of the application, another application, or a component using the functionality
of the database application.
2. If an exception is detected, the application tries to recover from it automatically. If the
application is unable to recover from the exception, it:
a. Gathers information about the exception and adds any contextual information.
b. Logs error information synchronously or asynchronously to a data store.
c. Sends notifications synchronously or asynchronously to the system.
d. Performs a cleanup.
e. Displays information to the user.
f. Propagates the exception to the caller.
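In a .NET Framework client, the sequence above maps naturally onto a try/catch block around the data access call. The following sketch shows steps a, b, and f for a SqlException; the logging target and the wrapper exception type are illustrative choices, not prescribed ones.

```csharp
using System;
using System.Data.SqlClient;

class ExceptionStrategySketch
{
    static void RunCommand(SqlConnection connection, string commandText)
    {
        try
        {
            using (SqlCommand command = new SqlCommand(commandText, connection))
            {
                command.ExecuteNonQuery();
            }
        }
        catch (SqlException ex)
        {
            // a. Gather information about the exception and add context.
            string details = String.Format(
                "Command '{0}' failed with error {1}, severity {2}.",
                commandText, ex.Number, ex.Class);

            // b. Log the error information (shown here as console output only).
            Console.Error.WriteLine(details);

            // d. Perform any cleanup needed here, and then
            // f. propagate the exception to the caller with the context attached.
            throw new ApplicationException(details, ex);
        }
    }
}
```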
In this session, you will learn about the various types of exceptions that can occur in a Microsoft®
SQL Server™ 2005 system. In addition, you will learn to design strategies for detecting exceptions
at the appropriate layer. You will also learn to log and communicate exceptions according to your
business requirements.


Session Objectives
■ Describe the various types of exceptions that can be detected in a SQL Server 2005 system and
how they affect applications and users.
■ Design strategies to detect exceptions at the appropriate layer.
■ Design strategies to log and communicate exceptions according to business requirements.


Section 1: Exception Types and Their Purposes

Section Overview
Applications should have a clearly defined strategy for providing consistent and coherent
information to a caller. This strategy should include exception management.
In this section, you will learn about the various types of exceptions that can occur in a SQL Server
2005 system and about the various exception severity levels. You will also learn about platform
monitoring tools that can be used to provide more information to system administrators. Using this
information, system administrators can take proactive measures against exceptions by fine-tuning
and adjusting applications.

Section Objectives
■ Explain the various types of exceptions that can occur in a database system.
■ Explain the various exception severity levels.
■ Explain the techniques for programmatically exposing exceptions to common Microsoft
Windows® administrative tools.
■ Explain the various techniques for handling exceptions produced by data integrity violations.
■ Apply the guidelines for creating a user-defined exception strategy.
■ Explain how to create user-defined messages and how to write applications to use them.


Types of Exceptions in Database Systems

Introduction
Exceptions are not necessarily errors. They can be informational messages that inform users,
database administrators, or other applications about how a database system is working. Whether or
not an exception represents an error is determined by the application in which it occurs.
The various types of exceptions are classified based on the situations in which they originate. This
topic covers the types of exceptions and the situations in which they originate.

Exceptions Produced During Coding or Compiling


Exceptions that are produced during coding or compiling are syntactic or semantic errors
introduced into the code by a developer. Microsoft Visual Studio® 2005 offers a feature called
IntelliSense® to help avoid these types of exceptions. IntelliSense presents developers with a list
of possible keywords as they type the initial characters, so developers can choose the appropriate
keyword and avoid syntactic or semantic errors.

Exceptions Produced by a Database System


A database system can produce exceptions when:
■ It cannot recover automatically from a situation.
■ There are security breaches and integrity check failures in the database system.
■ A program violates a rule or causes a resource limit to be exceeded.


Exceptions Produced by a User


An application might produce exceptions when particular conditions are not met. For example, an
application may raise an exception when a user does not provide the required value. An application
may also produce exceptions when particular thresholds are crossed because of user actions with
the data or application, such as when a user enters a value that is beyond the expected range of an
attribute.

Exceptions Produced by an Application


Exceptions might also occur as a result of errors, such as arithmetic or conversion errors, in the
application code. Exceptions can also occur when certain thresholds are crossed or when certain
conditions, such as hardware-related conditions, occur.

Informational Messages
Informational messages are exceptions that are not considered critical. An informational message
informs the user or the calling application about a certain condition in a database or about how the
database system is functioning.


Exception Severity Levels

Introduction
The severity level of an exception indicates how critical the exception is. It also indicates actions
that a database system should take to prevent the exception or to recover from it. Applications can
react to exceptions based on their severity levels and decide which course of action to adopt.

Types of Severity Levels


The severity level of an exception indicates the type of problem encountered by a database server
and how critical the problem is. The following list presents the severity levels of exceptions and
what they indicate:
■ [0–10] – Indicates informational messages that return status information or report errors that are
not severe.
■ [11–16] – Indicates errors that can be corrected by the application code or the user. These
errors do not necessarily cancel the current transaction or disconnect the current connection.
■ [17–19] – Indicates software errors that cannot be corrected by users and that should be handled by
system administrators. However, these errors might not cancel the current connection.
■ [20–25] – Indicates fatal system problems that might force the instance of SQL Server to shut
down.
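An application can branch on these ranges when it catches a SqlException, because ADO.NET exposes the severity of each error through the SqlError.Class property. A sketch of such a mapping, following the list above:

```csharp
using System;

class SeveritySketch
{
    // Map a severity level (for example, SqlException.Class) to the
    // categories described above.
    static string DescribeSeverity(int severity)
    {
        if (severity <= 10) return "Informational message";
        if (severity <= 16) return "Error correctable by the application or user";
        if (severity <= 19) return "Software error; notify a system administrator";
        if (severity <= 25) return "Fatal system problem";
        return "Unknown severity level";
    }

    static void Main()
    {
        Console.WriteLine(DescribeSeverity(14));
        Console.WriteLine(DescribeSeverity(21));
    }
}
```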


How Severity Levels Affect Application Resources


Each severity level might affect the execution of tasks in a database server. Exceptions with
severity levels of 17–25 are considered high, and such exceptions might:
■ Stop the execution of the current batch.
■ Terminate the connection between an application and a database instance.
■ Shut down the instance of the database engine.
■ Require you to restore a database.
■ Indicate a hardware or software problem.

Logging Exceptions to an Event Log


Exceptions can be logged for further processing. Error messages with a high severity level (17–25)
are logged automatically.
Following are the different types of logs that record error messages in Windows:
■ Windows event logs: Windows provides three built-in event logs—the system log, the security
log, and the application log. Applications can write to these logs by using programmatic APIs,
and some database exceptions are logged automatically.
■ SQL Server error log: Error messages with a high severity level (17–25) are automatically written
to this error log. A new error log is created each time an instance of SQL Server starts.
■ Application-specific logs: These logs are managed by applications.
■ SQL Server Profiler and the Windows Performance console: These platform monitoring tools also
generate error logs.


Demonstration: Exposing Exceptions to Administrative Tools

Introduction
When an error is detected in an application, system or database administrators can identify the
exceptions by using Windows and SQL Server tools. These tools are used as logging engines, and
monitoring and tracing utilities. The information stored by these tools enables the application
development team to handle errors appropriately.

Demonstration Overview
In this demonstration, your instructor will illustrate how to programmatically expose exceptions to
common Windows administrative tools.

Task 1: Setting Up a Monitoring Environment


Task Overview
This task illustrates how to set up an environment to start monitoring and profiling the execution of
the logging application.
To set up an environment to start monitoring and profiling the execution of the logging application,
perform the following steps.
1. Start SQL Server 2005 Profiler.
2. To create a new trace, on the File menu, click New Trace.
3. If prompted to connect to SQL Server, use Windows Authentication to connect to the MIA-
SQL\SQLINST1 server instance.
4. In the Trace Properties window, from the Use the Template list, select Blank.
5. Click the Events Selection tab.



6. Scroll down to the end of the Events list, and then click to expand the User Configurable node.

7. Click UserConfigurable: 0 event, and then click Run.


8. Start Windows Event Viewer.
9. In the left pane, click the Application log.
10. Open Microsoft Visual Studio 2005 Beta 2.
11. Open the 2783M2L1Demonstrations.sln solution file located at D:\Democode\Section01.
12. Right-click the CustomCounterInstaller project, and on the shortcut menu, select Debug.
Then click Start new instance. This will load an instance of the installation application console.
13. When prompted, press SHIFT+I to install the custom counter. A message confirming successful
installation appears.
14. Press ENTER.
15. Open Performance Monitor.
16. To add a new counter, on the toolbar, click the plus (+) sign.
17. In the Add Counters window, in the Performance object list, select ErrorLogger.
18. To start tracing the events on the Errors counter, click Add.
19. Click Close.
20. Return to Microsoft Visual Studio 2005 Beta 2.
21. In the ResetDatabase project, in the Create Scripts folder, right-click CreateCustomTrace.sql,
and on the shortcut menu, click Run.
22. If prompted to select a database reference, from the Available References list, select mia-
sql\sqlinst1.master.dbo, or click Add New Reference. In the New Database Reference window, in
the Server name box, type MIA-SQL\SQLINST1, select Windows Authentication, and then,
from the list of databases, select Master.

Task 2: Writing User-Defined Messages to Administrative Tools


Task Overview
This task executes a Microsoft .NET Framework client application that implements several
techniques to alert the database server and system administrators about the events happening on the
client side.
To execute a Microsoft .NET Framework client application that implements several techniques to
alert the database server and system administrators about the events happening on the client side,
perform the following steps.
1. Return to Microsoft Visual Studio 2005 Beta 2.
2. Right-click the ClientApp project, and then, on the shortcut menu, click Set as StartUp Project.
3. To execute the client application, press F5.
4. In the Logger window, in the box, type Sample error message.
5. Click Send. Every time that you click Send, the application does one of the following:
a. Sends a message to the Windows Event Log.



b. Increments the custom counter.

c. Sends a trace signal to SQL Server Profiler.


6. Arrange the windows so that they are all visible, and then click Send. Observe how the
Windows Event Log, SQL Server Profiler, and Performance Monitor react when the application
logs the message.
7. Close the Logger application, Event Viewer, Performance Monitor, and SQL Server Profiler.

Task 3: Sending Events to Administrative Tools, Defining a Custom Performance Counter, and Creating a Custom Trace
Task Overview
This task demonstrates the source code used by the client application to send events to the Windows
Event Log. This task also explains how to define a custom performance counter measured on the
Windows Performance Monitor and to create a custom trace that SQL Server Profiler will be able to
detect.
To send events to the Windows Event Log, to define a custom performance counter measured on
the Windows Performance Monitor, and to create a custom trace that SQL Server Profiler will
catch, perform the following steps.
1. Return to Microsoft Visual Studio 2005 Beta 2.
2. In the ClientApp project, in code view, open Form1.cs.
3. Scroll down to line 17, where the btSend_Click method is declared.
Notice the code in lines 20–22. These statements declare an instance of the Windows Application
Log and send it a message.
The code in lines 25–27 declares an instance of the ErrorLogger counter and increments it by one.
The code in lines 30–39 declares the necessary Microsoft ADO.NET construction to run the
RaiseErrorCustomTrace stored procedure.
4. In the Create Scripts folder, in the ResetDatabase project, right-click CreateCustomTrace.sql,
and then, on the shortcut menu, click Open.
5. In line 9, check the declaration of the RaiseErrorCustomTrace stored procedure.
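The first two client-side techniques described in step 3 can be sketched as follows. The event source name is a hypothetical value and must already be registered on the machine; the performance counter category and counter name are the ones created by the installer in Task 1.

```csharp
using System.Diagnostics;

class ClientLoggingSketch
{
    static void LogClientError(string message)
    {
        // Send a message to the Windows Application log.
        // "ErrorLoggerDemo" is a hypothetical, pre-registered event source.
        EventLog.WriteEntry("ErrorLoggerDemo", message, EventLogEntryType.Error);

        // Increment the custom ErrorLogger\Errors counter so that the change
        // is visible in Performance Monitor.
        using (PerformanceCounter counter =
            new PerformanceCounter("ErrorLogger", "Errors", false))
        {
            counter.Increment();
        }
    }
}
```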


Discussion: Techniques for Handling Exceptions Produced by Data Integrity Violations

Introduction
Data integrity violations are the most common source of exceptions in database applications. How
you handle these exceptions depends on the context and scenario of the exceptions.
This discussion covers how data integrity violations occur, the type of exceptions produced by data
integrity violations, and the techniques for handling them.

Discussion Questions
1. Why is declarative referential integrity better for error handling?
2. How costly is rolling back a transaction when there is an error?
3. Is there anything to roll back when there is an integrity violation? How much of the transaction is
rolled back in such an event?
4. Do you roll back transactions in triggers? Is there an alternative?
5. Are there any differences in the behavior of the INSTEAD OF and AFTER triggers related to
data integrity checks?


Guidelines for Creating a User-Defined Exception Strategy

Introduction
The predefined messages in SQL Server 2005 might not be enough to provide appropriate
exception handling for the following reasons:
■ The message might be too cryptic for users to grasp.
■ The message might contain confidential internal information that is not intended for external
callers.
■ Some messages are to be handled only by system or database administrators.

Applications should implement a proper exception-handling strategy. To achieve this goal, the
applications should declare clear rules for using specific exceptions appropriate to the intended
recipient. The recipient of exceptions can be an end user, another application, or a component. The
recipient will use the exception and the information generated by the exception to determine a
specific course of action.

Creating a User-Defined Exception Strategy


When creating an exception strategy, you must:
■ Define a specific number range for user-defined exceptions.
• SQL Server 2005 defines valid user-defined error message numbers as integers starting from
50001. Make sure that the user-defined exception numbers that are used do not overlap with
other applications running in the same database server.
■ Identify clearly the exception types to be used and their severity levels.
■ Identify the messages that can be parameterized.
• Define correct message templates.
• Define the number of expected parameters for each message.



■ Identify whether the target of the messages is a system or a user.

• If the messages are meant for users, check the language used. Messages should contain as
much information as possible in order for users to react to the exception.
• Messages meant for an application or a system should contain only relevant information.
■ Design messages that are accepted internationally.
• Create as many localized messages as needed for all target languages.
• If a message is parameterized to accept dynamic arguments, validate whether the arguments are
in the correct language.

Important: Remember that only messages sent to the event log can fire SQL Server Agent alerts.
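Reserving a number range also makes user-defined exceptions easy to recognize on the client side. The following sketch separates application-defined errors from system errors by number; the handling shown (console output) is illustrative only.

```csharp
using System;
using System.Data.SqlClient;

class UserDefinedErrorSketch
{
    // User-defined message numbers start at 50001.
    const int FirstUserDefinedError = 50001;

    static void HandleSqlException(SqlException ex)
    {
        foreach (SqlError error in ex.Errors)
        {
            if (error.Number >= FirstUserDefinedError)
            {
                Console.WriteLine("Application-defined error {0}: {1}",
                    error.Number, error.Message);
            }
            else
            {
                Console.WriteLine("System error {0} (severity {1}): {2}",
                    error.Number, error.Class, error.Message);
            }
        }
    }
}
```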


Demonstration: Creating User-Defined Messages and Writing Applications to Use Them

Introduction
T-SQL provides various mechanisms to send user-defined error messages to applications.
Depending on the error type and severity, client applications can use these messages to determine
an appropriate course of action.

Demonstration Overview
In this demonstration, your instructor will illustrate how to create user-defined messages and write
applications to use them.

Task 1: Defining Messages in the Database Server


Task Overview
This task defines a series of messages in the sys.messages catalog view. Six stored procedures are
created to send messages to a calling client. Each stored procedure applies a different
technique to send a message to the calling client.
To define a series of messages in the sys.messages catalog view, perform the following steps.
1. Start SQL Server 2005 Management Studio, and use Windows Authentication to connect to
MIA-SQL\SQLINST1.
2. In D:\Democode\Section01\TSQLMessages, open Messages.sql.
3. Select the code from lines 4–26 that is marked “1. Create messages,” and then press F5 to
execute the code.
4. Select the code from line 31 of section 2 to line 73 of section 7, and then press F5 to execute the
code.

Task 2: Using T-SQL to Handle User-Defined Messages


Task Overview
This task executes the stored procedures defined in the previous task and shows how to handle
messages sent from the database server using T-SQL code.
To execute the stored procedures defined in the previous task, and to handle messages sent from the
database server using T-SQL code, perform the following steps.
1. In Messages.sql, scroll down to line 76 to the section marked “8. EXECUTE ALL.”
2. Select EXEC MSG_1, and then press F5.
3. Select EXEC MSG_2, and then press F5.
4. Select EXEC MSG_3, and then press F5.
5. Select EXEC MSG_4, and then press F5.
6. Select EXEC MSG_5, and then press F5.
7. Select EXEC MSG_6, and then press F5.
8. Open Windows Event Viewer, and then in the left pane, click the Application log.
Notice that the first message from the MSSQL$SQLINST1 source reads Error: 60003 Severity: 10
State: 1. This message is logged with xp_logevent.
9. Click OK to close the Event Properties window.
10. Close Event Viewer.

Task 3: Handling User-Defined Messages in a .NET Client Application


Task Overview
This task executes the stored procedures defined before, and shows how to handle messages sent
from the database server in a .NET Framework client application using ADO.NET.
To execute the stored procedures that were defined in the earlier tasks, and to handle messages sent
from the database server in a .NET Framework client application using ADO.NET, perform the
following steps.
1. Return to Microsoft Visual Studio 2005 Beta 2.
2. To open Solution Explorer, press CTRL+ALT+L.
3. In the ExecuteDBMessages project, open Program.cs.
4. Scroll down to line 15, where SqlInfoMessageEventHandler is declared.
5. Scroll down to the end of the file.
6. Right-click the ExecuteDBMessages project, and then on the shortcut menu, click Set as
StartUp Project.
7. To execute the console application, press F5.
8. The application will show the messages sent by each stored procedure.
The application also listens to Application Log in Windows Event Log and catches the message
logged by xp_logevent.
9. Close the console application window.
10. Close Microsoft Visual Studio 2005 Beta 2.
Section 2: Detecting Exceptions
Section Overview
The efficiency and stability of an application depend on its ability to handle run-time exceptions or
errors. You can develop an application to resolve exceptions in a better way if you have adequate
information about the exceptions, such as what caused them and whether they originated in the
server side or client side. With the help of this information, you can decide whether an exception
requires user interaction or can be resolved by the database system.
In this section, you will learn how exceptions can be detected in a database system. You will also
learn various design strategies for detecting exceptions at the appropriate client-side layer, and how
to obtain more information about these exceptions. Additionally, you will learn why compile-time
and run-time exceptions occur, and the guidelines for minimizing the problems caused by these
exceptions.
Section Objectives
■ Identify the database system layers where exceptions can be detected.
■ Explain how to obtain information about exceptions.
■ Explain the structure of the TRY…CATCH technique and the benefits of using it.
■ Explain why compile-time exceptions are generated, and explain the guidelines for minimizing problems caused by these exceptions.
■ Explain how to detect integrity exceptions.
■ Explain why run-time exceptions are generated, and explain the guidelines for minimizing problems caused by these exceptions.
■ Explain why deadlocks occur and how to detect them.
Overview of Obtaining Exception Information in the Database System
Introduction
In a database system, both the server and the client layers can raise exceptions to signal unexpected
conditions. When an exception is raised, both the database server and the client application should
obtain information about the exception by using adequate functions and programming techniques to
handle the exception appropriately.
Obtaining Exception Information in a Database Server
There are two techniques for obtaining exception information in a database server:
■ @@ERROR function: The @@ERROR function returns the error number of an error encountered by the previous statement. The functionality of @@ERROR is very limited, but this is the most common way of handling errors in T-SQL code. You can query sys.messages for more information about the error that has occurred.
■ TRY…CATCH construct: The TRY…CATCH construct passes program control to a CATCH block to process the error. You can use the error functions to retrieve more information about the exception.
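The two techniques can be contrasted in a short sketch. The table name below is assumed from the AdventureWorks sample database, and the DELETE is chosen to fail with a foreign-key violation:

```sql
-- Technique 1: @@ERROR must be tested immediately after each statement.
DELETE FROM Sales.SalesOrderHeader WHERE SalesOrderID = 43659;
IF @@ERROR <> 0
    PRINT 'The previous statement raised an error.';

-- Technique 2: TRY...CATCH centralizes handling for the whole block.
BEGIN TRY
    DELETE FROM Sales.SalesOrderHeader WHERE SalesOrderID = 43659;
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()  AS ErrorNumber,
           ERROR_MESSAGE() AS ErrorMessage;
END CATCH;
```

With @@ERROR, any intervening statement resets the value; the TRY…CATCH version keeps all the handling logic in one place.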
Obtaining Exception Information in a Client Layer
When you use the ADO.NET SqlClient provider:
■ Errors cause the SqlException exception to be thrown.
■ Informational messages do not alter the control flow and can be intercepted by the application code if the application code registers an event handler for the InfoMessage event.
Additional Information Read the section “SQL Server Error Messages” in SQL Server 2005
Books Online (BOL).
Demonstration: Obtaining Exception Information
Introduction
Exception-related information is critical for database applications and for database administrators.
Database applications obtain and use this information for handling exceptions. Database
administrators require this information to manage database systems.
Exception information is available to database applications if they use the T-SQL functions
provided by SQL Server. Database applications based on ADO.NET can use specific classes to
obtain exception information.
Database administrators can access exception information by using SQL Server Profiler.
Demonstration Overview
This demonstration illustrates how to obtain information about exceptions using T-SQL code, client
ADO.NET code, and the SQL Server Profiler tool.
Task 1: Obtaining Exception Information in SQL Server 2005
Task Overview
This task illustrates how errors are handled on a database server and which functions and variables
obtain exception information.
To obtain information about the exceptions that are handled on the SQL Server 2005 database,
perform the following steps.
1. Start SQL Server Management Studio and use Windows Authentication to connect to MIA-SQL\SQLINST1.
2. Open Project/Solution 2783M2L2Demonstrations.ssmssln located at D:\Democode\Section02\SSMS\ExceptionInformation, use Windows Authentication to connect to MIA-SQL\SQLINST1, and then open ErrorVariable.sql.
3. Scroll down to line 8, where the T-SQL code uses the sp_addmessage system stored procedure to declare a custom exception.
4. Show the RAISERROR expression in line 15.
5. Select the code from lines 1–11. To execute the selected code, press F5.
6. Select the code from lines 14–25. To execute the selected code, press F5.
7. View the results in the Results pane.
8. In Solution Explorer, in the 2783M2L2Demonstrations solution, open TryCatch.sql.
9. Scroll down to line 8, where the T-SQL code uses the sp_addmessage system stored procedure to
declare a custom exception.
10. Scroll down to line 14.
11. Show the function calls on line 23.
12. Select the code from lines 1–11. To execute the selected code, press F5.
13. Select the code from lines 14–30. To execute the selected code, press F5.
14. View the results in the Results pane.
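The sp_addmessage and RAISERROR pattern that both demo scripts follow can be sketched as below. The message number, severity, and text are illustrative, not those of the actual scripts:

```sql
-- Register a custom message, then raise it. All values are hypothetical.
EXEC sp_addmessage
    @msgnum   = 60005,
    @severity = 16,
    @msgtext  = N'Quantity %d exceeds the limit of %d.';

RAISERROR (60005, 16, 1, 150, 100);  -- fills the two %d placeholders
PRINT @@ERROR;                        -- @@ERROR now holds 60005
```

Because the severity is 11 or higher, @@ERROR is set to the message number, and a TRY…CATCH construct would transfer control to its CATCH block.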
Task 2: Using ADO.NET Objects and Functions That Provide Exception Information
Task Overview
This task illustrates how error information is retrieved on the client side. It also illustrates how much information Microsoft ADO.NET offers to handle exceptions.
To view how error information is retrieved on the client side and to understand how much information ADO.NET offers for handling exceptions, perform the following steps.
1. Open Microsoft Visual Studio 2005 Beta 2.
2. Browse to and open the 2783M2L2Demonstrations solution located at D:\Democode\Section02.
In the ClientApp project, open Program.cs.
3. Scroll down to line 15. The code adds an event handler for the InfoMessage event to the
SqlConnection class.
Notice line 22, where SqlClient.SqlCommand is used to execute a T-SQL statement.
Notice how the code encloses the SqlCommand.ExecuteScalar method call inside a try…catch block.
4. Scroll down to line 24, and notice that SqlException is handled in the first catch block.
5. Scroll down to line 60, and notice that Exception is handled in the second catch block.
6. Right-click the ClientApp project, and on the shortcut menu, click Set as StartUp Project. To
execute the client application, press F5.
Notice the information that the client application displays on the screen.
7. Close the console application window.
Task 3: Using SQL Server Profiler to Capture Exception Information
Task Overview
This task takes advantage of SQL Server Profiler to monitor an application for error-related
information.
To use SQL Server Profiler to monitor an application for error-related information, perform the
following steps.
1. Start SQL Server Profiler.
2. To create a new trace, on the File menu, select New Trace.
3. If prompted to connect to a database server, use Windows Authentication to connect to the MIA-
SQL\SQLINST1 server instance.
4. In the Trace Properties window, in the Use the Template list, select Blank.
5. Click the Events selection tab.
6. Select the following events from the Events selection tab:
a. Errors and Warnings: Attention
b. Errors and Warnings: Exception
c. Errors and Warnings: Execution Warnings
d. Errors and Warnings: User Error Message
7. To start the trace, click Run.
8. Return to SQL Server Management Studio, and run the code as you did in task 1.
9. Return to Microsoft Visual Studio 2005 Beta 2, and run the code as you did in task 2.
10. Return to SQL Server Profiler, and view the displayed trace messages and details.
11. Close SQL Server Profiler.
12. Close the console application window.
13. Close SQL Server Management Studio.
The TRY…CATCH Technique
Introduction
Using the TRY…CATCH construct expands the functionality of the @@ERROR function. The
TRY…CATCH construct provides the following advantages:
■ A structured programming model for easy management and maintenance of code
■ More contextual information with the ERROR functions
The TRY…CATCH Construct
A TRY…CATCH construct consists of two blocks, TRY and CATCH. The following is the
sequence in which a TRY…CATCH construct is executed:
1. When a TRY block detects an error, the control of the program is transferred to the CATCH
block. Otherwise, the control is transferred to the next statement after the END CATCH statement.
2. After the CATCH block has been executed, the control is transferred to the next statement after
END CATCH.
Note Although the TRY…CATCH construct is common in many programming languages, the
use of this construct varies with each implementation.
Gathering Information
Inside a CATCH block, you can use the following functions to obtain information about an
exception:
■ ERROR_LINE
■ ERROR_MESSAGE
■ ERROR_NUMBER
■ ERROR_PROCEDURE
■ ERROR_SEVERITY
■ ERROR_STATE
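All six functions can be exercised with a deliberately failing statement, for example a divide by zero:

```sql
BEGIN TRY
    SELECT 1 / 0;   -- forces a divide-by-zero run-time error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()    AS ErrorNumber,     -- 8134 for divide by zero
           ERROR_SEVERITY()  AS ErrorSeverity,
           ERROR_STATE()     AS ErrorState,
           ERROR_PROCEDURE() AS ErrorProcedure,  -- NULL outside a module
           ERROR_LINE()      AS ErrorLine,
           ERROR_MESSAGE()   AS ErrorMessage;
END CATCH;
```

Unlike @@ERROR, these functions keep their values for the duration of the CATCH block, no matter how many statements run inside it.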
Restrictions of the TRY…CATCH Construct
A TRY…CATCH construct cannot detect the following two error types:
■ Compilation errors, such as syntax errors
■ Statement-level recompilation errors, such as name resolution errors
These two types of errors are returned to the calling process.
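The distinction can be seen in a short sketch. The table name is deliberately nonexistent; a name-resolution error aborts the batch that contains the TRY block, but it is caught when it occurs at a lower execution level:

```sql
-- Not caught: the name-resolution error occurs in this same batch,
-- so the batch is aborted and the CATCH block never runs.
BEGIN TRY
    SELECT * FROM dbo.NoSuchTable;
END TRY
BEGIN CATCH
    PRINT 'This line is never reached.';
END CATCH;
GO

-- Caught: the same error now occurs one level lower (inside EXEC),
-- so it is returned to this batch and the CATCH block handles it.
BEGIN TRY
    EXEC ('SELECT * FROM dbo.NoSuchTable');
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- Invalid object name 'dbo.NoSuchTable'.
END CATCH;
```

The same pattern applies to stored procedure calls: a TRY…CATCH block can catch compile-time errors raised inside a procedure it calls.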
Additional Information Read the section “Using TRY…CATCH in Transact-SQL” in SQL Server 2005 Books Online.
Compile-Time Exceptions
Introduction
Compile-time exceptions prevent the database engine from building an execution plan. This type of exception is often produced by a syntax error in the code. Compile-time exceptions in stored procedures and in other objects are detected when the objects are created. As a result, application developers catch most of these errors early. However, when an application uses dynamic execution to run dynamically constructed queries, compile errors might occur at run time.
Why Compile-Time Exceptions Are Generated
The following are reasons for the generation of compile-time exceptions:
■ Extensive use of dynamic execution by client applications
■ Syntax errors during coding
Guidelines for Minimizing Problems Caused by Compile-Time Exceptions
The following are guidelines for minimizing problems caused by compile-time exceptions:
■ Applications should limit the use of dynamic execution, thereby minimizing the problems caused by compile-time exceptions.
■ Applications should have internal audit mechanisms to detect patterns that describe frequently executed queries, and should replace dynamic execution with well-defined stored procedures. This is especially important to avoid SQL injection attacks.
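One way to apply these guidelines where dynamic SQL cannot be avoided is to replace string concatenation with a parameterized call to sp_executesql. The table below is assumed from the AdventureWorks sample database:

```sql
DECLARE @name nvarchar(50);
SET @name = N'O''Brien';  -- input containing a quote is handled safely

-- Risky pattern (shown commented out): concatenation invites both
-- compile-time errors and SQL injection.
-- EXEC ('SELECT * FROM Person.Contact WHERE LastName = ''' + @name + '''');

-- Safer pattern: the statement text is constant; the value is a parameter.
EXEC sp_executesql
    N'SELECT ContactID, FirstName, LastName
      FROM Person.Contact
      WHERE LastName = @p',
    N'@p nvarchar(50)',
    @p = @name;
```

Because the statement text never changes, it compiles once, its plan can be reused, and user input can never alter the statement's structure.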
Demonstration: Detecting Integrity Exceptions
Introduction
Integrity violations are generated when a database operation breaks a constraint on a database
server. Integrity is enforced by declarative integrity checks and procedural integrity
implementations.
Declarative integrity violations are managed by the database server, and the client application needs
to handle the exception raised by the server.
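On the server side, a declarative integrity violation surfaces as a well-known error number that a CATCH block can test for. The table and column below are hypothetical:

```sql
BEGIN TRY
    INSERT INTO dbo.Table1 (ID) VALUES (1);
    INSERT INTO dbo.Table1 (ID) VALUES (1);  -- violates a UNIQUE constraint
END TRY
BEGIN CATCH
    -- 2627 = PRIMARY KEY/UNIQUE violation; 547 = CHECK or FOREIGN KEY violation
    IF ERROR_NUMBER() IN (2627, 547)
        PRINT 'Integrity violation: ' + ERROR_MESSAGE();
    ELSE
        PRINT 'Unexpected error: ' + ERROR_MESSAGE();
END CATCH;
```

A client application can apply the same test to the Number property of the SqlException it receives.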
Demonstration Overview
In this demonstration, your instructor will illustrate how to detect integrity exceptions.
Task 1: Detecting Entity Integrity Violations
Task Overview
This task shows how to handle entity integrity violations in the ADO.NET code in a .NET
Framework client application.
To handle entity integrity violations in ADO.NET code in a .NET Framework client application,
perform the following steps.
1. Return to the Microsoft Visual Studio 2005 Beta 2.
2. Browse to and open MOC2783M2L2Demonstrations.sln.
3. In the ResetDatabase project, in the Create Scripts folder, open the CreateDB.sql file.
4. Scroll down to the definition of the Table1 table in line 74.
5. Scroll down to the definition of the InsertTable1SP stored procedure in line 122.
6. Right-click CreateDB.sql, and then on the shortcut menu, click Run.
7. If prompted to select a database reference, from the Available References list, choose mia-sql\sqlinst1.master.dbo, or click Add New Reference. In the New Database Reference window, in the Server Name box, type MIA-SQL\SQLINST1, select Windows Authentication, and then select Master from the list of databases.
8. In the EnforceConstraints project, in code view, open Form1.cs.
9. Scroll to line 22 to view the ADO.NET code that executes the InsertTable1SP stored procedure.
Notice that lines 33 and 34 caused the UNIQUE constraint violation because they tried to insert the
same value twice.
10. Right-click the EnforceConstraints project, and then on the shortcut menu, click Set as
StartUp Project.
11. To execute the application, press F5.
12. On the sample application, click Entity Integrity.
13. Click OK to close the message box.
14. Close the Checking Integrity Demo application.
Task 2: Detecting Referential Integrity Violations
Task Overview
This task shows how to handle referential integrity violations in the ADO.NET code in a .NET
Framework client application.
To handle referential integrity violations in ADO.NET code in a .NET Framework client
application, perform the following steps.
1. In Microsoft Visual Studio 2005 Beta 2, in the ResetDatabase project, in the Create Scripts
folder, open CreateDB.sql.
2. Scroll down to the definition of the Table2 table in line 92.
3. Scroll down to the definition of the InsertTable2SP stored procedure in line 146.
4. Scroll down to the definition of the referential integrity constraint in line 190.
5. In the EnforceConstraints project, in code view, open Form1.cs.
6. Scroll to line 56 to view the ADO.NET code that executes the InsertTable2SP stored procedure.
Note that line 62 caused the referential integrity violation. This is because line 62 tried to insert an
ID that did not exist in the referenced table, Table1.
7. Right-click the EnforceConstraints project, and then on the shortcut menu, click Set as StartUp
Project.
8. To execute the application, press F5.
9. On the sample application, click Referential Integrity.
10. Click OK to close the message box.
11. Close the Checking Integrity Demo application.
Task 3: Declaring Check Constraints
Task Overview
This task shows how to handle check constraint violations in the ADO.NET code in a .NET
Framework client application.
To handle check constraint violations in ADO.NET code in a .NET Framework client application,
perform the following steps.
1. In Microsoft Visual Studio 2005 Beta 2, under the ResetDatabase project, open CreateDB.sql.
2. Scroll down to the definition of the Table3 table in line 107.
3. Scroll down to the definition of the InsertTable3SP stored procedure in line 172.
4. Scroll down to the definition of the check constraint in line 193.
5. In the EnforceConstraints project, in code view, open Form1.cs.
6. Scroll down to line 90 to view the ADO.NET code that executes the InsertTable3SP stored
procedure.
Notice that line 96 caused the check constraint violation because it tried to insert a non-numeric
string into the VALUE column.
7. Right-click the EnforceConstraints project, and then on the shortcut menu, click Set as StartUp
Project.
8. To execute the application, press F5.
9. On the sample application, click Check Constraint.
10. Click OK to close the message box.
11. Close the Checking Integrity Demo application.
Run-Time Exceptions
Introduction
Run-time exceptions prevent the database engine from finishing the execution of a statement. This
type of exception is difficult to detect during the development phase, because it is introduced as a
result of unexpected behavior, such as passing incorrect arguments to functions and assigning data
that exceeds the limits of a data type.
Why Run-Time Exceptions Are Generated
The following are reasons for the generation of run-time exceptions:
■ Arithmetic errors
■ Conversion errors
■ Transactional and locking errors
■ Unexpected server events, such as the shutdown of a server, that affect the application
Guidelines for Minimizing Problems Caused by Run-Time Exceptions
Following are guidelines for minimizing problems caused by run-time exceptions:
■ Write code to check for potential sources of run-time errors and to avoid executing actions that can lead to these events. However, this might not always be possible.
■ Prevent arithmetic errors by explicit type casting and type validation. Raise appropriate errors so that the calling process can trap them. This gives the calling process a chance to handle the error appropriately.
■ Avoid conversion errors by properly using casting and type conversion functions. Use data types in objects and queries consistently to avoid data type overflow.
■ Locking errors cannot generally be prevented, but a well-designed transaction with an appropriate selection of transaction isolation levels can drastically minimize this type of error.
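The overflow guideline, for example, can be applied by checking in a wider type before performing the arithmetic, raising a deliberate error instead of letting the arithmetic overflow occur. The values below are illustrative, and 2147483647 is the int maximum:

```sql
DECLARE @a int, @b int;
SET @a = 2000000000;
SET @b = 2000000000;

-- @a + @b evaluated as int would raise an arithmetic overflow error.
-- Checking in a wider type turns the run-time exception into a handled case.
IF CAST(@a AS bigint) + CAST(@b AS bigint) <= 2147483647
    SELECT @a + @b AS Total;
ELSE
    RAISERROR ('The result would overflow the int type.', 16, 1);
```

The RAISERROR call gives the calling process a single, predictable error to trap instead of an unplanned engine error.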
Demonstration: Detecting Deadlocks
Introduction
A deadlock is a condition in which two transactions or processes cannot proceed because they are
waiting for each other’s mutually locked resources. SQL Server 2005 implements an automatic
deadlock detection engine that detects and breaks a deadlock by terminating one of the processes.
A client application should be developed to handle this type of exception and re-execute the
statement that caused the deadlock.
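Because SQL Server reports the terminated victim with error 1205, a retry loop is the usual response. The same idea can also be sketched in T-SQL; the loop body below is a placeholder and the retry count is arbitrary:

```sql
DECLARE @retries int;
SET @retries = 3;

WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- ... statements that may participate in a deadlock ...
        COMMIT TRANSACTION;
        SET @retries = 0;                 -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0
            ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205          -- chosen as the deadlock victim
            SET @retries = @retries - 1;  -- try the transaction again
        ELSE
            SET @retries = 0;             -- a different error: stop retrying
    END CATCH;
END;
```

A client application would implement the equivalent loop around its command execution, testing the Number property of the SqlException for 1205.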
Demonstration Overview
In this demonstration, your instructor will explain why deadlocks occur and how to detect them.
Task 1: Understanding Deadlocks
Task Overview
This task illustrates how deadlock conditions are created. This task executes two transactions; the
first transaction blocks the second transaction to create a deadlock condition.
To create a transaction block and to create a deadlock condition, perform the following steps.
1. In Microsoft Visual Studio 2005 Beta 2, in the DeadLock project, open Program.cs.
2. Scroll down to the declaration of the Main method in line 75.
3. Review the code for the method.
4. Scroll down to the declaration of the P1 method in line 13.
5. Review the code for the method.
6. Scroll down to the declaration of the P2 method in line 44.
7. Review the code for the method.
Task 2: Monitoring a Deadlock Condition
Task Overview
This task sets up a monitoring environment to catch any deadlock condition occurring in a database
server.
To set up a monitoring environment to catch any deadlock conditions occurring in a database
server, perform the following steps.
1. Start SQL Server Profiler.
2. To create a new trace, on the File menu, select New Trace.
3. If prompted to connect to a database server, connect to the MIA-SQL\SQLINST1 server
instance.
4. In the Trace Properties window, in the Use the template list, select Blank.
5. Click the Events selection tab.
6. Select the following events from the Events Selection tab:
a. Locks: Deadlock Graph
b. Locks: Lock:Deadlock
c. Locks: Lock:Deadlock Chain
7. To start the trace, click Run.
Task 3: Creating a Deadlock
Task Overview
In this task, you solve a deadlock condition by terminating the blocking process or by rolling back
the blocking transaction.
To solve a deadlock condition by terminating the blocking process or rolling back the blocking
transaction, perform the following steps.
1. Switch to Microsoft Visual Studio 2005 Beta 2.
2. Right-click the DeadLock project, and then on the shortcut menu, click Set as StartUp Project.
3. To run the application, press F5.
The application opens a console window.
View the on-screen messages. To close the application, press ENTER when the message “Hit ENTER to finish” appears.
4. Switch to SQL Server Profiler.
View the logged messages.
5. Click the row on which the event from the Deadlock graph is logged to show its contents in the
lower panel.
SQL Server Profiler shows a graph with two circles representing client processes connected through lines to two boxes representing locks.
6. View the T-SQL statements that blocked these processes.
7. Close SQL Server Profiler.
8. Close Microsoft Visual Studio 2005 Beta 2.
Section 3: Managing Exceptions
Section Overview
Exceptions can be defined as unexpected events that occur during the execution of an application. If
exceptions are not handled by the application itself, they are handled by the operating system. In
this case, the operating system terminates the application process. When users try to handle
exceptions, it often results in loss of data.
An exception handling strategy is often not taken into account when you consider the business
requirements for an application. As a software developer or a database architect, you must
understand that exception handling techniques affect the performance of an application, which in
turn can affect the company’s image and revenue.
In this section, you will learn how to log and communicate exceptions according to your business
requirements. You will also learn how to manage exceptions on the server and on the client and
how to transfer messages between them. In addition, you will learn about SQL injection attacks and
how to avoid these attacks.
Section Objectives
■ Explain the considerations for managing exceptions.
■ Explain the considerations for logging exceptions and for determining the locations where exceptions can be stored.
■ Explain the importance of producing meaningful and useful error messages.
■ Explain how to manage exceptions on the server and on the client and how to transfer messages between them.
■ Apply the guidelines for proactively managing exceptions.
■ Explain the guidelines for filtering user input to avoid SQL injection attacks.
Considerations for Managing Exceptions
Introduction
When designing a system, you must ensure that the system is capable of:
■ Gathering information.
■ Logging error information.
■ Sending notifications.
■ Performing cleanup.
■ Displaying information to users.
By building these capabilities into applications, developers can develop self-protecting applications
that will notify system administrators whenever there is an unexpected situation that the application
cannot handle. Developers can also create applications that give adequate contextual information
and feedback on the problems. This information will enable the development team to analyze and
rectify the problem.
In this topic, you will learn about the various considerations for managing exceptions.
Transferring Messages to Other Layers
Exceptions should be handled directly by the component or layer where the exception occurs. These
components or layers should be designed to:
■ Gather information for logging.
■ Add any relevant information to the exception.
■ Execute cleanup code.
■ Attempt to recover from the exception.
Unhandled exceptions should propagate to the caller. An exception propagates up the call stack to
the last point or boundary at which the application can handle the exception and return to the user.
Logging Exceptions
If an exception is not logged, it might not be noticed by the system manager or the development team. By logging exceptions, the development team ensures that it will be notified about problems from which the application cannot recover by itself.
Note Logging information without proper planning might result in circular logging. In circular
logging, certain logged information triggers a process that logs more information. Thus,
circular logging starts an endless process of logging information, which consumes important
system resources.
Solving Problems That Cause Exceptions
To solve the problems that cause an exception, you must capture all the appropriate information that
accurately represents the exception condition.
Exception solving involves users, application developers, system operators, and administrators.
Each will be interested in different types of information about an exception.
Preventing Exceptions
Monitoring and notifications are two important processes by which you can prevent exceptions
from occurring.
■ Monitoring: An application should provide monitoring tools to enable system administrators to determine the health of the application.
■ Notifications: An application should quickly and accurately notify system administrators of any problem that the application experiences. Without appropriate notifications, exceptions might not be detected.
Guidelines for Logging Exceptions
Introduction
Applications should be designed to expose instrumentation information. The instrumentation
information should be decoupled from the application code. Logging is an important part of the
instrumentation of applications. If exceptions are logged, developers need not change the
application source code every time a modification is required in the notification mechanism.
In this topic, you will learn about exception logs and the guidelines for logging exceptions.
Exception Logs
The exception log should record all the necessary information so that the different stakeholders can
resolve the exception. Moreover, the logs should be located close to the layer where the exception
occurs, and the logs should be easily accessible to the system administrators. The log format and storage depend on the log location. The log files can be maintained in textual, XML, relational, or custom binary format.
Guidelines
When you are designing an application, you should follow certain guidelines for logging
exceptions.
The following are some of the important guidelines that you should follow for logging exceptions:
■ The application should keep a log for each application boundary.
• Exceptions are produced by different layers. These layers should be as self-contained and self-sufficient as possible. By maintaining a log for each application boundary, the dependency of an application layer on other layers to log exceptions is eliminated.
• Client-side log (if communication with the server is intermittent or if the application is written as a Smart Client application)
• Server-side log
■ Create several types of logs, each type depending on the type of information that the log will hold and the type of actions required.
• Each log typically serves a different purpose, and can be analyzed by different teams. Splitting the log information into different logical logs makes the processing of these logs more efficient.
• The following are various kinds of logs you can create:
○ Administrative log
○ Reporting log
○ Communications log
○ Custom log
■ Choose the relevant log format and storage option.
• Each log format and storage type has specific advantages and disadvantages and serves different purposes. The log should be in an appropriate format for the callers to process.
• Each storage option provides various log formats such as textual, XML, relational, and custom binary format. The storage options include:
○ Enterprise Instrumentation Framework (EIF)
○ Windows Event Log service
○ A central, relational database such as SQL Server 2005
○ Custom log file
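When the chosen storage option is a relational database, the server-side half of the pattern reduces to an INSERT inside the CATCH block. The log table below is hypothetical; its schema is illustrative only:

```sql
-- Hypothetical server-side log table.
CREATE TABLE dbo.ErrorLog (
    LogID     int IDENTITY(1,1) PRIMARY KEY,
    LoggedAt  datetime       NOT NULL DEFAULT GETDATE(),
    ErrorNum  int            NOT NULL,
    Severity  int            NOT NULL,
    ProcName  sysname        NULL,
    ErrorText nvarchar(2048) NOT NULL
);
GO

BEGIN TRY
    SELECT 1 / 0;  -- any failing statement
END TRY
BEGIN CATCH
    INSERT INTO dbo.ErrorLog (ErrorNum, Severity, ProcName, ErrorText)
    VALUES (ERROR_NUMBER(), ERROR_SEVERITY(),
            ERROR_PROCEDURE(), ERROR_MESSAGE());
END CATCH;
```

Keeping the INSERT in a dedicated logging procedure decouples the logging mechanism from the application code, as the guideline above recommends.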
Discussion: Producing Meaningful and Useful Error Messages
Introduction
You should produce meaningful and useful error messages relevant to the given caller. The caller
can be a user, another internal component, or an external application. The error messages should
contain information that will enable the caller to resolve the exception.
Discussion Questions
1. Why do you use error messages?
2. Do you transfer system exceptions directly to other layers, or do you transfer your own custom
exceptions?
3. Are your messages designed to be understood by the target audience? For example, do you create
different types of messages for people and components?
4. Do you parameterize messages?
5. Do you try to avoid sending messages when there is nothing that the target user or component
can do about the error?
6. Is it important to use exception logging and monitoring mechanisms to facilitate administration
and operations?
Demonstration: Managing Server-Side Errors from the Server and the Client
Introduction
The capabilities for handling and managing errors in the server components and the client
components of an application vary.
Following is the process of handling an exception in a distributed application:
1. The server handles the error on the server-side and logs the error for the system administrator to
check.
2. The server-side error is then transferred from the server to the client.
3. The client handles the error and reacts to it. The client application might decide to present a
message to the user.
This demonstration will show how to evaluate the different techniques for managing database
exceptions on client applications.
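The server-side half of this cycle (steps 1 and 2) can be sketched in T-SQL: catch the error, record it, and then re-raise it so the client connection receives it. The table, column, and logging step below are illustrative; the WHERE clause is chosen to match no rows:

```sql
BEGIN TRY
    UPDATE Sales.CreditCard
    SET    ExpYear = 2009
    WHERE  CreditCardID = -1;  -- deliberately matches nothing

    IF @@ROWCOUNT = 0
        RAISERROR ('No credit card row was updated.', 16, 1);
END TRY
BEGIN CATCH
    DECLARE @msg nvarchar(2048), @sev int;
    SELECT @msg = ERROR_MESSAGE(), @sev = ERROR_SEVERITY();

    -- Step 1: log the error here (for example, INSERT into a log table).
    -- Step 2: surface the error to the client connection.
    RAISERROR (@msg, @sev, 1);
END CATCH;
```

On the client side (step 3), the re-raised error arrives as a SqlException, which the application can present to the user or handle silently.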
Demonstration Overview
The instructor will demonstrate a complete cycle for handling an error both on the server side and
on the client side.
The application will cause a data integrity conflict while updating data inside a DataSet object.

Task 1: How Server-Side Objects Handle Errors


Task Overview
This task shows how errors are handled on the server—for example, on a SQL Server database.

To handle errors on the SQL Server database, perform the following steps.

1. Open Microsoft Visual Studio 2005 Beta 2.


2. In D:\Democode\Section03, browse to and open 2783M2L3Demonstrations.sln.
3. In the ResetDatabase project, open the Create Scripts folder, and then open
UpdateCreditCard.sql.
4. Right-click UpdateCreditCard.sql file, and click Run.
5. If prompted to select a database reference, from the Available References list, choose mia-
sql\sqlinst1.AdventureWorks.dbo, or click Add New Reference. In the New Database Reference
window, in the Server Name box, type MIA-SQL\SQLINST1, select Windows Authentication,
and select AdventureWorks from the list of databases.

Task 2: How Application Developers Handle Errors on the Client Side


Task Overview
This task shows how to handle errors on the client application.
To handle errors on the client application, perform the following steps.
1. In 2783M2L3Demonstrations.sln, browse to the ManagingErrors project, and then open
DataLayer.cs.
There are two methods defined in the DataLayer class.
The GetCreditCardRecords method queries the database for records. This method does not
implement any error handling routines.
The UpdateCreditCardRecords method updates the database. This method implements error
handling routines.
2. In DataLayer.cs, scroll down to line 53, where the first Catch construct begins.
3. Scroll down to line 59, where the second Catch construct begins.
4. Scroll down to line 74, where the third Catch construct begins.
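The three Catch constructs follow the usual rule of ordering handlers from most specific to least specific. As a rough illustration only (the demo code itself is C# against SQL Server), a Python/sqlite3 sketch of an UpdateCreditCardRecords-style method might look like this; the table name and exception types here are stand-ins:

```python
import logging
import sqlite3

log = logging.getLogger("DataLayer")

def update_credit_card_records(conn, card_id, exp_month):
    """Update one row; handlers are ordered most specific to least specific."""
    try:
        cur = conn.execute(
            "UPDATE CreditCard SET ExpMonth = ? WHERE CreditCardID = ?",
            (exp_month, card_id))
        conn.commit()
        return cur.rowcount
    except sqlite3.IntegrityError as exc:    # data conflicts: constraint violations
        conn.rollback()
        log.error("Integrity error on row %s: %s", card_id, exc)
        raise
    except sqlite3.OperationalError as exc:  # environment problems: locks, missing objects
        conn.rollback()
        log.error("Operational error: %s", exc)
        raise
    except Exception as exc:                 # anything unexpected: log, then re-raise
        conn.rollback()
        log.error("Unexpected error: %s", exc)
        raise
```

The specific handlers deal with conditions the caller can act on; the final handler only logs and re-raises, so unexpected errors are never silently swallowed.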

Task 3: Solving Update Conflicts in a Data Set


Task Overview
This task executes a client application and shows how to cause and handle a data integrity error.
To execute a client application and understand how to handle a data integrity error, perform the
following steps.
1. Start SQL Server Management Studio, and use Windows Authentication to connect to MIA-
SQL\SQLINST1.
2. If Object Explorer is not already open, press F8 to open it.
3. Expand the databases folder, open the AdventureWorks database, open the Tables folder, and
scroll down to the Sales.CreditCard table.
4. Right-click the Sales.CreditCard table, and click Open Table.
SQL Server Management Studio displays all the credit cards that are registered on the database.

5. Return to Microsoft Visual Studio 2005 Beta 2.

6. Right-click the ManagingErrors project, and on the shortcut menu, click Set as StartUp Project.
7. To execute the application, press F5.
The client application executes, opens, and displays the data.
8. For the first credit card on the list with CreditCardID value equal to one, double-click the
ExpMonth column to enter the edit mode, and then enter a value greater than or equal to 1, but less
than or equal to 12.
9. After making the change, press ENTER to exit the edit mode and move to the second row.
10. For the second credit card on the list with CreditCardID value equal to two, double-click the
ExpMonth column to enter the edit mode, and then enter a value greater than or equal to 1, but less
than or equal to 12.
11. After making the change, press ENTER to exit the edit mode.
12. Return to SQL Server Management Studio. For the first credit card on the list with
CreditCardID value equal to one, in the CreditType column, change the card type from
“SuperiorCard” to “InferiorCard.”
13. After making the change, press ENTER to exit the edit mode.
14. Return to the Credit Card Manager application.
15. On the toolbar, click Save Updates.
The first row is marked with a red exclamation point, indicating that there is a concurrency
violation on this record.
16. Open Event Viewer, and in the left pane of the Event Viewer, select the Application log.
17. In the Source column, at the top of the list, double-click the CreditCardManager event and
read the message. “Row ID:1, cause Concurrency violation: the UpdateCommand affected 0 of the
expected 1 records.”
18. Click OK to close the Event Properties dialog box.
19. Close Event Viewer.
20. Close Credit Card Manager.
21. Close SQL Server Management Studio.
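The concurrency violation in this task comes from optimistic concurrency: the generated UPDATE includes the originally loaded values in its WHERE clause, so a row changed in the meantime by another session matches zero rows. A minimal sketch of the mechanism, using Python and sqlite3 as stand-ins for the demo's C# and SQL Server code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CreditCard "
             "(CreditCardID INTEGER PRIMARY KEY, CardType TEXT, ExpMonth INTEGER)")
conn.execute("INSERT INTO CreditCard VALUES (1, 'SuperiorCard', 6)")

# The client application loads the row and remembers the original values.
original = conn.execute(
    "SELECT CardType, ExpMonth FROM CreditCard WHERE CreditCardID = 1").fetchone()

# Meanwhile another session changes the same row (the Management Studio edit).
conn.execute("UPDATE CreditCard SET CardType = 'InferiorCard' WHERE CreditCardID = 1")

# The client's update matches only if the row still holds the original values.
cur = conn.execute(
    "UPDATE CreditCard SET ExpMonth = ? "
    "WHERE CreditCardID = ? AND CardType = ? AND ExpMonth = ?",
    (9, 1, original[0], original[1]))

if cur.rowcount == 0:
    print("Concurrency violation: the UpdateCommand affected 0 of the expected 1 records.")
```

A row count of zero where one row was expected is exactly the condition reported in the Application log in step 17.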


Guidelines for Proactively Managing Exceptions

Introduction
Exception handling should start as a reactive measure. However, as an application matures through
several application development life cycles, common exceptions should be handled and addressed
to prevent such exceptions from recurring.
By following guidelines to prevent exceptions, you can transform exception handling into a
proactive measure that will improve the quality of the application and enhance the user experience.

Guidelines for Managing Exceptions


The following are some of the guidelines for proactively managing exceptions.
„ Check error logs for common exceptions.
• System administrators and application developers should constantly check error logs to identify
exceptions that occur regularly.
• Each exception should provide a priority attribute so that the application can determine
whether handling should be elevated to system administrators.
• Exceptions that occur regularly should be addressed in the next application build. Leaving such
exceptions unhandled can affect the performance and scalability of the application and can
pose a security threat.
• Pay attention to errors caused when crossing application boundaries, accessing a remote data
source, executing a remote procedure call, or serializing data.
„ Analyze the causes of common exceptions.
• Determine the cause, and during the next application build, develop a plan to address the
exception.
• Study closely the exceptions that recur.

„ Prioritize the exceptions to be addressed.

• The development team can concentrate on critical exceptions.


• Prioritize exceptions so that the development team can concentrate on exceptions that:
ƒ Need urgent attention.
ƒ Can cause the system to fail.
ƒ Do not permit a system module to work properly.
ƒ Propagate all the way to the user’s screen and do not provide
enough information for the user to understand what is happening.
„ Address exceptions in the next application build.
• Take necessary steps to eliminate the cause of the exception.
• Run test scripts again to check that:
ƒ Errors are fixed.
ƒ Code modifications did not introduce new errors.
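The guidelines above need very little tooling to put into practice. As an illustrative sketch only (the log format and priority values are hypothetical), recurring high-priority exceptions can be counted from an exception log to build the backlog for the next application build:

```python
from collections import Counter

# Entries as an exception log might record them: (priority, exception type).
log_entries = [
    ("high", "ConcurrencyViolation"),
    ("low",  "TimeoutExpired"),
    ("high", "ConcurrencyViolation"),
    ("high", "ForeignKeyViolation"),
    ("high", "ConcurrencyViolation"),
]

# The priority attribute lets routine review focus on what must be escalated;
# recurring high-priority exceptions become candidates for the next build.
counts = Counter(kind for priority, kind in log_entries if priority == "high")
backlog = [kind for kind, _ in counts.most_common()]
print(backlog[0])
```

Sorting by frequency puts the most common high-priority exception at the top of the backlog, which is where the development team should concentrate first.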


Demonstration: Filtering User Input to Avoid SQL Injection Attacks

Introduction
An SQL injection attack is a technique in which the administrators can use non-validated input to
pass SQL commands to a database for execution. You can prevent SQL injection attacks by

Demonstration Overview
A SQL injection attack is a technique used to pass and execute SQL commands through non-
validated inputs to a database management system.
The instructor will demonstrate how to cause and prevent a SQL injection attack. This demonstrates
a proactive measure to avoid unhandled exceptions.

Task 1: Viewing the Code of an Insecure Client Application


Task Overview
This task shows a client application that is written with insecure code and permits SQL injection
attacks.
To view a client application that permits SQL injection attacks to occur, perform the following
steps.
1. Return to Microsoft Visual Studio 2005 Beta 2.
2. In 2783M2L3Demonstrations.sln, expand the SQLInjection project.
3. Open Attack.cs in code view.
4. Scroll down to line 22, where the selectCommand variable is initialized.

Task 2: Launching an SQL Injection Attack

Task Overview
This task executes an application and launches an SQL injection attack on a client.
To execute an application and launch an SQL injection attack on a client, perform the following
steps.
1. Open Program.cs.
Notice that there are two lines of code. One line of code executes the Attack form, and the other
executes the Defend form.
2. Type // before the Application.Run(new Defend()); line of code to make it a comment, and
make sure that the Application.Run(new Attack()); line of code is not commented out.
3. Right-click the SQLInjection project and select Set as StartUp Project.
4. To execute the client application, press F5.
The application opens the Attack window, which contains a box, a search button, and a grid to
display the results.
5. In the box, type 3333112, and then click Search.
6. Browse the data grid by clicking the plus (+) signs until the results appear. Notice that nine rows
are displayed.
7. In the box, type 3333112’ or 1=1 --, and then click Search.
8. Browse the data grid by clicking the plus (+) signs until the results appear. Notice that several
rows are displayed.
9. In the box, type 3333112%’; SELECT * FROM Sales.Currency --, and then click Search.
10. Browse the data grid by clicking the plus (+) signs. Notice that Table and Table1 are the two
results of the search.
11. To view the expected results, select Table.
12. Browse back by clicking the left arrow on the top right corner of the grid.
13. Select Table1 to view the currency list, which is an unexpected result.
14. Close the Search Form – Attack application.
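The attack in this task works because the search text is concatenated into the query string. The same behavior, and the parameterized fix examined in the next task, can be reproduced in miniature with Python and sqlite3 (a stand-in for the demo's C# and SQL Server code; the table and data here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CreditCard (CardNumber TEXT, CardType TEXT)")
conn.executemany("INSERT INTO CreditCard VALUES (?, ?)",
                 [("3333112", "SuperiorCard"), ("9999999", "Vista")])

def search_vulnerable(user_input):
    # Vulnerable: the user's text is concatenated directly into the SQL statement.
    sql = "SELECT * FROM CreditCard WHERE CardNumber LIKE '" + user_input + "%'"
    return conn.execute(sql).fetchall()

def search_safe(user_input):
    # Safe: the user's text travels as a parameter value, never as SQL text.
    return conn.execute("SELECT * FROM CreditCard WHERE CardNumber LIKE ?",
                        (user_input + "%",)).fetchall()

print(len(search_vulnerable("3333112")))       # matches the intended row only
print(len(search_vulnerable("x' OR 1=1 --")))  # the injected OR leaks every row
print(len(search_safe("x' OR 1=1 --")))        # the same text now finds nothing
```

With parameters, the injected quote and OR clause are treated as literal characters to match, not as SQL syntax, so the attack string simply fails to find any rows.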

Task 3: Viewing the Code of a Secure Application


Task Overview
This task shows a client application that proactively protects itself from SQL injection attacks.
To view how a client application proactively protects itself from SQL injection attacks, perform the
following steps.
1. Return to Microsoft Visual Studio 2005 Beta 2.
2. In 2783M2L3Demonstrations.sln, expand the SQLInjection project.
3. Open Defend.cs in code view.
4. Scroll down to line 19, where the btSearch_Click method is defined.

5. In the ResetDatabase project, open the Create Scripts folder, and then open SearchCreditCard.sql.

6. Right-click SearchCreditCard.sql, and select Run.


7. If prompted to select a database reference, from the Available References list, click mia-
sql\sqlinst1.AdventureWorks.dbo, or click Add New Reference. In the New Database Reference
window, in the Server Name box, type MIA-SQL\SQLINST1, select Windows Authentication,
and from the list of databases, select AdventureWorks.
8. In the SQLInjection project, open Program.cs.
Notice that there are two lines of code. One line of code executes the Attack form, and the other
executes the Defend form. Type // before the Application.Run(new Attack()); line of code to
make it a comment, and remove the // before the Application.Run(new Defend()); line of code so
that the Defend form runs.
9. To execute the SQLInjection client application, press F5.
10. Execute the application again by following the same steps that you followed in Task 2.
11. Close Search Form – Defend.
12. Close Microsoft Visual Studio 2005 Beta 2.


Next Steps

Introduction
The information in this section supplements the content provided in Session 2.
Next steps include visiting the Microsoft Web site to download:
„ Exception Management Architecture Guide
• http://www.microsoft.com/downloads/details.aspx?FamilyId=73742594-DB15-4703-8892-
75A569C4EB83&displaylang=en


Discussion: Session Summary

Discussion Questions
1. What was most valuable to you in this session?
2. Based on this session, have you changed your mind about anything?
3. Are you planning to do anything differently on the job based on what you learned in this session?
If so, what?

Session 3: Choosing a Cursor Strategy

Contents
Session Overview 1
Section 1: Common Scenarios for Cursor-
Based vs. Result Set–Based Operations 2
Section 2: Selecting Appropriate Server-Side
Cursors 13
Section 3: Selecting Appropriate Client-Side
Cursors 24
Next Steps 34
Discussion: Session Summary 35

Information in this document, including URL and other Internet Web site references, is subject to
change without notice. Unless otherwise noted, the example companies, organizations, products,
domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious,
and no association with any real company, organization, product, domain name, e-mail address,
logo, person, place or event is intended or should be inferred. Complying with all applicable
copyright laws is the responsibility of the user. Without limiting the rights under copyright, no
part of this document may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or
otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations and warranties, either expressed, implied, or statutory,
regarding these manufacturers or the use of the products with any Microsoft technologies. The
inclusion of a manufacturer or product does not imply endorsement by Microsoft of the
manufacturer or product. Links are provided to third-party sites. Such sites are not under the
control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link
contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for
webcasting or any other form of transmission received from any linked site. Microsoft is providing
these links to you only as a convenience, and the inclusion of any link does not imply endorsement
by Microsoft of the site or the products contained therein.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual
property rights covering subject matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual property.

© 2006 Microsoft Corporation. All rights reserved.

Microsoft, <The publications specialist places the list of trademarks provided by the copy editor
here. Microsoft is listed first, followed by all other Microsoft trademarks in alphabetical order.>
are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or
other countries.

<The publications specialist inserts mention of specific, contractually obligated to, third-party
trademarks, provided by the copy editor>

All other trademarks are property of their respective owners.


Session Overview


Relational database systems work in a set-oriented manner. To save valuable system resources, they
use specific algorithms to optimize the processing of multiple rows.
However, many applications work with individual objects that are stored in individual rows in one
or more tables. This is often referred to as row-by-row processing, because the applications need to
access the database one row at a time.
All SQL dialects provide cursors to support row-by-row operations. As developers and database
administrators, you must understand the importance of cursors in a database application and ensure
that the cursors perform only those functions that they were designed to perform.
Depending upon the scenario, it might be appropriate to use server-side cursors or client-side
cursors. In some cases, it is better not to use cursors at all.
There are no definite guidelines on whether or not to use cursors. Each programming technique has
its own advantages and disadvantages, and this is true of cursors as well. Identifying the
appropriate scope for cursors is the main purpose of this session. It explains the scenarios in which
cursors are appropriate and how to use them while making optimal use of system resources.

Session Objectives
„ Explain when cursors are appropriate and when they are not.
„ Explain the considerations for selecting server-side cursors.
„ Explain the considerations for selecting client-side cursors.


Section 1: Common Scenarios for Cursor-Based vs. Result Set–Based Operations


Section Overview
Many applications use cursors. Developers, as well as systems and database administrators, must
understand why they use cursors instead of other programming techniques.
This section compares the different programming techniques that you can use when dealing with
operations that are ideal for row-by-row operations. This section also explains the scenarios in
which you should use a cursor-based approach and the scenarios in which you should use a set-
based approach.
Database programmers should select an approach based on the design of the database system. They
should also consider the performance, usability, and maintainability of the system.
In this section, you will learn when cursors are appropriate and when they are not.

Section Objectives
„ Explain why cursors are used in an application.
„ Explain the difference between row-based and set-based operations.
„ Explain the guidelines for using set-based operations.
„ Explain the guidelines for using row-based operations.
„ Explain how to replace cursors with set-oriented operations.


Discussion: Why Use Cursors?


Introduction
Programmers use cursors in database applications for various reasons, including the following:
„ Cursors are a natural programming technique.
„ Using alternatives to cursors is not always optimal.
„ It might be too difficult to solve a problem without using cursors.
In some database applications, the data access component automatically creates cursors. However,
most database applications do not create Transact-SQL cursors explicitly; instead, they implement
row-by-row operations by looping through prebuilt row sets. This technique can degrade application
performance. When row-by-row processing is genuinely required, using native Transact-SQL cursors
can perform better.
This section discusses the various reasons for using cursors and the problems that are considered
difficult or impossible to solve without using cursors.


Discussion Questions
1. Why do you use cursors?
2. What business problems can you solve by using cursors?
3. Do you use dynamic T-SQL execution inside cursors?
4. Do you combine cursors with temporary tables?
5. Do you fetch column values into variables? What do you do with those variables later in the
code?
6. Do you keep more than one cursor open in the same procedure?
7. What type of cursors do you typically use?
8. Do you use cursors inside triggers?
9. Do you use cursors inside user-defined functions?


Multimedia: Row-Based vs. Set-Based Operations


Introduction
Microsoft® SQL Server™ implements different types of cursors in unique ways. Each
implementation has specific implications in terms of storage required, concurrency, and
performance.
Understanding the behavior of each cursor helps you to select the appropriate type of cursor for
each situation. You will also be able to meet business requirements by using server resources in an
optimal manner.
This presentation shows how different types of cursors run in a SQL Server database and how these
differences affect system resources, concurrency, and database server performance.

Discussion Questions
1. Which cursor type uses more disk space?
2. Which cursor type uses more processing power?
3. Which cursor type produces fewer concurrency problems?
4. What is a result-set operation?
5. Why do programmers process data row by row instead of using result sets?
6. Why does SQL Server execute result-set operations more efficiently than row-by-row
operations?
7. Are there good and bad cursor strategies?


Guidelines for Using Result Set–Based Operations


Introduction
The design of a relational database management system (RDBMS) optimizes input/output (I/O) by
using algorithms that process result sets efficiently. Most RDBMSs offer some type of cursor-based
operation. However, consider the set-based alternatives before using cursor-based operations.
This topic discusses important guidelines that you should follow to benefit from the result set–
based algorithms of SQL Server.

Favor Queries that Affect Groups of Rows Rather Than One Row at a Time
The storage architecture of SQL Server is optimized to efficiently access data by sequentially
reading the data and the index pages. Based on this architecture, applications do not need to reread
a page to search for another piece of information that might be available on the page.
When an application requests operations that affect one row at a time, SQL Server might have to
read the same page multiple times.
Searching for a range of data can be very efficient when the query uses an index designed for this
purpose. The range of data can be read sequentially because the data is stored in a specific order in
the leaf level of the index. Sending a database request that is designed to take advantage of this
database feature is more efficient than designing client-based algorithms to access the same range
of data in a different way.
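The contrast above can be sketched in miniature. This Python/sqlite3 example (sqlite3 stands in for SQL Server, and the Product tables are invented) only illustrates the principle: both approaches produce the same result, but the set-based statement does the work as a single range operation instead of one seek per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for name in ("row_based", "set_based"):
    conn.execute(f"CREATE TABLE Product_{name} "
                 "(ProductID INTEGER PRIMARY KEY, ListPrice REAL)")
    conn.executemany(f"INSERT INTO Product_{name} VALUES (?, ?)",
                     [(i, float(i)) for i in range(1, 101)])

# Row-at-a-time: one statement (and one index seek) per row.
for product_id in range(1, 101):
    conn.execute("UPDATE Product_row_based SET ListPrice = ListPrice * 1.1 "
                 "WHERE ProductID = ?", (product_id,))

# Set-based: a single statement covers the whole range, so the engine can read
# the index leaf pages sequentially instead of re-seeking for every row.
conn.execute("UPDATE Product_set_based SET ListPrice = ListPrice * 1.1 "
             "WHERE ProductID BETWEEN 1 AND 100")

a = conn.execute("SELECT SUM(ListPrice) FROM Product_row_based").fetchone()[0]
b = conn.execute("SELECT SUM(ListPrice) FROM Product_set_based").fetchone()[0]
print(abs(a - b) < 1e-6)  # same result, 1 statement instead of 100
```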


Minimize the Use of Conditional Branches Inside Queries


When you use conditional execution and conditional branches inside queries, SQL Server might
serialize the execution of the query. Even if you send a perfect set-oriented query, SQL Server
might serialize the execution due to the complexity of the query. This is particularly common in the
following scenarios:
„ The query includes too much conditional logic, which creates conditional execution branches.
„ CASE and other system functions are inappropriately used.
„ The query includes extremely complex WHERE clauses with many OR operators.

In these cases, the query would not benefit from being a set-oriented operation, because SQL Server
will serialize the execution.

Do Not Make Inline Calls to Scalar User-Defined Functions in Large Result Sets
Inline calls to scalar user-defined functions might trigger serialization. The serialization effect
might occur when calling scalar user-defined functions in the SELECT clause, where the function
uses a different parameter value for each row. The execution of this type of query requires the
following actions for every row:
1. Extract the required parameter values from the columns of the current row.
2. Execute the user-defined function with the new set of parameters.
3. Return to the main execution plan to continue the process with the new row.

This repetitive call to an external execution plan could result in performance problems.
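The serialization effect is easy to reproduce with any engine that supports scalar user-defined functions. In this illustrative Python/sqlite3 sketch (the tax function and Product table are invented; SQL Server behaves analogously with T-SQL scalar functions), the engine calls out to the external function once per row, while the equivalent inline expression stays inside the main plan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Product (ProductID INTEGER PRIMARY KEY, ListPrice REAL)")
conn.executemany("INSERT INTO Product VALUES (?, ?)",
                 [(i, float(i)) for i in range(1, 101)])

calls = {"count": 0}

def tax(price):
    # Scalar user-defined function: the engine must leave the main
    # execution plan once for every row it processes.
    calls["count"] += 1
    return price * 0.21

conn.create_function("tax", 1, tax)
conn.execute("SELECT tax(ListPrice) FROM Product").fetchall()
print(calls["count"])  # one external call per row

# The equivalent inline expression is evaluated inside the main plan,
# with no per-row call-out.
conn.execute("SELECT ListPrice * 0.21 FROM Product").fetchall()
```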

Limit Query Cardinality as Early as Possible


To optimize I/O, you must limit the number of rows to be evaluated in a query. The advantage of
using set-oriented operations is clear when the total number of rows to be processed increases.
However, you should provide adequate query filters in the WHERE clause to limit the cardinality
of the query as early as possible in the execution path. Well-formed queries require lower I/O,
thereby increasing application performance.


Use Result Sets Instead of Cursor-Based Processes to Minimize I/O


Result-set execution optimizes I/O by avoiding rereading the same pages. The following example
demonstrates the advantages of using set-oriented operations instead of cursor-based operations.
The table has a covering index with three non-leaf index levels. The data range requested in the
query represents 100 rows spanning three leaf index pages. The total I/O for this operation will
be:
1 (IAM) + 3 (non-leaf nodes) + 3 (leaf pages) = 7 pages to process the query
However, using a cursor involves searching individually for each row in that range. This is more
expensive because the non-leaf levels of the index will need to be read for each request, and the
data leaf page will need to be read once per request. In this example, the total I/O to query 100 rows
will be:
1 (IAM) + 3 (non-leaf nodes) + 1 (leaf page) = 5 pages to process each query
100 queries * 5 pages per query = 500 pages to process the entire group of rows


Discussion: Guidelines for Using Row-Based Operations


Introduction
Cursors are a powerful feature that can solve complex business scenarios in a database application.
However, many applications do not use cursors properly.
Generally, developers follow various methods of designing applications, and most of them have
strong opinions about when and why to use cursors.
During this discussion, you can share your experiences with effectively using cursors. You can also
share potential alternatives that are relevant to the discussion.

Discussion Questions
1. Is your application constrained by development time or execution time?
2. Do you need to build statements to execute dynamically depending on row values?
3. Do you need to execute data definition language (DDL) commands dynamically?
4. Are calculations so complex and varied that they need to be evaluated independently on a row-
by-row basis?
5. How many different types of calculations do you need to apply?
6. Did you try to split these operations into a group of independent, set-oriented operations?


Demonstration: Replacing Cursors with Set-Oriented Operations


Introduction
Earlier topics in this session discussed the appropriateness of using cursors and result-set
operations. As database developers and administrators, you should understand and be able to
identify the situations in which you should replace cursors with a set operation.

Demonstration Overview
In this demonstration, your instructor will explain two typical uses of cursors and demonstrate how
to convert them into solutions that do not use cursors and that improve performance.

These are only two examples of how to identify situations in which cursors were used to solve
problems that are better addressed with a set-based approach.

Task 1: Joining Tables by Using Client-Side Cursors


Task Overview
The following task presents a typical client-side implementation to merge the results coming from
two different queries.
To join tables by using client-side cursors, perform the following steps.
1. Open Microsoft Visual Studio® 2005 Beta 2.
2. Browse to D:\Democode\Section01, and open the 2783M3L1Demonstrations.sln solution.
3. In the ClientSideJoin project, open the Form1.cs form in design mode.
4. Double-click Start.
5. Scroll down to the RetrieveData method.


6. Using the mouse, select the call to the method GetAllPOsWithClientJoin in line 78, and
then press F12 to browse to the definition of the method.
7. Notice the call to the GetPOHeaders method in line 69.
8. Notice the foreach loop in line 75.
9. Notice the call to the GetPODetailByID method in line 78.
10. Scroll up to line 13, to the definition of the DataLayer class and the
header_sql_command and details_sql_command constant variables.
11. Right-click the ClientSideJoin project, and then click Set as StartUp Project.
12. To run the application, press F5.
13. The application opens the form Client Side Join.
14. Verify that the Client Side option is selected.
15. To run the sample, click Start.
Notice the value presented for Elapsed Time.

Task 2: Joining Tables by Using a JOIN Query in the Database Server


Task Overview
The following task presents a server-side implementation to merge the results coming from two
different queries by using a JOIN query.
To join tables using a JOIN query in the database server, perform the following steps.

1. Switch to Visual Studio 2005 Beta 2.


2. Browse to the 2783M3L1Demonstrations.sln solution.
3. In the ResetDatabase project, in the Create Scripts folder, open the
GetPOHeaderDetails.sql file.
4. Notice the JOIN query.
5. In the ResetDatabase project, in the Create Scripts folder, right-click
GetPOHeaderDetails.sql and click Run On.
6. If prompted to select a database reference to connect to the AdventureWorks database on
the MIA-SQL\SQLINST1 server, choose from the list of servers or create a new reference.
7. Right-click the ClientSideJoin project, and click Set as StartUp Project.
8. To run the application, press F5.
The application opens the form Client Side Join.
9. Verify that the Server Side option is selected.
10. To run the sample, click Start.
11. Notice the value presented for Elapsed Time.
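The two tasks can be compared in miniature. In this Python/sqlite3 sketch (a stand-in for the demo's C# code; the purchase-order tables and data are invented), the client-side join issues one query per header row, while the JOIN query returns the same rows in a single request:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE POHeader (PurchaseOrderID INTEGER PRIMARY KEY, Vendor TEXT);
CREATE TABLE PODetail (PurchaseOrderID INTEGER, ProductID INTEGER, Qty INTEGER);
INSERT INTO POHeader VALUES (1, 'Contoso'), (2, 'Fabrikam');
INSERT INTO PODetail VALUES (1, 10, 5), (1, 11, 2), (2, 10, 7);
""")

# Client-side join: one query for the headers, then one query per header
# (the pattern in GetAllPOsWithClientJoin), so N + 1 round trips.
client_rows = []
for po_id, vendor in conn.execute("SELECT * FROM POHeader"):
    for _, product_id, qty in conn.execute(
            "SELECT * FROM PODetail WHERE PurchaseOrderID = ?", (po_id,)):
        client_rows.append((po_id, vendor, product_id, qty))

# Server-side join: a single JOIN query returns the same rows in one trip.
server_rows = conn.execute("""
    SELECT h.PurchaseOrderID, h.Vendor, d.ProductID, d.Qty
    FROM POHeader h JOIN PODetail d ON h.PurchaseOrderID = d.PurchaseOrderID
    ORDER BY h.PurchaseOrderID, d.ProductID
""").fetchall()

print(sorted(client_rows) == server_rows)
```

The elapsed-time difference measured in the demo comes largely from those extra round trips; the result sets themselves are identical.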


Task 3: Apply a Complex Row-By-Row Process in a Client Application


To apply a complex row-by-row process in a client application, perform the following steps.
1. Switch to Visual Studio 2005 Beta 2.
2. Browse to the 2783M3L1Demonstrations.sln solution.
3. In the ExecuteComplexOperation project, open Form1.cs form in design mode.
4. Double-click Start.
5. Scroll down to the RetrieveData method in line 70.
6. Using the mouse, select the call to the method ExecuteComplexClientSide in line 78, and then
press F12 to browse to the definition of the method.
7. Notice the call to the GetProductsTable method in line 31.
8. In the ResetDatabase project, in the Create Scripts folder, open the ProductsView.sql file.
9. In the Create Scripts folder, right-click the ProductsView.sql file and on the shortcut menu, click
Run On.
10. If prompted to select a database reference to connect to the AdventureWorks database on the
MIA-SQL\SQLINST1 server, choose from the list of servers, or create a new reference.
11. Return to the DataLayer.cs file in the ExecuteComplexOperation project.
12. Scroll down to line 42, and show the CalculateProductChange method.
13. Scroll down to line 66, and show the ApplyCategoryAdjustment method.
14. Scroll down to line 125, and show the ApplyCostAdjustment method.
15. Right-click the ExecuteComplexOperation project, and on the shortcut menu, click Set as
StartUp Project.
16. To run the application, press F5.
17. The application opens the Execute Complex Operation form.
18. Verify that the Client Side option is selected.
19. To run the sample, click Start.

Task 4: Apply a Complex Process as a Sequence of Set-Oriented Operations in the Database Server
Task Overview
The following task presents a server-side implementation of a complex query. The optimal
solution, which uses two UPDATE statements and custom helper functions, is presented first. A
less optimal solution that uses a server-side cursor is presented later.
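
The set-based shape of the optimal solution can be sketched as follows. This is an illustration only, not the demo script: the helper function dbo.fn_CategoryFactor and the adjustment factors are hypothetical stand-ins for the helpers the demo defines in its own scripts.

```sql
-- One UPDATE per adjustment replaces the row-by-row loop entirely.
-- dbo.fn_CategoryFactor is a hypothetical helper function.
UPDATE p
SET ListPrice = p.ListPrice * dbo.fn_CategoryFactor(p.ProductSubcategoryID)
FROM Production.Product AS p;

-- Second pass: adjust cost, again as a single set-oriented statement.
UPDATE p
SET StandardCost = p.StandardCost * 1.05
FROM Production.Product AS p
WHERE p.StandardCost > 0;
```

Because each statement touches all qualifying rows at once, the optimizer can choose a set-oriented plan instead of repeating a per-row fetch, test, and update cycle.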
To apply a complex process as a sequence of set-oriented operations in the database server, perform
the following steps.
1. Switch to Visual Studio 2005 Beta 2.
2. Browse to the 2783M3L1Demonstrations.sln solution.

3. In the ResetDatabase project, in the Create Scripts folder, open the NoCursorAdjustment.sql file.

4. Notice the two UPDATE statements.


5. In the Create Scripts folder, right-click the NoCursorAdjustment.sql file, and click Run On.
6. If prompted to select a database reference to connect to the AdventureWorks database on the
MIA-SQL\SQLINST1 server, choose from the list of servers, or create a new reference.
7. In the ResetDatabase project, in the Create Scripts folder, open the ApplyCostAdjustment.sql
file.
8. In the Create Scripts folder, right-click the ApplyCostAdjustment.sql file, and on the shortcut
menu click Run On.
9. If prompted to select a database reference to connect to the AdventureWorks database on the
MIA-SQL\SQLINST1 server, choose from the list of servers, or create a new reference.

10. In the ResetDatabase project, in the Create Scripts folder, open the
ApplyCategoryAdjustment.sql file.
11. In the Create Scripts folder, right-click the ApplyCategoryAdjustment.sql file, and on the
shortcut menu, click Run On.
12. If prompted to select a database reference to connect to the AdventureWorks database on the
MIA-SQL\SQLINST1 server, choose from the list of servers, or create a new reference.
13. Right-click the ExecuteComplexOperation project, and click Set as StartUp Project.
14. To run the application, press F5.
15. The application opens the Execute Complex Operation form.
16. Verify that the Server Side option is selected.
17. To run the sample, click Start.
18. Close the Execute Complex Operation application.
19. Return to Visual Studio 2005 Beta 2.
20. In the ResetDatabase project, open the CursorAdjustment.sql file.
21. Close Visual Studio 2005 Beta 2.

Discussion Questions
1. Referring to the two examples in this demonstration, explain why these cursors need to be
replaced.
2. Is there additional development overhead to replace these cursors?
3. Is there additional maintenance overhead?
4. Do you actually gain performance?
5. Do you reduce contention?


Section 2: Selecting Appropriate Server-Side Cursors


Section Overview
Server-side cursors should be used only when cursor operations are required to support application
functionality, such as when you want to process only certain rows of the entire result set or when
you want to retrieve only a part of the result set from the server.
Server-side cursors offer better performance because only the fetched data is sent over the network,
and the client application does not need to cache large amounts of data.
To select which type of server-side cursors to use, database developers must understand their
functionality, how SQL Server 2005 implements server-side cursors, and how conditions in the
database server affect cursor functioning. This section explains the factors to consider before
selecting server-side cursors.

Section Objectives
• Explain the life cycle of a T-SQL cursor and the server-side resources that the cursor uses.
• Explain the performance implications of using row-by-row operations that do not use cursors.
• Explain how the transaction isolation level affects the behavior of various cursor types.
• Describe scenarios in which positional updates are appropriate.


Demonstration: The Life Cycle of a T-SQL Cursor


Introduction
The life cycle of a cursor consists of the following stages:
1. Declaring the cursor and associating it with the result set of a T-SQL statement
2. Executing the T-SQL statement to populate the cursor
3. Retrieving rows in the cursor
4. Closing the cursor

Each of these operations affects resource utilization on the database server. Resource utilization
depends on several factors, such as the cursor type, the number of rows read, cursor behavior, and
the locking scheme.

Demonstration Overview
There are some general steps followed when creating a cursor:
„ Declare the cursor, and associate it with the result set of a Transact-SQL statement.
„ Execute the Transact-SQL statement to populate the cursor.
„ Retrieve the rows in the cursor that interest you.
„ Close the cursor.
Each of these operations represents a cost in terms of resource usage on the database server. This
cost can be high or low, depending on several factors such as the cursor type, number of rows read,
cursor behavior, and locking scheme.
In this demonstration, your instructor will demonstrate how to measure the cost of running different
types of cursors.
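
The four stages above map directly onto T-SQL. A minimal sketch against the AdventureWorks Production.Product table (an illustration, not the demo script itself):

```sql
-- Stage 1: declare the cursor and associate it with a result set
DECLARE SampleCrsr CURSOR FORWARD_ONLY READ_ONLY FOR
    SELECT ProductID, Name FROM Production.Product;

-- Stage 2: execute the SELECT statement to populate the cursor
OPEN SampleCrsr;

-- Stage 3: retrieve the rows one at a time
DECLARE @ProductID int, @Name nvarchar(50);
FETCH NEXT FROM SampleCrsr INTO @ProductID, @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- row-by-row processing goes here
    FETCH NEXT FROM SampleCrsr INTO @ProductID, @Name;
END;

-- Stage 4: close the cursor and release its resources
CLOSE SampleCrsr;
DEALLOCATE SampleCrsr;
```

Note that CLOSE releases the current result set and locks but keeps the declaration; DEALLOCATE is what finally releases the cursor's data structures on the server.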


Additional Information
Cursors force the database engine to repeatedly fetch rows, negotiate blocking, manage locks,
and transmit results.
Cursors can also impact the tempdb database, because static and keyset cursors store their
results in tempdb worktables. The impact varies according to the type of cursor used.
The forward-only, read-only cursor is the fastest and least resource-intensive way to get data
from the server. This type of cursor was earlier known as a “firehose” cursor or a local
fast-forward cursor.
A cursor whose DECLARE statement selects data from a temporary table usually causes a
recompile. Therefore, avoid using cursors over temporary tables.
To learn more about cursor types, cursor locking, and their impact on the tempdb database,
see the “Cursors (Database Engine)” topic in SQL Server 2005 Books Online (BOL).

Task 1: Setting Up the Monitoring Environment


Task Overview
This task sets up the environment to start monitoring and profiling the execution of T-SQL scripts
to exercise different types of cursor declarations.
To set up the monitoring environment, perform the following steps.
1. Open Windows® Performance Monitor.
2. To delete the default running counters, press the DEL key, or click the X sign on the toolbar
three times.
3. To add new counters, on the toolbar, click the plus (+) sign.
4. To add the following counters, select the appropriate values from the lists below, and click Add:

Performance Object                       Counter                Instance

Processor                                % Processor Time       _Total
MSSQL$SQLINST1:Cursor Manager By Type    Active cursors         TSQL Local Cursor
MSSQL$SQLINST1:Cursor Manager By Type    Cursor memory usage    TSQL Local Cursor
MSSQL$SQLINST1:Cursor Manager By Type    Cursor requests/sec    TSQL Local Cursor

5. Click Close.
6. Open SQL Server 2005 Profiler.
7. To create a new trace, on the File menu, click New Trace.
8. If asked to connect to SQL Server, use Windows Authentication to connect to the MIA-
SQL\SQLINST1 server.
9. SQL Server Profiler displays the Trace Properties window.
10. In the Use the template list, select Blank.

11. To move to the second tab, click the Events Selection tab.

12. From the Events list, select the following events:

Event Category Events

Cursors CursorClose
CursorExecute
CursorOpen
CursorPrepare
TSQL SQL:BatchCompleted

13. Click Run.

Task 2: Measuring Cursor Performance


Task Overview
In this task, execute a ready-made T-SQL script with different types of cursors declared. Measure
their impact on the system, and compare the results.
To measure cursor performance, perform the following steps.
1. Open SQL Server 2005 Management Studio.
2. Browse to and open the MOC2783M3L2Demonstrations.ssmsln solution located at
D:\Democode\Section02.
3. Open Solution Explorer, and in the Queries folder, open the CursorLifeCycle.sql file.
4. Select the text from lines 1–20. (The text is enclosed with comments marked “Step #1.”)
5. To execute the selected text, press F5.
6. Select the text from lines 24–27. (The text is enclosed with comments marked “Step #2.”)
7. To execute the selected text, press F5.
8. Keep pressing F5 until the @@FETCH_STATUS value is no longer 0. (A nonzero value indicates that there are no more rows to fetch.)
9. Select the text from lines 31–34. (The text is enclosed with comments marked “Step #3.”)
10. To execute the selected text, press F5.
11. Select the text from lines 38–41. (The text is enclosed with comments marked “Step #4.”)
12. To execute the selected text, press F5.
13. Switch to Windows Performance Monitor, and analyze the results.


Observe how the values changed through the timeline.


14. Switch to SQL Server Profiler, and analyze the results.
15. Observe the combination of events between the opening and the closing of the cursor. Pay
special attention to the Reads column in each step.
16. If time permits, return to SQL Server Management Studio, and change the cursor declaration in
line 10 of the script to either of the following. (The changed keyword is KEYSET or DYNAMIC.)
    DECLARE SampleCrsr CURSOR FORWARD_ONLY KEYSET READ_ONLY
    DECLARE SampleCrsr CURSOR FORWARD_ONLY DYNAMIC READ_ONLY
17. Repeat steps 4–16 in this demonstration task.


Demonstration: Performing Row-By-Row Operations without Using Cursors


Introduction
T-SQL is a set-based language, so it does not work well with solutions that need to perform row-
by-row operations within a result set.
Generally, the performance of server-side T-SQL cursors is lower than the performance of a set-
based solution. T-SQL cursors do not fully utilize the power of a relational database engine, which
is optimized for non-sequential, set-based queries.

Demonstration Overview
Every cursor-based query can be coded in an equivalent set-based solution. Your instructor will
demonstrate different techniques for avoiding T-SQL server side cursors and will explain the
performance implications of using row-by-row operations that do not use cursors.
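
One common cursor-free technique — not necessarily the one the demo script uses — is to walk the primary key with TOP (1) seeks, so that no cursor resources are held on the server:

```sql
-- Walk the table one row at a time by seeking the next primary key value.
DECLARE @ProductID int, @Name nvarchar(50);
SET @ProductID = 0;

WHILE 1 = 1
BEGIN
    SELECT TOP (1) @ProductID = ProductID, @Name = Name
    FROM Production.Product
    WHERE ProductID > @ProductID
    ORDER BY ProductID;

    IF @@ROWCOUNT = 0 BREAK;   -- no more rows

    -- per-row processing goes here (for example, PRINT @Name)
END;
```

In practice a single set-based statement is usually better still; iteration of this shape is for the cases where per-row procedural work is genuinely unavoidable.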

Task 1: Implementing Row-By-Row Navigation without Cursors


Task Overview
In this task, your instructor will demonstrate three different techniques for implementing row-by-
row navigation without using T-SQL server-side cursors.
To implement row-by-row navigation without cursors, perform the following steps:
1. Switch to SQL Server 2005 Management Studio.
2. Browse to the solution MOC2783M3L2Demonstrations.
3. In the Queries folder, open the RowByRowWithoutCursor.sql file.

4. Using the mouse, select the text from lines 1–30.

5. To execute the T-SQL script code, press F5.


6. Using the mouse, select the text from lines 33–48.
7. To execute the T-SQL script code, press F5.
8. Using the mouse, select the text from lines 53–74.
9. To execute the T-SQL script code, press F5.
10. Using the mouse, select the text from lines 76–86.
11. To execute the T-SQL script code, press F5.


Demonstration: How the Transaction Isolation Level Affects Cursor Behavior


Introduction
Transaction isolation level affects the behavior of cursors and of regular SELECT statements in a
similar manner. Transaction isolation level is used to control how transaction activities are isolated
from each other. By using appropriate locking mechanisms, you can manage concurrent access to
information that is being used by several simultaneous transactions.
The difference between the behavior of transactions with SELECT statements and the behavior of
the transactions when using cursors is the moment in time when the locks are acquired and released.
This timing depends on the type of lock, the locking schema, and the isolation level specified.

Demonstration Overview
Your instructor will demonstrate how each T-SQL server-side cursor type is affected by the
transaction isolation level, and how both the cursor type and the isolation level affect the locking
mechanism in SQL Server 2005.
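
A minimal T-SQL sketch of the effect, assuming the AdventureWorks database: open a cursor inside an explicit transaction at a chosen isolation level, then query the sys.dm_tran_locks view to see what the session is holding. Under SERIALIZABLE, locks taken on behalf of the cursor are held until the transaction ends; under READ COMMITTED they are released much sooner.

```sql
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

DECLARE LockCrsr CURSOR KEYSET FOR
    SELECT ProductID, Name FROM Production.Product;
OPEN LockCrsr;
FETCH NEXT FROM LockCrsr;

-- Inspect the locks currently held by this session
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

CLOSE LockCrsr;
DEALLOCATE LockCrsr;
COMMIT TRANSACTION;
```

Rerunning the sketch with a different isolation level or cursor type (STATIC, DYNAMIC) changes the mix of lock types and how long they appear in the view.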


Task 1: Set Up the Monitoring Environment


Task Overview
This task sets up the environment to start monitoring and profiling the execution of T-SQL scripts
to exercise different types of cursor declarations.
To set up the monitoring environment, perform the following steps.
1. Open Windows Performance Monitor.
2. To delete the default running counters, press the DEL key, or, on the toolbar, click the X sign as
many times as necessary.
3. To add new counters, on the toolbar, click the plus (+) sign.
4. Add the following counters by selecting the appropriate values from the lists and by clicking
Add:

Performance Object             Counter              Instance

MSSQL$SQLINST1:Transactions    Transactions
MSSQL$SQLINST1:Locks           Lock Requests/sec    Database
MSSQL$SQLINST1:Locks           Lock Requests/sec    Key
MSSQL$SQLINST1:Locks           Lock Requests/sec    Object
MSSQL$SQLINST1:Locks           Lock Requests/sec    Page

5. Click Close.

Task 2: Measuring Acquired Locks by Using Windows Performance Monitor
Task Overview
This task executes a sample application that opens a configurable cursor: you select different
options for cursor type, isolation level, and locking level, and for each combination you measure
the acquired locks by using Windows Performance Monitor.
To measure acquired locks by using Windows Performance Monitor, perform the following steps.
1. Open Visual Studio 2005 Beta 2.
2. Browse to and open the project file CursorsAndTransactions.csproj, located at
D:\Democode\Section02\CursorsAndTransactions.
3. Open the DataLayer.cs file.
4. Scroll down to the definition of the ExecuteCursor method in line 9.
5. To run the sample application, press F5.
6. Arrange the windows so that Windows Performance Monitor can be maximized, and the Cursor
Locking application window can be floating over it.
7. Select the following combination of options in the lists:
   – adOpenForwardOnly
   – adXactReadUncommitted
   – adLockReadOnly
8. Click Execute.
9. Select the following combination of options in the lists:
   – adOpenDynamic
   – adXactSerializable
   – adLockPessimistic
10. Click Execute.
11. If time permits, try different combinations, and explain the effect of each combination.
12. Close Visual Studio 2005 Beta 2.
13. Close Windows Performance Monitor.


Discussion: When Are Positional Updates Appropriate?


Introduction
When implementing server-side cursors, database developers might want to update the result set
that is being navigated on a row-by-row basis.
Positional updates function much like the row-by-row updates of a procedural programming
language. T-SQL, however, is a set-oriented language, and trying to enforce a row-by-row update
mechanism might affect the overall performance of the database server.
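
A positioned update is expressed with the WHERE CURRENT OF clause, which changes the row most recently fetched through the cursor instead of locating rows by a predicate. A minimal sketch against AdventureWorks (illustrative only):

```sql
DECLARE PriceCrsr CURSOR FOR
    SELECT ListPrice FROM Production.Product
    FOR UPDATE OF ListPrice;

OPEN PriceCrsr;
FETCH NEXT FROM PriceCrsr;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Update the row the cursor is currently positioned on
    UPDATE Production.Product
    SET ListPrice = ListPrice * 1.10
    WHERE CURRENT OF PriceCrsr;

    FETCH NEXT FROM PriceCrsr;
END;

CLOSE PriceCrsr;
DEALLOCATE PriceCrsr;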

Discussion Questions
1. When should you update records row-by-row?
2. Are you using variables that are being fetched as values for the columns to be updated?
3. Are you using these variables as arguments in the WHERE clause of the UPDATE statements?
4. Are there any performance implications of doing this?


Section 3: Selecting Appropriate Client-Side Cursors


Section Overview
Some data access providers enable the client application to cache a copy of a result set into the
memory on the client machine and fetch each row one-by-one to the client application. This is
called a client-side cursor.
The main benefit of client-side cursors is that the connection to the database can be closed, and
therefore, no locks are held while browsing through the result set. This improves client application
performance.
Server-side and client-side cursors provide different approaches to browsing data on a row-by-row
basis. Based on the database requirements, application developers must decide whether to use
server-side or client-side cursors.
This section provides information about the various client data access libraries that support client-
side cursors. It explains how to use cursors from SQL Native Client (SQLNCLI) and how to
perform row-by-row operations by using DataSet and DataReader objects. You will also learn
about the SQL Server activities produced by these operations.

Section Objectives
• Describe the client data access libraries that support client-side cursors.
• Explain how to use cursors from SQLNCLI.
• Explain how to perform row-by-row operations by using DataSets and DataReaders, and explain
the SQL Server activity produced by these operations.
• Explain the guidelines for selecting client-side cursors.


Client Data Access Libraries That Support Client-Side Cursors


Introduction
Implementation of client-side cursors depends on which data access provider you choose.
Application developers usually choose data access providers based on factors such as database
server support, transaction support, security features, performance, and technical support.
Developers should also consider client-side features such as connection pooling and client-side
cursors.
Each data access provider supports a different feature set. Some data access providers, such as
OLE DB and Open Database Connectivity (ODBC), were designed to work with multiple data
sources. Others, such as SqlClient and the SQL Native Client (SQLNCLI) data access provider,
were designed to work with a single data source and to provide native support for that data source.
In the OLE DB, ODBC, and ActiveX Data Objects (ADO) specifications, a cursor is implicitly
opened over any result set that is returned by a T-SQL statement. However, application developers
can modify this behavior by changing the properties of the object that executes the T-SQL
statement.


Features of Client Data Access Libraries


The following table summarizes the features of the client data access libraries that support
client-side cursors.

Data Access Library   Features

OLE DB
  – The term “rowset” refers to a combination of a result set and its associated cursor behaviors.
  – Does not natively support client-side cursors, but can be used with the Microsoft Cursor
    Service for OLE DB to provide client-side cursors.

ODBC
  – The terms “result set” and “cursor” are used interchangeably because a cursor is automatically
    mapped to a result set.
  – Implements client cursors through the ODBC Cursor Library.
  – Enables multiple active statements on a connection if used in conjunction with SQLNCLI.
  – Supports read-only and updatable cursor types.
  – Supports forward-only and scrollable cursor navigation.
  – Allows you to configure and specify the cursor type, concurrency, and rowset size.

ADO
  – The term “recordset” refers to a combination of a result set and its associated cursor behaviors.
  – Supports only static read-only cursor types.
  – Supports forward-only and scrollable cursor navigation.
  – Supports asynchronous retrieval of results from the database server.

ADO.NET – SqlClient
  – Separates the result set (DataSet) from the cursor (SqlDataReader and DataTableReader),
    which are implemented as distinct classes.
  – Supports only read-only, forward-only cursors.
  – Allows multiple active statements on a connection.
  – Supports asynchronous retrieval of results from the database server.

Additional Information For more information about data access providers, refer to Session 1,
“Choosing Data Access Technologies and an Object Model.”


Demonstration: Using Cursors from the SQL Native Client Data Access Provider


Introduction
The SQLNCLI data access provider is the most recent SQL Server 2005 data provider. It supports
the previous OLE DB and ODBC interfaces, as well as the new features of SQL Server, such as
XML, new data types, Multiple Active Result Sets (MARS), and user-defined types.

Demonstration Overview
In this demonstration, your instructor will show how to use cursors from SQLNCLI.

Task 1: Executing Client-Side Cursors Using OLE DB and ODBC with the SQL Native Client Data Access Provider

In this task, your instructor will show how to navigate a client-side cursor opened using both OLE
DB and ODBC providers with the SQL Native Client data access provider. Monitor the T-SQL
code created by using SQL Server Profiler.
To execute client-side cursors by using OLE DB and ODBC with the SQL Native Client data
access provider, perform the following steps.
1. Open Visual Studio 2005 Beta 2.
2. Browse to D:\Democode\Section03, and open the MOC2783M3L3.sln solution.
3. In the SNACursors project, open the DataLayer.cs file.
4. Scroll to the definition of the ExecuteCursor method in line 21.

5. Open SQL Server Profiler.

6. To create a new trace, on the File menu, click New Trace.


7. If you are prompted to connect to SQL Server, connect to the MIA-SQL\SQLINST1 server.
8. SQL Server Profiler displays the Trace Properties window. To accept the default settings and
start monitoring, click Run.
9. Right-click the SNACursors project, and click Set as StartUp Project.
10. To run the sample application, press F5.
The application will open a window that contains a box and a GO button.
11. Select the OLE-DB (SQLNCLI) option, and then click GO.
The output window should display the rows while being navigated.
12. Select the ODBC (SQLNCLI) option, and then click GO.
The output window should show the rows being navigated.
13. Close the sample application.
14. Return to SQL Server Profiler.
15. Navigate through all the recorded rows to see the T-SQL code that was executed.
16. Close SQL Server Profiler.


Demonstration: SQL Server Activity Produced by DataReaders, DataSets, and DataTableReader Objects


Introduction
The DataSet and DataTableReader classes are part of the generic Microsoft ADO.NET classes, and
the SqlDataReader class is part of the ADO.NET SqlClient data access provider.
SqlClient supports only client-side cursors. You use a SqlCommand object to execute a SQL query
and return the result set to the calling object in the client application. The client application can
manipulate this buffered result set on a row-by-row basis by using the SqlDataReader class, or as a
set by using the DataSet class. A DataSet can also be navigated with a DataTableReader object.
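
The difference is visible in the Profiler trace. A SqlDataReader or DataSet fill arrives at the server as an ordinary batch, whereas an API server cursor (as in the SQLNCLI demonstration) appears as a sequence of cursor stored procedure calls. The sketch below is illustrative; the handle values and option parameters in a real trace will differ:

```sql
-- Client-side model (SqlDataReader, DataSet): one batch, whole result set
SELECT ProductID, Name, ListPrice FROM Production.Product;

-- API server cursor model: the provider calls the sp_cursor* procedures
-- (option and handle values here are illustrative, not literal trace output)
DECLARE @cursor int;
EXEC sp_cursoropen @cursor OUTPUT,
     N'SELECT ProductID, Name, ListPrice FROM Production.Product',
     @scrollopt = 1, @ccopt = 1;          -- keyset, read-only
EXEC sp_cursorfetch @cursor, 2, 0, 1;     -- fetchtype 2 = NEXT, one row
EXEC sp_cursorclose @cursor;
```

Seeing many small sp_cursorfetch round trips in a trace is usually the first sign that an application is paying server-cursor overhead for data it could have streamed in a single batch.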

Demonstration Overview
In this demonstration, your instructor will demonstrate how to create a client-side cursor in
ADO.NET and the T-SQL statements that this generates.

Task 1: Implementing a Client-Side Cursor with DataReaders


Task Overview
In this task, your instructor will show you how to navigate a result set on a row-by-row basis by
using a SqlDataReader. Monitor the T-SQL code created by using SQL Server Profiler.
To implement a client-side cursor with DataReaders, perform the following steps:
1. Open SQL Server Profiler.
2. To create a new trace, on the File menu, click New Trace.

3. If you are prompted to connect to SQL Server, connect to the MIA-SQL\SQLINST1 server.

4. SQL Server Profiler displays the Trace Properties window. To accept the default settings and
start monitoring, click Run.
5. Switch to Visual Studio 2005 Beta 2.
6. Browse to the MOC2783M3L3.sln solution.
7. In the ADO.NETCursors project, open the DataLayer.cs file.
8. Scroll down to the definition of the ExecuteDataReader method in line 71.
9. To set a breakpoint on the method definition, press F9.
10. Right-click the ADO.NETCursors project, and click Set as StartUp Project.
11. To run the sample application, press F5.
12. Select SqlDataReader, and then click GO.
13. The control will return to Visual Studio Beta 2. To navigate through the code execution in
debug mode, press F10.
14. After navigating through all the source code on the debug path, switch to SQL Server Profiler.
15. Navigate through all the recorded rows, and notice the T-SQL code that was executed.
16. In SQL Server Profiler, close the running trace. On the File menu, click Close, and in the box,
click Yes.

Task 2: Implementing a Client-Side Cursor with DataSets


Task Overview
In this task, your instructor will show you how to navigate a result set on a row-by-row basis by
using a DataSet. Monitor the T-SQL code created by using SQL Server Profiler.
To implement a client-side cursor with DataSets, perform the following steps.
1. Switch to the SQL Server 2005 Profiler.
2. To create a new trace, on the File menu, click New Trace.
3. If prompted to connect to SQL Server, connect to the MIA-SQL\SQLINST1 server.
4. SQL Server Profiler displays the Trace Properties window. To accept the default settings and
start monitoring, click Run.
5. Switch to Visual Studio 2005 Beta 2.
6. Browse to the MOC2783M3L3.sln solution.
7. In the ADO.NETCursors project, open the DataLayer.cs file.
8. Scroll down to the definition of the ExecuteDataSet method in line 39.
9. To set a breakpoint on the method definition, press F9.
10. To run the sample application, press F5.
11. Click DataSet, and then click GO.
12. The control will return to Visual Studio 2005 Beta 2. To navigate through the code execution in
debug mode, press F10.

13. After navigating through all the source code on the debug path, switch to SQL Server Profiler.
14. Browse through all the recorded rows, and notice the T-SQL code that was executed.
15. Close the running trace in SQL Server Profiler; on the File menu, click Close; and click Yes in
the box that appears.

Task 3: Implementing a Client-Side Cursor with DataTableReaders


Task Overview
In this task, your instructor will show you how to navigate a result set on a row-by-row basis by
using a DataTableReader. Monitor the T-SQL code created by using SQL Server Profiler.
To implement a client-side cursor with DataTableReaders, perform the following steps.
1. Switch to SQL Server 2005 Profiler.
2. To create a new trace, on the File menu, click New Trace.
3. If you are prompted to connect to SQL Server, connect to the MIA-SQL\SQLINST1 server.
4. SQL Server Profiler displays the Trace Properties window. To accept the default settings and
start monitoring, click Run.
5. Switch to Visual Studio 2005 Beta 2.
6. Browse to and open the MOC2783M3L3.sln solution.
7. In the ADO.NETCursors project, open the DataLayer.cs file.
8. Scroll down to the definition of the ExecuteDataTableReader method in line 57.
9. To set a breakpoint on the method definition, press F9.
10. To run the sample application, press F5.
11. Select DataTableReader, and then click GO.
12. The control will return to Visual Studio Beta 2. To navigate through the code execution in
debug mode, press F10.
13. After navigating through all the source code on the debug path, switch to SQL Server Profiler.
14. Navigate through all the recorded rows, and notice the T-SQL code that was executed.
15. Close SQL Server Profiler.
16. Close Visual Studio 2005 Beta 2.


Considerations for Selecting Client-Side Cursors


Introduction
Both client-side and server-side cursors are designed to work with data on a row-by-row basis, but
they provide different implementation architectures and are not designed to work together.
You should not use client-side and server-side cursors together because the potential benefits of one
type of cursor could be drastically reduced by using the other cursor type in the same process. You
must choose between these two implementations. Both provide advantages and disadvantages in
specific scenarios.
It is important to note that the data access provider that you choose will determine the feature set
available when implementing client-side cursors.

Considerations for Using Client-Side Cursors


• Network latency (performance)
  – Client cursors use more network resources because they transport the entire result set to the
    client computer.
• Additional cursor types
  – Whether support exists for additional cursor types depends on the data access provider. Most
    data access providers support only a limited number of cursor types, such as static or
    forward-only cursors.
  – Client cursors support only limited functionality.
• Positioned updates
  – Some data access providers support navigation of the client-side cursor in a disconnected
    manner. Updating the values on the result set while it is disconnected affects only the local
    copy of the data, not the original database.
  – A new connection needs to be established and a new command needs to be issued to update the
    copied data to the database.
  – Changes in the database will not be visible to the client-side cursor until the changes are
    synchronized with the database.
  – Synchronizing the changes with the database can create concurrency violations, which must be
    handled through one of the following recommended practices:
    – Including only the primary key columns in the WHERE clause
    – Including all columns in the WHERE clause
    – Including the unique key columns and the timestamp columns in the WHERE clause
    – Including the unique key columns and the modified columns in the WHERE clause
• Memory usage
  – The client computer needs to cache large amounts of data and maintain information about the
    cursor position. Therefore, the client machine should have enough memory to handle the size
    of the entire result set.
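
The recommended WHERE-clause practices above translate into UPDATE statements of roughly the following shape. The column and variable names are illustrative (RowVersionCol stands in for a timestamp/rowversion column), and the variables are assumed to hold the values originally read:

```sql
-- Optimistic concurrency check: unique key plus a rowversion column.
UPDATE Production.Product
SET ListPrice = @NewListPrice
WHERE ProductID = @ProductID
  AND RowVersionCol = @OriginalRowVersion;   -- hypothetical column name

-- Zero rows affected means another session changed the row first.
IF @@ROWCOUNT = 0
    RAISERROR ('Concurrency violation: the row was modified by another user.', 16, 1);
```

Checking @@ROWCOUNT immediately after the UPDATE is what turns a silent lost update into a detectable concurrency violation the client can handle.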

Note For more information on handling concurrency issues, see the “Managing Concurrency”
topic in the .NET Data Access Architecture Guide at:
http://msdn.microsoft.com/practices/guidetype/Guides/default.aspx?pull=/library/en-us/dnbda/html/daag.asp

Typical Scenarios Where Client-Side Cursors Might be Appropriate


„ Client-side cursors should be used only to alleviate the restriction that server-side cursors do not
support all T-SQL statements or batches.
„ Client-side cursors provide better performance if an efficient filter is applied to restrict the number
of rows sent to the client machine. If you cannot filter the rows easily, you can use paging
techniques to retrieve the rows in small groups, which provides better performance than
retrieving the entire result set does.
„ Static read-only, forward-only client-side cursors provide the best performance.
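The paging technique mentioned above maps naturally onto the ROW_NUMBER function introduced in SQL Server 2005. The following is a sketch, assuming the AdventureWorks Person.Contact table; the page boundaries (rows 21–40, a page size of 20) are illustrative choices:

```sql
-- Retrieve only the second page of 20 rows instead of the whole result set.
WITH NumberedContacts AS
(
    SELECT ContactID, FirstName, LastName,
           ROW_NUMBER() OVER (ORDER BY LastName, FirstName) AS RowNum
    FROM Person.Contact
)
SELECT ContactID, FirstName, LastName
FROM NumberedContacts
WHERE RowNum BETWEEN 21 AND 40
ORDER BY RowNum;
```

Because only one small page crosses the network per request, the client-side cursor caches far less data than it would for the full result set.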




Next Steps


Introduction
The information in this section supplements the content provided in Session 3.
„ Improving SQL Server performance
• http://msdn.microsoft.com/library/en-us/dnpag/html/scalenet.asp
„ “Sequential to Set-Based,” SQL Server Magazine, November 2001
• http://www.windowsitpro.com/Article/ArticleID/22431/22431.html
„ Client-Side Cursors Versus Server-Side Cursors
• http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vsentpro/html/veconclientsidecursorsversusserversidecursors.asp




Discussion: Session Summary


Discussion Questions
1. What was most valuable to you in this session?
2. Based on this session, have you changed your mind about anything?
3. Are you planning to do anything differently on the job based on what you learned in this session?
If so, what?
4. Would you still use cursors in the same ways that you did at the beginning of the session? If not,
how will you use them differently?
5. Do you still think that cursors are an appropriate solution for some scenarios?



Session 4: Designing Query Strategies Using
Multiple Active Result Sets

Contents
Session Overview
Section 1: Introduction to MARS
Section 2: Designing Query Strategies for Multiple Reads
Section 3: Designing Query Strategies for Mixing Reads and Writes in the Same Connection
Section 4: Concurrency Considerations When Using MARS
Next Steps
Discussion: Session Summary




The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations and warranties, either expressed, implied, or statutory,
regarding these manufacturers or the use of the products with any Microsoft technologies. The
inclusion of a manufacturer or product does not imply endorsement of Microsoft of the
manufacturer or product. Links are provided to third party sites. Such sites are not under the
control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link
contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for
webcasting or any other form of transmission received from any linked site. Microsoft is providing
these links to you only as a convenience, and the inclusion of any link does not imply endorsement
of Microsoft of the site or the products contained therein.


© 2006 Microsoft Corporation. All rights reserved.

Microsoft, Windows, ActiveX and Visual Studio are either registered trademarks or trademarks of
Microsoft Corporation in the United States and/or other countries.

All other trademarks are property of their respective owners.




Session Overview


Versions of Microsoft® SQL Server™ earlier than SQL Server 2005 have a restriction on issuing
multiple requests (for read or write operations) over the same connection. This is a limitation of the
communication protocol used in those versions.
Several data access providers try to simulate this behavior on the client, but doing so affects
application performance.
In SQL Server 2005, Multiple Active Result Sets (MARS) allows applications to have multiple
result sets open and to interleave reading from them. MARS also makes it possible to execute
stored procedures or INSERT, UPDATE, or DELETE operations while the result sets are open.
This session focuses on when and how MARS can improve application response.

Session Objectives
„ Explain why MARS is more useful than result-set execution in SQL Server 2000.
„ Explain when multiple simultaneous reads can be beneficial for an application and the
implications of using this technique.
„ Describe scenarios in which it might be beneficial to use MARS to combine write and read
operations.
„ Explain the locking implications of using MARS and how these locks affect other transactions.




Section 1: Introduction to MARS


Section Overview
In earlier versions of SQL Server, database applications were not able to maintain multiple active
statements on a connection. An application had to process or cancel all the result sets from one
batch before it could execute any other batch on that connection.
In SQL Server 2005, connections can be enabled to support MARS. MARS allows applications to
have more than one active result set per connection, thereby improving application response time
and the end-user experience.
In this section, you will learn how MARS works and compare its use with that of the result-set
execution in SQL Server 2000.

Section Objectives
„ Explain how Microsoft ActiveX® Data Objects (ADO) manages multiple result sets created from
the same connection.
„ Explain how SQL Server 2005 processes batches that use MARS.
„ Identify the client libraries that support MARS, and explain how to enable MARS in each of these
libraries.
„ Explain how row versioning works.
„ Describe the implications that using the Snapshot isolation level has on SQL Server resources.
„ Explain the situations in which MARS is appropriate.




Demonstration: How ADO Manages Multiple Result Sets Created from the Same Connection


Introduction
In earlier versions of SQL Server, the Microsoft OLE DB provider for SQL Server (SQLOLEDB)
was the only data access provider that enabled applications to use implicit multiple connections.
These additional connections, opened transparently by the data access provider on the client side,
gave the appearance of multiple result sets executing over the same connection. However, the
SqlClient provider in Microsoft ADO.NET did not allow multiple result sets to execute over the
same connection, and it did not implement SQLOLEDB's implicit multiple-connection behavior.

Demonstration Overview
In this demonstration, your instructor will show how SQLOLEDB simulates multiple result sets
created from the same connection and will explain how this internal process might affect
application performance.




Task 1: Reviewing the Code for Executing Multiple Commands with SQLOLEDB

Task Overview
In this task, you will review the code needed to use SQLOLEDB to simulate MARS behavior by
opening implicit, non-pooled connections to cause the execution of multiple statements.
To review the code for executing multiple commands with SQLOLEDB, perform the following
steps.
1. Open the Microsoft Visual Studio® 2005 development environment.
2. Browse to D:\Democode\Section01\Demonstration1, and open the Demonstration1.sln solution.
3. Open the Form1.cs file in code view and view the connection string in line 11.
4. Scroll down to the ExecuteADOQuery method in line 32. Notice that only one
ADODB.Connection object is created.
5. In line 44, the connection cn is assigned to the qry1 command.
6. In line 47, the connection cn is assigned to the qry2 command.
7. In line 52, view the WHILE construction, and notice that qry2 is executed for every record in
qry1.

Task 2: Executing Multiple Commands with SQLOLEDB

Task Overview
This task illustrates how SQLOLEDB simulates MARS behavior by opening implicit, non-pooled
connections to execute multiple statements.
To execute multiple commands with SQLOLEDB, perform the following steps.
1. If you have closed Visual Studio, follow steps 1–4 from Task 1 before continuing.
2. Open SQL Server Profiler.
3. To create a new trace, on the File menu, click New Trace.
4. If prompted to connect to SQL Server, connect to the MIA-SQL\SQLINST1 server. SQL Server
Profiler will show the Trace Properties window.
5. To select the default values, click Run. SQL Server Profiler will start monitoring.
6. Switch to Visual Studio, and press F5 to start running the sample application in Debug mode. An
application named Multiple Results Sets with ADO loads.
7. To start running the sample application, click GO. The mouse cursor will change to an hourglass
icon. SQL Server should start receiving many requests. The two list boxes in the application user
interface (UI) will start to fill up with values. The box on the left displays the EmployeeIDs, and the
box on the right displays the employee first name and last name properties. The execution ends
when the mouse cursor changes to the arrow icon, and the application UI displays the elapsed time.
8. Close the Multiple Results Sets with ADO application.



9. Return to SQL Server Profiler.

10. Quickly scroll through the results to view the pattern of events that was recorded. The SQL
Server Profiler window should show a regular pattern like the following:
Open connection (Audit Login event)
Execute Query (SQL:BatchStarting and SQL:BatchCompleted events)
Close connection (Audit Logout event)
11. Close SQL Server Profiler.
12. Close Visual Studio 2005.




How MARS Works


Introduction
The implementation of MARS in SQL Server 2005 allows applications to submit more than one
batch of statements in an interleaved fashion over the same connection to SQL Server.
Data access providers that support MARS provide some optimization to enhance connections using
MARS. The optimization involves pooling expensive resources, such as the request-level execution
environment that is created as a copy of the session-level execution environment. To understand
how MARS works, you must understand the features of the session-level and request-level
execution environments.

Session-Level Execution Environment


Following are some important features of the session-level execution environment:
„ When a connection to SQL Server is opened, it creates a session-level default execution
environment for the connection in the server's memory.
„ This session-level execution environment consumes about 40 kilobytes (KB) of memory.
„ The session-level execution environment contains connection global values such as SET option
values, the current database context, execution-state variables, cursors, and references to
temporary tables and worktables.




Request-Level Execution Environment


Following are some of the important features of the request-level execution environment:
„ MARS requires each active statement to contain and manage its own copy of all the connection
global values.
„ A request-level execution environment represents a copy of the session-level execution
environment values for a specific active statement.
„ A request-level execution environment is represented on the database server as a virtual
connection that is dependent on a parent connection (the physical connection opened).
„ A new request-level execution environment is opened for each active statement that is executing at
the same time.

Pooling the Request-Level Execution Environment


Request-level execution environments are valuable resources. Creating and destroying these
environments incurs high overhead:
„ SqlClient and SQL Native Client have an internal structure to pool request level execution
environments.
„ SqlClient will create a nonconfigurable pool with a maximum of 10 mapping structures.
„ If you use more than 10 SqlCommand objects over the same SqlConnection, the extra request-level
environments are created and destroyed per use, which makes them more expensive.

Additional Information For more information on MARS, read “Multiple Active Result Sets
(MARS) in SQL Server 2005” by Christian Kleinerman at:
(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/MARSinSQL05.asp)




Client Libraries That Support MARS


Introduction
MARS requires changes to the communication protocol used to communicate with SQL Server.
Client applications communicate with the database server by using a data access provider library.
Not all data access provider libraries support MARS. Database developers should know which data
access providers allow applications to create connections that support MARS.
In this topic, you will learn about some of the features that data access libraries implement to
support MARS.

Data Access Libraries that Support MARS


The following are the data access libraries that support MARS:
„ ADO.NET 2.0 SqlClient
„ SQL Native Client
• SQL Native Client for Open Database Connectivity (ODBC)
• SQL Native Client for OLE DB
• SQL Native Client for ActiveX Data Objects (ADO)

Important Features of MARS


„ MARS is disabled by default when opening a new connection with any data access provider.
„ The infrastructure needed to support MARS is always created, regardless of whether MARS is
used.
„ When MARS is disabled, an exception is thrown to the client application if it tries to use
multiple active statements over the same connection.



„ MARS is supported only in SQL Server 2005.

Connection Strings
To configure MARS, use the following settings:
„ ADO.NET 2.0 SqlClient
• multipleActiveResultSets = true | false (connection string setting)
„ SQL Native Client ODBC
• Set SQLSetConnectAttr with: SQL_COPT_SS_MARS_ENABLED =
SQL_MARS_ENABLED_YES | SQL_MARS_ENABLED_NO
• Mars_Connection = yes | no (connection string setting)
„ SQL Native Client OLE DB:
• SSPROP_INIT_MARSCONNECTION = VARIANT_TRUE | VARIANT_FALSE (data
source initialization property)
• MarsConn = true | false (connection string setting)
„ SQL Native Client ADO:
• Mars_Connection = true | false (connection string setting)
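As a sketch, the ADO.NET 2.0 setting above is used as follows. The server, database, table, and column names are placeholder assumptions taken from the demonstrations in this session, not a required configuration:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class MarsExample
{
    static void Main()
    {
        // MARS is enabled per connection through the connection string.
        string connectionString =
            "Data Source=MIA-SQL\\SQLINST1;Initial Catalog=AdventureWorks;" +
            "Integrated Security=SSPI;MultipleActiveResultSets=True";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();

            SqlCommand outer = new SqlCommand(
                "SELECT TOP 10 ContactID FROM Person.Contact", connection);
            SqlCommand inner = new SqlCommand(
                "SELECT FirstName + ' ' + LastName FROM Person.Contact " +
                "WHERE ContactID = @id", connection);
            inner.Parameters.Add("@id", SqlDbType.Int);

            using (SqlDataReader reader = outer.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Without MARS, executing a second command while the
                    // reader is open throws an InvalidOperationException.
                    inner.Parameters["@id"].Value = reader.GetInt32(0);
                    Console.WriteLine(inner.ExecuteScalar());
                }
            }
        }
    }
}
```

Both commands share one physical connection; the second command runs on a logical session drawn from the request-level environment pool described earlier.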




How Row Versioning Works


Introduction
Row versioning provides the required infrastructure for optimistic concurrency control. Row
versioning is based on the principle that multiple readers and writers should not block each other,
and that each should work with its own version of the data. However, writers always block writers.
Depending on the chosen transaction isolation level, new readers see the last committed version of
data that is being modified by other transactions, instead of being blocked.
Row versioning forms the basis for a new transaction isolation level called Snapshot isolation, and
for a new setting for the Read Committed isolation level called Read Committed Snapshot.
Row versioning can also be used for creating inserted and deleted tables that are used by triggers or
when using MARS.
In this topic, you will learn how row versioning works and how it is used by MARS.

How Row Versioning Works


Logical copies or versions are maintained for all data modifications that are performed in a
database. The following steps explain how row versioning works.
1. Each time a row is modified, a version of the previously committed image of the row is stored in
TempDB.
2. The versions of the modified rows are linked by a linked list in TempDB.
3. The newest version is stored in the current database.
4. All read operations will retrieve the last version that was committed at the time that the
transaction started.
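The steps above can be observed with the Snapshot isolation level, which reads committed row versions from the version store. The following is a sketch, assuming the AdventureWorks sample database and two separate sessions:

```sql
-- One-time setup: enable row versioning for Snapshot transactions.
ALTER DATABASE AdventureWorks SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Session 1: modify a row but do not commit yet.
BEGIN TRANSACTION;
UPDATE Person.Contact SET Phone = '555-0100' WHERE ContactID = 1;

-- Session 2: a Snapshot reader is not blocked by the open update;
-- it retrieves the previously committed version of the row from TempDB.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Phone FROM Person.Contact WHERE ContactID = 1;  -- original value
COMMIT TRANSACTION;
```

Once session 1 commits and session 2 starts a new transaction, the reader sees the new value, because each Snapshot transaction reads the versions committed at the time it started.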




Row-Level Versioning and MARS


MARS uses row versioning to control multiple active requests to manipulate the same data.
Isolation should be maintained between multiple requests if there are incompatible operations (for
example, one request trying to read a record from a table while another request tries to modify the
record).
If a MARS session issues a data modification statement such as INSERT, UPDATE, or DELETE
when there is an active result set, the rows that are affected by the modification statement are
versioned.
Additional Information
Because TempDB is important during the row-versioning process, storage for the TempDB
database should be carefully optimized to work appropriately with the row-versioning
workload.

As a best practice, optimize TempDB to create one file per physical CPU in the server. Store
TempDB files in fast arrays of disks configured as RAID 5 or RAID 10.
For more information on MARS in SQL Server 2005, see the article “Multiple Active Result
Sets (MARS) in SQL Server 2005” on the Microsoft TechNet Web site
(http://www.microsoft.com/technet/prodtechnol/sql/2005/marssql05.mspx).
For information about the overhead and costs of using row-level versioning, see the “Row
Versioning Resource Usage” topic in SQL Server Books Online
(ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/0d4d63f4-7685-44a7-9537-20fe2f97dfc1.htm).
For more information about optimizing TempDB, see the Knowledge Base article “FIX:
Concurrency Enhancements for the TempDB Database” on the Microsoft Help and Support
Web site (http://support.microsoft.com/default.aspx?scid=kb;en-us;328551).




Demonstration: How the Snapshot Isolation Level Affects SQL Server Resources


Introduction
SQL Server provides a new transaction isolation level called Snapshot isolation. The Snapshot
isolation level uses row versioning to allow concurrent read operations over the same data, thereby
avoiding locking mechanisms.

Demonstration Overview
In this demonstration, your instructor will show how the Snapshot isolation level works and how it
affects server resources such as space consumption in TempDB.

Benefits of Using Snapshot Isolation


Following are the benefits of using Snapshot isolation:
„ Snapshot transactions reading data do not block other transactions from writing data. Transactions
writing data do not block Snapshot transactions from reading data.
„ Snapshot isolation is based on optimistic concurrency. (All other isolation levels are pessimistic
concurrency mechanisms.)

Costs Associated with Using Snapshot Isolation


Following are some of the costs associated with using Snapshot isolation:
„ The TempDB database must have enough space for the version store.
„ If TempDB runs out of space, update operations will continue to succeed, but read operations
might fail.
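The version store's consumption of TempDB can also be inspected from T-SQL through the dynamic management views introduced in SQL Server 2005, as a complement to the Performance Monitor counters used in this demonstration; a sketch:

```sql
-- Number of row versions currently held in the TempDB version store.
SELECT COUNT(*) AS version_store_rows
FROM sys.dm_tran_version_store;

-- Snapshot transactions that are keeping those versions alive; a
-- long-running reader (high elapsed_time_seconds) prevents cleanup
-- and can cause the version store to grow.
SELECT transaction_id, transaction_sequence_num, elapsed_time_seconds
FROM sys.dm_tran_active_snapshot_database_transactions;
```

If the first query returns a steadily growing count while the second shows an old transaction, that transaction is the likely cause of TempDB space pressure.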




Task 1: Setting Up the Monitoring and Execution Environment

Task Overview
In this task, you will set up the environment needed to execute the sample application. You will
create a new table to simulate concurrent read and write operations, set up Microsoft Windows®
Performance Monitor to monitor the relevant performance counters, and modify the
AdventureWorks database to support Snapshot transactions.
To set up the monitoring and execution environment, perform the following steps.
1. Open the Visual Studio 2005 development environment.
2. Browse to D:\Democode\Section01\Demonstration1, and open the Demonstration2.sln file.
3. In the ResetDatabase project, in the Change Scripts folder, open the EnableSnapshotIsolation.sql
file. View the ALTER DATABASE statement.
4. Right-click the EnableSnapshotIsolation.sql file, and then click Run On.
5. If prompted, create a new database reference, and use the MIA-SQL\SQLINST1 server instance
to connect to the AdventureWorks database.
6. In the Create Scripts folder, in the ResetDatabase project, open the CreateTable.sql file. View
the CREATE TABLE statement.
7. Right-click the CreateTable.sql file, and then click Run On.
8. If prompted, create a new database reference, and use the MIA-SQL\SQLINST1 server instance
to connect to the AdventureWorks database.
9. Open Microsoft Windows® Performance Monitor.
10. Press DEL or click the X sign on the toolbar as many times as necessary to delete all the
default running counters (if any).
11. To add new counters, click the plus (+) sign on the toolbar.
12. Add the following counters by selecting the appropriate values from the lists and clicking the
Add button:
Performance Object: MSSQL$SQLINST1:Transactions, Counter: Free Space in TempDB
(KB)
Performance Object: MSSQL$SQLINST1:Transactions, Counter: Snapshot Transactions
Performance Object: MSSQL$SQLINST1:Transactions, Counter: Transactions
Performance Object: MSSQL$SQLINST1:Locks, Counter: Lock Requests/sec,
Instance:_Total
13. To close the Add Counters window, click Close. Start monitoring.




Task 2: Reviewing the Code

Task Overview
In this task, you will review the code needed to show how the Snapshot Isolation level works.
To review the code, perform the following steps:
1. Return to the Visual Studio 2005 development environment.
2. In the Demonstration2 project, open the Form1.cs file in code view.
3. Scroll down to the definition of the ExecuteWriteOperations method in line 74.
4. Scroll down to the definition of the ExecuteReadOperations method in line 101.
5. Scroll down to the definition of the OpenReadConnection method in line 152.

Task 3: Executing Multiple Read and Write Operations

Task Overview
In this task, you will execute the sample application to show how the Snapshot Isolation level
works.
To execute multiple read and write operations, perform the following steps:
1. Right-click the Demonstration2 project, and then click Set as StartUp Project.
2. To start executing the application in Debug mode, press F5.
3. Arrange the windows so that the sample application UI and Windows Performance Monitor are
both visible.
The Transaction Snapshot Isolation Level window is loaded and will start executing.
Notice how the Last Read field is updated every two seconds.
Notice the current number of rows retrieved by the read operation on the SELECT COUNT(*)
field.
Notice that the Performance Monitor window displays regular lines with constant values.
4. Click Start.
Notice the Number of writes executed field.
Notice the current number of rows retrieved by the read operation on the SELECT COUNT(*)
field.
Notice that the Performance Monitor window displays the increment in the Lock Requests/sec
counter value.
5. Click Stop.
6. Click Reset Connection.
Notice that the Performance Monitor window displays constant values again after the Lock
Requests/sec counter stabilizes.



7. Click Start.

Notice the Number of writes executed field.


Notice the current number of rows retrieved by the read operation on the SELECT COUNT(*)
field.
8. Click the Reset Connection button multiple times—for example, every three seconds.
Notice the current number of rows retrieved by the read operation on the SELECT COUNT(*)
field.
9. Click Stop.
10. Close the Transaction Snapshot Isolation Level application.
11. Close Performance Monitor.
12. Close the Visual Studio 2005 development environment.




Discussion: When Is MARS Appropriate?


Introduction
MARS is a programming model enhancement that allows multiple requests to interleave on a client
computer over the same connection to the server. Although MARS does not support parallel
execution on a client computer, it might yield some performance benefits if used correctly and
when appropriate.
MARS is almost invisible to the application developer because it is only a setting on the connection
string. Be careful, however, when enabling it for certain scenarios.

Discussion Questions
1. What techniques do you use to minimize response time?

2. What techniques do you use to minimize network traffic when reading related data?

3. Do you have ADO code that needs to be migrated to ADO.NET and is missing this functionality?




4. How many connections do you open to SQL Server from each instance of an application? Why?

5. Is it possible to replace server-side, cursor-based operations by using MARS? Would that be a
good idea?




Section 2: Designing Query Strategies for Multiple Reads


Section Overview
The MARS infrastructure permits multiple requests to execute in an interleaved fashion over the
same connection. MARS utilizes row-level versioning as an optimistic concurrency control so that
multiple readers will not block each other and writers will not block readers. When reading related
results, MARS improves application response time and overall performance.
This section focuses on the scenarios in which multiple simultaneous reads can be beneficial for an
application, and on the implications of using this technique.

Section Objectives
„ Explain how MARS can improve the user experience by reducing response time.
„ Evaluate the considerations for using MARS to support multiple related results.
„ Compare the alternatives to MARS for implementing multiple active read operations.




Demonstration: Reducing Response Time Using MARS


Introduction
Using MARS is beneficial in specific scenarios, such as reading multiple related result sets over the
same connection.

Demonstration Overview
In this demonstration, your instructor will show how MARS can improve application performance
by reading multiple result sets using the same connection and filling user interface controls while
the results arrive, thereby reducing response time.

Task 1: Reviewing the Code

Task Overview
In this task, you will review the code needed to create a MARS connection.
To review the code, perform the following steps:
1. Open the Visual Studio 2005 development environment.
2. Browse to D:\Democode\Section02\, and open the MARSResponseTime.sln solution.
3. Open the code view for the Form1.cs file.
4. View the connection string in line 17.
5. View the sqlCommandText variable in line 18.
6. View the btGO_Click method in line 58.



7. View the FillList method in line 33.

Task 2: Monitoring MARS

Task Overview
In this task, you will set up the monitoring environment by using a dynamic view that returns the
number of physical and logical connections currently being used to connect to the database server.
To monitor MARS, perform the following steps:
1. Open SQL Server Management Studio.
2. If prompted to connect to SQL Server, connect to the MIA-SQL\SQLINST1 server.
3. To open a new query window, press CTRL+N.
4. Type the following query in the query window:
SELECT * FROM SYS.DM_EXEC_CONNECTIONS
WHERE SESSION_ID = ?
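When MARS is in use, each logical session appears in this view as an additional row whose net_transport column reports Session and whose parent_connection_id refers to the physical connection. A variant of the query above that highlights those columns (52 is a placeholder for the session ID displayed by the application):

```sql
-- Each MARS logical session appears as an extra row whose net_transport
-- value is 'Session' and whose parent_connection_id points at the
-- physical connection.
SELECT connection_id, parent_connection_id, net_transport
FROM sys.dm_exec_connections
WHERE session_id = 52;  -- replace 52 with the displayed System Process ID
```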

Task 3: Executing MARS

Task Overview
In this task, you will execute the sample application, creating five threads, executing a
SqlCommand on each thread, and filling a box from each thread. All five commands use the same
physical connection.
You will use SQL Server Management Studio to monitor the number of physical and logical
connections currently opened to SQL Server.
To execute MARS, perform the following steps:
1. Return to Visual Studio 2005 development environment.
2. To start running the sample application, press F5. The MARS Response Time window is loaded.
3. Click GO. The application displays a message box with the System Process ID. Notice the ID,
and do not close the message box.
4. Return to SQL Server Management Studio.
5. In the query window opened in Task 2, modify the query by replacing the ? symbol with the
number displayed in the message box.
6. To execute the query, press F5.
7. Return to the sample application, and then in the message box, click OK. The application starts
filling the five boxes with names.
8. Return to the sample application, and click Close Connection.
9. Return to SQL Server Management Studio.
10. To execute the query again, press F5.
11. Close the MARS Response Time application.
12. Close SQL Server Management Studio.



13. Close the Visual Studio 2005 development environment.

Considerations for Using MARS to Support Multiple Related Results


Introduction
MARS changes some of the basic assumptions about multiple statements that execute concurrently.
The main benefit of MARS is that you can use the same connection for multiple reads without
having to create new connections.
The MARS implementation in SQL Server 2005 avoids blocking issues by allowing the
interleaving of compatible commands at specific points while some other commands are not
interleaved to avoid conflicts.
This topic focuses on the various points that you should consider for using MARS.

Considerations for Using MARS


Following are some things to consider when using MARS to read multiple results and execute
stored procedures based on the data retrieved from result sets.
■ MARS cannot track connection or transaction properties changed with T-SQL.
• To change connection properties and manage transactions, use API calls rather than T-SQL
statements. By using API calls, the data access provider remains aware of any change in the
execution environment. This is especially important when working with MARS, so do not
change the connection environment settings with T-SQL commands in data access applications.
■ Data modification statements block the ability to interleave requests.
• Executing stored procedures or statements that modify data (INSERT, UPDATE, DELETE)
causes blocking among multiple requests, because such a statement must run to completion
before execution can be switched to other MARS requests.


■ Using the same connection for more than 10 concurrent commands causes overhead.
• If you are using ADO.NET 2.0 SqlClient to connect to the database server, do not use the same
connection for more than 10 commands at the same time, because the application will incur
serious overhead. The MARS implementation keeps a hard-coded pool of up to 10 reusable
logical sessions; after the tenth session, sessions are created and destroyed per request.
■ Issuing short result sets generated by single SQL statements that read information (SELECT,
FETCH, RECEIVE) provides better overall performance.
• MARS uses row-level versioning when mixing multiple read operations. This concurrency
management mechanism does not use locks.
• Single SQL statements execute in an implicit and independent transactional context; therefore,
there is no resource contention on the database server.
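The first consideration above, keeping environment changes in the API, might look as follows in ADO.NET 2.0. The MultipleActiveResultSets connection string keyword is how SqlClient enables MARS; the server name, database, and UPDATE statement are placeholders taken from the demonstration environment:

```csharp
using System.Data;
using System.Data.SqlClient;

class ApiManagedTransaction
{
    static void Run()
    {
        // MARS is enabled per connection with the MultipleActiveResultSets keyword.
        string connStr = @"Data Source=MIA-SQL\SQLINST1;Initial Catalog=AdventureWorks;" +
                         "Integrated Security=SSPI;MultipleActiveResultSets=True;";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Use BeginTransaction rather than a "BEGIN TRAN" / "SET TRANSACTION
            // ISOLATION LEVEL" batch, so the provider tracks the change itself.
            SqlTransaction tx = conn.BeginTransaction(IsolationLevel.RepeatableRead);

            SqlCommand cmd = new SqlCommand(
                "UPDATE Production.Product SET ModifiedDate = GETDATE() " +
                "WHERE ProductID = @id", conn, tx);
            cmd.Parameters.AddWithValue("@id", 1);
            cmd.ExecuteNonQuery();

            tx.Commit(); // again through the API, not a "COMMIT TRAN" batch
        }
    }
}
```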

Additional Information
For more information on MARS, visit the blog
http://blogs.msdn.com/dataaccess/archive/2005/08/02/446894.aspx.


Discussion: Alternatives to MARS for Implementing Multiple Active Reads


Introduction
There are many alternatives to MARS for implementing multiple active read requests. However,
none of these alternatives is designed specifically to allow multiple requests over the same
connection. By using these alternatives, applications incur extra overhead.
MARS was designed to allow multiple requests to execute over the same connection. Moreover,
MARS implicitly takes care of the complexity involved in implementing such a solution.
In this discussion, you will compare the alternatives to MARS for implementing multiple active
reads.

Discussion Questions
1. Can server-side cursors replace MARS?

2. Do you need to use MARS to improve response time?


3. How can you interleave multiple reads without using MARS?

4. Is there any task that can be solved only by using MARS?


Section 3: Designing Query Strategies for Mixing Reads and Writes in the Same Connection


Section Overview
When using MARS, applications can use the same connection to issue multiple requests to the
database server. For example, you can read multiple related sets through a single connection to a
database server. Database developers and administrators must understand what happens when there
is a mix of read and write operations over the same connection.
Because MARS allows mixing read and write operations over the same connection, write
statements can potentially block read statements that are being executed over the same connection.
To take advantage of MARS and minimize blocking between read and write operations, carefully
design the execution order of the read and write statements in queries.

Section Objectives
■ Explain the valid combinations of read and write operations that you can use in MARS.
■ Explain how to use MARS to implement some common combinations of read and write
operations.
■ Explain the guidelines for using MARS to mix read and write operations.
■ Explain how to perform SQL Server Service Broker operations while reading an active result set.
■ Compare the alternatives to MARS for combining read and write operations.


Valid Combinations of Operations


Introduction
Any database request can be sent over MARS, including the following:
■ Requests to retrieve data
■ Requests to execute stored procedures
■ Data definition language (DDL) statements
■ Data manipulation language (DML) statements

However, a database request might be incompatible with the previous requests executed over the
same connection in terms of locking, blocking, and transactional behavior.
To use MARS to send multiple database requests over a connection, you must arrange the requests
in the correct order and combination.

How Does MARS Interleave Multiple Statements?


■ To execute multiple read and write requests over the same connection, MARS maintains
separate network buffers so that one request’s results do not pile up or mix with another’s.
■ MARS executes multiple statements in an interleaved manner by switching between the
compatible statements and executing them as they arrive; for example, executing a SELECT
statement and fetching a cursor with the FETCH statement.
■ Interleaving is a synchronous process. When interleaving, the thread must wait for the
completion of the statement on the database server.
■ Depending on the type of operation to be executed, some statements might create conflicts
between interleaved statements.


Statements That Can Run in an Interleaved Manner


The following statements are compatible and do not create blocking situations:
■ SELECT
■ FETCH
■ READTEXT
■ RECEIVE
■ BULK INSERT
■ Asynchronous cursor population

Statements That Cannot Run in an Interleaved Manner


Statements that require exclusive access to data before switching to other statements cannot run in
an interleaved manner. The following are examples of such statements:
■ DDL statements
■ DML statements
■ Calls to stored procedures
■ Statements inside a transaction

MARS serializes the execution of such statements.
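As a sketch of the compatible case, two SELECT statements can hold open result sets on the same connection and be drained in an interleaved fashion. The table names are AdventureWorks placeholders; without MultipleActiveResultSets=True in the connection string, the second ExecuteReader call would throw an InvalidOperationException:

```csharp
using System.Data.SqlClient;

class InterleavedReads
{
    // connStr is assumed to include MultipleActiveResultSets=True.
    static void ReadTwoResultSets(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            SqlCommand cmdA = new SqlCommand(
                "SELECT ProductID FROM Production.Product", conn);
            SqlCommand cmdB = new SqlCommand(
                "SELECT ContactID FROM Person.Contact", conn);

            using (SqlDataReader readerA = cmdA.ExecuteReader())
            using (SqlDataReader readerB = cmdB.ExecuteReader()) // second active result set
            {
                // Non-short-circuit OR: advance both readers until both are drained.
                // MARS interleaves the two SELECTs at well-defined points.
                while (readerA.Read() | readerB.Read())
                {
                    // process the current row of either result set here
                }
            }
        }
    }
}
```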

Additional Information
• For more information about how interleaving operations work, read “Multiple Active Result
Sets (MARS) in SQL Server 2005” by Christian Kleinerman
(http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/MARSinSQL05.asp).
• For more information about how MARS executes multiple requests within transactions, read
the next section of this session, “Concurrency Considerations When Using MARS.”


Demonstration: Combining Read and Write Operations by Using MARS


Introduction
Combining read and write operations by using MARS results in nondeterministic execution,
because interleaving depends on which types of commands are executed and on the order in
which the database server processes them.
Row versioning eases the restrictions on interleaving incompatible commands, because each
read command works with its own copy, or snapshot, of the rows, thereby reducing contention
and blocking.
In this demonstration, you will learn how to implement some common combinations of read and
write operations by using MARS.

Demonstration Overview
In this demonstration, some common combinations of read and write operations are implemented
by using MARS. The sample application uses the following types of queries:
• Interleaving UPDATE commands with SELECT commands
• Interleaving DELETE commands with SELECT commands
• Interleaving INSERT commands with SELECT commands
• Interleaving stored procedure execution with SELECT commands
These all follow the same basic code template, which can be reused in other applications.
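The template itself lives in the demonstration files; in outline, one of these combinations, an UPDATE issued while a SELECT’s reader is still open on the same MARS connection, might be sketched as follows. Table and column names are placeholders, and note that the UPDATE, being a write, runs to completion before interleaving resumes:

```csharp
using System.Data;
using System.Data.SqlClient;

class ReadWriteCombination
{
    // marsConnStr is assumed to include MultipleActiveResultSets=True.
    static void UpdateWhileReading(string marsConnStr)
    {
        using (SqlConnection conn = new SqlConnection(marsConnStr))
        {
            conn.Open();

            SqlCommand select = new SqlCommand(
                "SELECT ProductID FROM Production.Product", conn);
            SqlCommand update = new SqlCommand(
                "UPDATE Production.Product SET ModifiedDate = GETDATE() " +
                "WHERE ProductID = @id", conn);
            update.Parameters.Add("@id", SqlDbType.Int);

            using (SqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // A write on the same connection while the reader is open:
                    // legal under MARS, but it blocks interleaving until done.
                    update.Parameters["@id"].Value = reader.GetInt32(0);
                    update.ExecuteNonQuery();
                }
            }
        }
    }
}
```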


Task 1: Setting Up the Execution Environment

Task Overview
In this task, you will configure the execution environment: two new tables will be created, one of
them will be filled with sample data, and a new stored procedure will be created. All of these
objects will support the application code, which you will review in the next task.
To set up the execution environment, perform the following steps:
1. Open the Visual Studio 2005 development environment.
2. Browse to D:\Democode\Section03\Demonstration1, and open the MARSReadAndWrite.sln
solution.
3. In the Create Scripts folder, right-click the CreateTable.sql file, and then click Run On.
4. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference, and connect to the AdventureWorks database.
5. Right-click the CreateDoSomethingProc.sql file, and then click Run On.
6. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference, and connect to the AdventureWorks database.
7. In Visual Studio Solution Explorer, right-click the Demonstration1 project, and then click Set
as StartUp Project.

Task 2: Reviewing a Pattern to Call Read and Write Operations with MARS

Task Overview
In this task, you will review the code needed to call multiple read and write operations with MARS.
A common template for such calls is shown. You can reuse the template in your applications.
To review a pattern to call read and write operations with MARS, perform the following steps:
1. In Visual Studio Solution Explorer, in the Demonstration1 project, open the code view of the file
Form1.cs.
2. Scroll down to the btGO_Click method in line 18.
3. Scroll down to the ExecuteINSERT method in line 51.
4. Scroll down to the ExecuteUPDATE method in line 68.
5. Scroll down to the ExecuteSP method in line 85.
6. Scroll down to the ExecuteDELETE method in line 102.
7. In Visual Studio Solution Explorer, in the Demonstration1 project, open the code view of the file
CallingPattern.cs.


Task 3: Executing Multiple Read and Write Operations with MARS

Task Overview
In this task, you will execute the sample application, which uses the previously reviewed common
template to call multiple read and write operations with MARS. The application reports the
execution time for each statement under different configurations; for example, response time will
vary depending on whether or not transactions are enabled.
To execute multiple read and write operations with MARS, perform the following steps:
1. To start executing the sample application, press F5. The Combining Read and Write Operations
Using MARS window is loaded on the screen.
2. Click GO. Notice the elapsed times.
3. Click GO again. Notice the elapsed times again.
4. Select the Transactional check box, and then in the list, select the RepeatableRead value.
5. Click GO again. Notice the elapsed times again.
6. In the list box, select the Serializable value.
7. Click GO again. Notice the elapsed times again.
8. Close the Combining Read and Write Operations Using MARS application.
9. Close the Visual Studio 2005 development environment.


Guidelines for Mixing Read and Write Operations by Using MARS


Introduction
Because of the way that MARS is designed and the way that database servers maintain data
consistency, be careful when mixing read and write operations.
Database developers should understand the implications that invalid combinations of operations
might have on both the client and the server. On the client, invalid combinations of operations
might affect the interleaving performed by MARS. Poor usage of MARS can lead to latency and
blocking on the server.
In this topic, you will learn about the guidelines for mixing read and write operations using MARS.

Choose the right sequence of execution of statements


If possible, order the execution sequence of the mixed statements so that all the read operations
execute first, and then all the write operations execute. Consider the following facts when selecting
the correct sequence of execution of statements:
■ Compatible operations, or operations that read data, execute in an interleaved fashion.
■ Incompatible operations, or operations that modify data or schema, block interleaved execution.
■ Row versioning enables simultaneous execution of compatible and incompatible operations
without blocking. However, read operations will not see the results of modifying operations.

Keep the result set size minimal


■ Small-sized data takes less time to process, so locks will also be kept for less time.
■ Reduce blocking over large result sets to favor concurrency.


Open just one active transaction over MARS


■ Only one transaction at a time can be executed through a physical connection to a database server.
■ MARS raises an exception if two statements, one of which is transactional and the other is not, are
executed using the same connection to a database server.

Include multiple statements inside the same local transaction


■ In SQL Server, every batch, including SELECT statements, is executed inside an implicit
transaction.
■ Single batches do not mix with transactional batches, because this has the same effect as two
transactions over the same connection. You can solve this problem by opening an explicit
transaction with a bigger scope so that all batches execute inside the same transactional context.
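That last guideline can be sketched as follows: a single explicit SqlTransaction scopes every command on the MARS connection, so no batch runs in its own implicit transaction alongside the explicit one (which is the mix that raises the exception). The statements are placeholders:

```csharp
using System.Data;
using System.Data.SqlClient;

class SingleTransactionScope
{
    // marsConnStr is assumed to include MultipleActiveResultSets=True.
    static void Run(string marsConnStr)
    {
        using (SqlConnection conn = new SqlConnection(marsConnStr))
        {
            conn.Open();

            // One explicit transaction provides the shared transactional context.
            SqlTransaction tx = conn.BeginTransaction();

            SqlCommand read = new SqlCommand(
                "SELECT TOP 10 ProductID FROM Production.Product", conn, tx);
            SqlCommand write = new SqlCommand(
                "UPDATE Production.Product SET ModifiedDate = GETDATE() " +
                "WHERE ProductID = @id", conn, tx);
            write.Parameters.Add("@id", SqlDbType.Int);

            using (SqlDataReader reader = read.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Both batches execute inside the same local transaction.
                    write.Parameters["@id"].Value = reader.GetInt32(0);
                    write.ExecuteNonQuery();
                }
            }

            tx.Commit();
        }
    }
}
```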

Do not execute more than 10 concurrent commands over the same MARS connection
■ Data access providers such as SqlClient and SQL Native Client (SQLNCLI) can support a limited
number of commands without negatively affecting performance.
■ Both SqlClient and SQLNCLI provide pooling of execution sessions that support only 10
commands over a connection. After the tenth command, the application incurs unnecessary
overhead to create and destroy execution sessions on demand.


Demonstration: How to Perform SQL Server Service Broker Operations While Reading an Active Result Set


Introduction
Using SQL Server Service Broker, you can build reliable, asynchronous, and message-based
database applications.
SQL Server Service Broker operations can also be called through a connection that is configured
with MARS. Service Broker operations interleave with other compatible read operations and wait
for the completion of incompatible operations. For example, one request may execute a DDL
statement to modify a SQL Server Service Broker queue schema while another request tries to
read from the same queue.
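As a sketch of such a combination, a RECEIVE can be issued on the same MARS connection while a result set is still open. The queue and table names below are placeholders for the objects that the demonstration scripts create:

```csharp
using System.Data.SqlClient;

class ServiceBrokerOverMars
{
    // marsConnStr is assumed to include MultipleActiveResultSets=True;
    // dbo.DemoTargetQueue is a placeholder Service Broker queue name.
    static void ReceiveWhileReading(string marsConnStr)
    {
        using (SqlConnection conn = new SqlConnection(marsConnStr))
        {
            conn.Open();

            SqlCommand select = new SqlCommand(
                "SELECT TOP 5 ContactID FROM Person.Contact", conn);

            // RECEIVE is one of the statements that can interleave with reads.
            SqlCommand receive = new SqlCommand(
                "WAITFOR (RECEIVE TOP (1) conversation_handle, message_body " +
                "FROM dbo.DemoTargetQueue), TIMEOUT 1000;", conn);

            using (SqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Second active request on the same connection; returns null
                    // if the one-second timeout expires with no message queued.
                    object handle = receive.ExecuteScalar();
                }
            }
        }
    }
}
```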

Demonstration Overview
In this demonstration, your instructor will show how to perform SQL Server Service Broker
operations while reading an active result set and executing a stored procedure. Using MARS
eliminates the need for a separate connection to SQL Server to use SQL Server Service Broker.


Task 1: Setting Up the Execution Environment

Task Overview
In this task, you will configure the execution environment. A new stored procedure and all the
infrastructure for SQL Server Service Broker need to be created. All of these objects will support
the application code, which will be reviewed in the next task.
To set up the execution environment, perform the following steps:
1. Open the Visual Studio 2005 development environment.
2. Browse to D:\Democode\Section03\Demonstration2, and open the SSBandMARS.sln file.
3. In the Section03 folder, open the ResetDatabase project.
4. In the Create Scripts folder, right-click the CreateTable.sql file, and then click Run On.
5. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference, and connect to the AdventureWorks database.
6. In the Create Scripts folder, open the CreateComplexSP.sql file.
7. Right-click the CreateComplexSP.sql file, and then click Run On.
8. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference, and connect to the AdventureWorks database.
9. In the Create Scripts folder, open the CreateSSBInfrastructure.sql file.
10. Right-click the CreateSSBInfrastructure.sql file, and then click Run On.
11. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference, and connect to the AdventureWorks database.
12. In Visual Studio Solution Explorer, right-click the Demonstration2 project, and then click Set
as StartUp Project.


Task 2: Reviewing How to Call Read Operations with MARS Using SQL Server Service Broker

Task Overview
In this task, you will review the code needed to call multiple read operations using MARS,
including a SELECT statement, a stored procedure, and a SQL Server Service Broker service.
To call read operations with MARS and use SQL Server Service Broker, perform the following
steps:
1. In Visual Studio Solution Explorer, in the Demonstration2 project, open the code view of the
Program.cs file.
2. Scroll down to the Main method in line number 28. Your instructor will review the code
implemented by the Main method. Scroll down as required to keep pace with the instructor.
3. To execute the sample application, press F5. A console window loads into memory and starts
executing. The following messages are printed on the screen:
“SQL Server Service Broker conversation handler: identifier”
From 1 to 99 of:
“Navigating record: n”
“Sending message to SQL Server Service Broker”
“Retrieving all messages waiting in SQL Server Service Broker queue”
From 0 to 98 of:
Message0: identifier


Discussion: Alternatives to MARS for Combining Read and Write Operations


Introduction
MARS provides a way for client-side logic to execute multiple read and write operations over the
same connection to a database server. MARS is appropriate in some usage scenarios, but in
others it introduces unnecessary overhead or is otherwise not appropriate.
Are there any alternatives?

Discussion Questions
1. Can server-side cursors replace MARS?

2. Do you need to interleave read and write operations in the same connection?

3. How can you interleave read and write operations without using MARS?


4. Is there any task that can be solved only by using MARS?


Section 4: Concurrency Considerations When Using MARS


Section Overview
Using MARS to issue multiple requests to a database server through a single physical connection
has various implications on the client and the server, especially with regard to locking behavior.
MARS requires various server resources to control concurrent access. For example, MARS
requires:
■ Memory to maintain multiple request-level sessions.
■ Locks on tables and databases to control concurrency.
■ Disk space to support row versioning in the TempDB database.

This section focuses on the locking implications of using MARS and on how these locks affect
other transactions running concurrently on a database server.

Section Objectives
■ Explain the locking behavior of MARS for the various transaction isolation levels.
■ Explain how to monitor locks and blocked connections while using MARS.
■ Explain the guidelines for maximizing concurrency while using MARS.


Locking Behavior When Using MARS


Introduction
Similar to other database requests, requests made over a MARS-enabled connection need to interact
with the SQL Server Lock Manager to control concurrent access to resources such as tables, rows,
and indexes.
Allocation pages, system tables usage, locking mechanisms, and transaction isolation levels behave
in the same way, regardless of whether the connection uses MARS.
However, because MARS allows multiple requests to be sent over the same connection, SQL
Server needs to prevent requests executing over the same connection from blocking each other.
When using MARS connections, locking behavior will depend on:
■ The execution of transactional requests.
■ The type of operations to be executed.

Locking behavior when using MARS with transactional requests


The transaction isolation level indicates how long the transaction will keep locks on read rows.
The following table describes the locking behavior of MARS when you execute transactional and
nontransactional requests.
Transactional requests:
• MARS allows only one transaction over a physical connection, regardless of the number of
active requests.
• The database server uses regular locking, blocking, and isolation semantics at the
transaction level.

Nontransactional requests:
• MARS executes multiple implicit transactions (one for each active request) over a physical
connection.
• The database server uses regular locking, blocking, and isolation semantics at the request
level.


Locking behavior when using MARS for different types of operations


The locking behavior of MARS is also dependent on the type of operation being executed.
The following table describes the locking behavior of MARS when you execute compatible and
incompatible operations.
Compatible operations:
• Compatible operations are requests that will not block each other when executed concurrently
over the same MARS connection.
• MARS will interleave multiple requests over the same connection.
• The database server uses row-level versioning.

Incompatible operations:
• Incompatible operations are requests that will block each other when executed concurrently
over the same MARS connection.
• MARS will not interleave other requests until the completion of the blocking request.
• The database server will request locks on resources from the SQL Server Lock Manager.

Additional Information
For more information about locking, read the chapter “Locking in the Database Engine” in
SQL Server Books Online (ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/c626d75f-ff62-41bb-9519-10db3b50bee5.htm).


Demonstration: Monitoring Locks and Blocked Connections While Using MARS


Introduction
Database administrators should continuously monitor MARS behavior. SQL Server 2005
implements a set of tools, counters, and dynamic views to obtain real-time information about
resource consumption by MARS.
The benefits of monitoring MARS behavior are:
• Early detection of locking and blocking.
• Understanding the access pattern of applications using MARS.

Demonstration Overview
In this demonstration, the sample application will execute different types of queries combining
read and write operations. Your instructor will use SQL Server tools to monitor locks and blocked
connections while using MARS and will show how to detect, in the code, the issue that is
producing the block.
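Activity Monitor is used in the tasks below; the same blocking information can also be queried directly from the dynamic management views, for example (a sketch; the DMV and column names are as documented for SQL Server 2005):

```csharp
using System;
using System.Data.SqlClient;

class BlockingMonitor
{
    static void PrintBlockedRequests(string connStr)
    {
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Each row is a request currently blocked by another session.
            SqlCommand cmd = new SqlCommand(
                "SELECT session_id, blocking_session_id, wait_type, wait_time " +
                "FROM sys.dm_exec_requests WHERE blocking_session_id <> 0", conn);

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("session {0} blocked by {1} ({2}, {3} ms)",
                        reader.GetInt16(0),
                        reader.GetInt16(1),
                        reader.IsDBNull(2) ? "unknown wait" : reader.GetString(2),
                        reader.GetInt32(3));
                }
            }
        }
    }
}
```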


Task 1: Setting Up the Execution Environment

Task Overview
In this task, you will configure the execution and monitoring environments.
To set up the execution environment, perform the following steps.
1. Open SQL Server Management Studio.
2. If prompted to connect to SQL Server, connect to the MIA-SQL\SQLINST1 server.
3. To open the Object Explorer window, press F8.
4. In the Object Explorer window, open the tree shown.
5. Expand the Management folder.
6. Double-click the Activity Monitor node.
7. Open the Visual Studio 2005 development environment.
8. Browse to D:\Democode\Section04\Demonstration1, and open the Demonstration1.sln file.
9. In the Create Scripts folder of the ResetDatabase project, right-click the CreateTable.sql file
and then click Run On.
10. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference, and connect to the AdventureWorks database.
11. Right-click the CreateDoSomethingProc.sql file, and then click Run On.
12. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference, and connect to the AdventureWorks database.

Task 2: Monitoring Locking and Blocking in SQL Server 2005

Task Overview
In this task, you will use the SQL Server Management Studio Activity Monitor tool to monitor the
type of locks being used and the T-SQL command that is causing the locks to occur.
To monitor locking and blocking in SQL Server 2005, perform the following steps.
1. To start running the sample application, press F5.
2. The Combining Read and Write Operations Using MARS window is loaded on the screen.
3. Click GO. (Do not wait for any results yet. Continue with the next step.)
4. Return to the Activity Monitor window and ensure that the Process Info option is selected in the
Select a Page pane in the upper left.
5. To refresh the window, press F5.
Scroll to the right and view the other columns that were not visible earlier.
6. To exercise the window, return to the sample application, click GO, quickly return to the
Activity Monitor window, and press F5. Continue doing this until you see a value other than 0 on

the Blocked by or Blocking columns.

The columns can be reordered for easier viewing by clicking the column header and dragging it to
the desired position on the grid.
Notice the value in the Blocked by column for all the rows that have the Database column set to
AdventureWorks and that have the Application column set to .NET SqlClient Data Provider.
Notice the Process ID shown in the Blocked by column; it will be used in the subsequent steps.
7. In the Select a Page pane in the upper left, select the Locks by Process option.
8. In the Selected process box, select the process ID that appeared in the Blocked by column in the
Process Info view.
Notice the value in the Object ID column.
9. In the Select a Page pane in the upper left, select the Locks by Object option.
10. In the Selected object box, click AdventureWorks.Table2.
11. Click the Description column header to order by this column.
12. Scroll down if necessary to see all the values.


Guidelines for Maximizing Concurrency When Using MARS


Introduction
Concurrency defines the degree of parallelism that an application supports. The more actions
that can occur concurrently in an application, the better the response time for providing results
to the caller application or end user.
Concurrency is not an automatic feature of using MARS, and you should follow guidelines to
maximize concurrency.
These guidelines provide information about how to implement efficient MARS-enabled
applications.
The benefits of following these guidelines include:
■ Improved application performance.
■ Better execution of applications running concurrently on a database server.
■ Better resource management by the database server.

When executing requests under a MARS-enabled connection, consider the following two potential
bottlenecks:
■ Contention at the connection level when requests are not able to execute in an interleaved fashion.
■ Contention at the database level because of the locks held by the operations that are being
executed.

In this topic, you will learn about the various guidelines for maximizing concurrency while using
MARS.


Execute as many read operations as possible over a MARS connection


When you use result-set-based operations, execute as many read operations as possible over
a MARS connection. This is because:
■ Read operations execute in an interleaved fashion in a connection.
■ Read operations use row-level versioning, and therefore no locking is required.

Execute write operations on a different connection from MARS


Write operations run exclusively. They do not run in an interleaved manner. Write operations also
acquire locks. Therefore, if possible, execute write operations on a connection other than a MARS
connection.
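Combining this guideline with the previous one in code: the reads stay on the MARS-enabled connection, and the writes go through a second, ordinary connection (a sketch with placeholder names):

```csharp
using System.Data;
using System.Data.SqlClient;

class SplitReadsAndWrites
{
    // marsConnStr includes MultipleActiveResultSets=True; plainConnStr does not.
    static void Run(string marsConnStr, string plainConnStr)
    {
        using (SqlConnection readConn = new SqlConnection(marsConnStr))
        using (SqlConnection writeConn = new SqlConnection(plainConnStr))
        {
            readConn.Open();
            writeConn.Open();

            SqlCommand select = new SqlCommand(
                "SELECT ProductID FROM Production.Product", readConn);
            SqlCommand update = new SqlCommand(
                "UPDATE Production.Product SET ModifiedDate = GETDATE() " +
                "WHERE ProductID = @id", writeConn);
            update.Parameters.Add("@id", SqlDbType.Int);

            using (SqlDataReader reader = select.ExecuteReader())
            {
                while (reader.Read())
                {
                    // The write never blocks interleaving on the MARS connection,
                    // because it executes on its own physical connection.
                    update.Parameters["@id"].Value = reader.GetInt32(0);
                    update.ExecuteNonQuery();
                }
            }
        }
    }
}
```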

Execute transactional requests over exclusive connections


Transactional requests under a MARS connection need to run to completion before other requests
can be executed, and only one transaction per physical connection can be executed. Therefore, if
possible, execute transactional requests over exclusive connections.

Execute transactional requests and write operations as quickly as possible
Keep transactional requests and write operations as simple and as small as possible. The longer these
operations execute, the longer the locks are held over shared resources, and this leads to contention at
the server level.

Constantly monitor resource usage while using MARS


When executing requests with MARS, understanding the usage patterns and how they affect database
server resources helps application and database developers calibrate MARS usage.

Maintain sufficient disk space availability for TempDB to grow


■ The TempDB database is used by the row-level versioning mechanism to store the various row
snapshots when executing read operations.
■ TempDB can grow to accommodate the number of read operations combined with write
operations.
■ If TempDB cannot grow, SQL Server will start to refuse read operations.

Optimize access to the TempDB database


Because of the key role played by the TempDB database in the MARS process, optimize access to this
database by using striped disk arrays and multiple data files.
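As an illustration of the multiple-data-files guideline, additional tempdb data files can be added with ALTER DATABASE. The file path and sizes below are assumptions for demonstration only, not recommendations:

```sql
-- Add a second data file to TempDB so that allocations are spread across files.
-- The path and sizes are illustrative; place the file on a separate striped array.
ALTER DATABASE tempdb
ADD FILE (
    NAME = tempdev2,
    FILENAME = 'E:\SQLData\tempdb2.ndf',
    SIZE = 512MB,
    FILEGROWTH = 128MB
);
```

TempDB is re-created at every service restart, so file additions such as this persist in the database definition but the files themselves are reinitialized each time.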


Next Steps


Introduction
The information in this section supplements the content provided in Session 4.
„ “Multiple Active Result Sets (MARS) in SQL Server 2005” by Christian Kleinerman
• http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/MARSinSQL05.asp
„ “Locking in the Database Engine” in SQL Server Books Online
• ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/c626d75f-ff62-41bb-9519-10db3b50bee5.htm
„ “Tuning and Optimizing Queries using Microsoft SQL Server 2005.” This course is a part of the
Microsoft official curriculum for SQL Server 2005. This course covers the server-side aspects of
using MARS connections.


Discussion: Session Summary


Discussion Questions
1. What was most valuable to you in this session?

2. Based on what you learned in this session, have you changed your mind about anything?

3. Are you planning to do anything differently on the job? If so, what?


Session 5: Designing Caching Strategies for
Database Applications

Contents
Session Overview 3
Section 1: Why Caching Is Important 4
Section 2: Data and Query Caching in SQL Server 2005 15
Section 3: Using Caching Technologies Outside of SQL Server 25
Section 4: Custom Caching Techniques 36
Discussion: Session Summary 48


Session Overview


Introduction
Caching is the process of persisting a copy of data and objects locally. Caching allows for faster
access to the required data and reduces the overhead associated with frequent retrieval of data from
the original data source. Object caching reduces the overhead of object creation and destruction.
Developers can improve the performance and scalability of database applications by applying
caching techniques in various layers of the application. Different caching techniques can be applied
to the database, middle tier, and client layers of an application.
Developers must understand the appropriate situations in which to use the various caching
techniques. By designing an appropriate caching strategy, developers can quickly build high-quality
applications that require the use of these techniques.
Microsoft® SQL Server™ 2005 provides various internal caching mechanisms. Although these
caching mechanisms are automatically managed by SQL Server, developers can optimize their use
by making them interact with the database management system in appropriate ways.
This session focuses on optimizing system resources by caching data and objects in the appropriate
layers. Correctly optimizing applications by implementing caching will result in reduced resource
utilization and therefore better performance of the system. Resources such as memory, physical I/O,
and network bandwidth can also be optimized by using caching methodologies.


Session Objectives
„ Explain why caching is important.
„ Explain the advantages of using the data and query caching that is automatically performed by
SQL Server 2005.
„ Explain how caching data outside of SQL Server works and how to manage conflicts that might
arise.
„ Explain the various ways to cache frequently used data, objects, and results in the appropriate tier
to improve performance.


Section 1: Why Caching Is Important


Section Overview
The various types of caching technologies that can be used include connection pooling, parameter
set caching, and in-memory caching of data or business objects.
Developers must understand that there are certain disadvantages in any caching scenario. In most
situations, developers worry about not caching enough data to ensure application performance.
However, excessive data can also be cached, and in such cases, the amount of resource utilization
involved in building the cache can outweigh its benefits to the application.
The appropriate caching technique to be used depends on the application tier that is being
considered. For example, denormalization of data can be used in the database, but it cannot be used
in the application tier, where an in-memory cache might be more effective. When designing a
solution, developers must consider the scenario in which caching will occur.
In this section, you will learn about the performance implications of different types of caching. You
will also learn how to determine the appropriate amount of data to cache.

Section Objectives
„ Describe how caching affects application performance.
„ Explain the performance implications of using various types of caching.
„ Describe how you can help application developers determine the amount of data to cache.
„ Explain the guidelines for using caching techniques in each layer of a database application.


Demonstration: Executing an Application With and Without Caching


Introduction
Data caching plays an important role in improving performance for most database applications. In
addition, caching frequently used objects might also produce important performance benefits. This
fact is often ignored by database developers.
You can use various types of caching techniques in the application tier, such as:
„ Connection pooling, which caches database connections in a shared pool, avoiding expensive and
repetitive allocation and deallocation of connection resources.
„ Stored procedure parameter set caching, where commonly used parameters for stored procedures
are held in the application memory instead of being reinitialized for each call.
„ In-memory caching of DataSet objects to hold data that does not change frequently.
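For reference, SqlClient connection pooling is controlled entirely from the connection string. The Pooling and Max Pool Size keywords below are standard; the server and database names are placeholders:

```
Pooling enabled (the default), pool capped at 100 connections:
  Server=MyServer;Database=MyDatabase;Integrated Security=SSPI;Max Pool Size=100

Pooling disabled (every open/close pays the full connection cost):
  Server=MyServer;Database=MyDatabase;Integrated Security=SSPI;Pooling=False
```

Only connections whose strings match exactly share a pool, which is why the demonstration can compare the two behaviors side by side.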

Demonstration Overview
In this demonstration, the instructor will run an application containing prebuilt tests that show the
results of data access with and without each of these caching mechanisms. The demonstration will
explain the benefit of using these caching techniques to improve application performance.


Task 1: Using Database Connection Pooling

Task Overview
In this task, your instructor will demonstrate the benefits of database connection pooling by running
tests with and without connection pooling enabled.
To use database connection pooling, perform the following steps:

1. Start Microsoft Visual Studio® 2005.


2. Browse to and open the solution MOC2783M5L1.sln.
3. To start the demonstration application, press F5.
4. Select the Create connections without pooling option.
5. Click the Start Test button.
6. After the students have viewed the number of iterations per second, click the Cancel
button.
7. Select the Create connections with pooling option.
8. Click the Start Test button.
9. After the students have viewed the number of iterations per second, click the Cancel
button.

Task 2: Using In-Memory Data Caching

Task Overview
In this task, your instructor will demonstrate the benefits of in-memory data caching by running a
test that retrieves data from the database and a test that retrieves data directly from an in-memory
cache after the first request.
To use in-memory data caching, perform the following steps:

1. Select the Retrieve data without caching option.


2. Click the Start Test button.
3. After the students have viewed the number of iterations per second, click the Cancel
button.
4. Select the Retrieve data with in-memory cache option.
5. Click the Start Test button.
6. After the students have viewed the number of iterations per second, click the Cancel
button.


Task 3: Using Stored Procedure Parameter Caching

Task Overview
In this task, your instructor will show the benefits of caching parameter sets by using the parameter
caching mechanism of the Data Access Application Block (DAAB). Two tests will be run: one in which
the cache is bypassed and one in which the cache is used.
To use stored procedure parameter caching, perform the following steps:

1. Select the Retrieve data without caching option.


2. Click the Start Test button.
3. After the students have viewed the number of iterations per second, click the Cancel
button.
4. Select the Retrieve data with in-memory cache option.
5. Click the Start Test button.
6. After the students have viewed the number of iterations per second, click the Cancel
button.


Caching Data vs. Caching Objects


Introduction
Most developers are familiar with caching data in the application tier to reduce database roundtrips.
In some cases, caching business objects can also be beneficial. Data and object caching enable
reduction of resource bottlenecks, thereby improving application performance and scalability.
However, it is important to remember that when designing a caching mechanism, cached data or
objects can change. In these cases, appropriate action must be taken to ensure that data consumers
are also updated so that they do not receive the wrong information.
In this topic, you will learn about the benefits of data and object caching.

Benefits of Data Caching


Data can be cached in memory by using DataSets or other data structures. Following are the
benefits of data caching:
„ Reduces the load on the database server.
• By caching data in the application tier, you can avoid holding large quantities of data in the
buffer pool cache of the database.
• Freeing up the buffer pool cache means less recycling (higher page life expectancy) and
therefore less I/O operations on the database server.
„ Helps eliminate network bottlenecks.
However, you must ensure that there is enough memory on the application server to hold the cached
data.
Overcaching can result in many problems, such as:
„ Increased CPU activity due to garbage collection.
„ AppDomain restarts by the runtime’s memory leak detection algorithm (especially in Microsoft
ASP.NET).

Because excessive caching can be as harmful as insufficient caching, determining the appropriate level
of caching is an important task during the design and testing phases of a database application.
Cache only data that has a high probability of being reused or that requires significant system
resources to retrieve.
Be aware that caching excessive data can hurt response time, degrading the perceived performance
from the user's point of view.

Benefits of Object Caching


Creation of business objects can be resource intensive for the following reasons:
„ Creating objects might require creation of intermediate objects.
„ Intermediate objects take up memory and CPU time and will have to be garbage collected.
When you create a business object cache, you incur the expense of creating objects only once
instead of creating objects on every request. However, you must avoid caching objects that change
frequently, because managing the rules of object change resolution in a Microsoft .NET application
can be difficult. In the Microsoft .NET Framework, objects can be referenced by more than one
thread at a time. In caching scenarios, it is a common practice to pass references to objects instead
of sending complete copies of the objects. When an object changes, it is difficult to know whether
another thread is relying on the previous data. If the data changes, the thread might enter an
inconsistent state.


Discussion: Helping Application Developers Determine How Much Data to Cache


Introduction
When assisting application developers in designing caching strategies, it is important that database
developers thoroughly understand the problem domain. Application developers will often be
tempted to cache more data than necessary and might not fully consider the following issues:
„ Security of cached data
„ Resource utilization for holding the data in memory
„ Changes requiring conflict resolution when the data is modified

Database developers should be able to work with application developers to help them to understand
these issues.

Discussion Questions
1. How do you help application developers determine how much data to cache?

2. Do application developers need to cache large quantities of data locally?

3. Is it a good idea to pin tables in SQL Server memory?



4. How should you persist cached data?

5. How can an application determine whether cached data has changed since the last time it
was fetched?

6. What security implications can data caching cause?


Guidelines for Using Caching Techniques in Each Layer of a Database Application


Introduction
Database applications are generally split into layers to facilitate the encapsulation of the logic and
overall application scalability. Various caching techniques can be used at each layer to maximize
the performance of that layer’s functionality and overall application performance. Techniques that
are applicable to any given layer might not be applicable to another layer. For example, caching of
business objects might not apply outside the business layer and maintaining cached data by
serializing to disk might not be a good idea outside the user interface layer. Developers must have
adequate knowledge of the caching techniques to determine the appropriate caching option for each
layer and also to determine why those techniques can be useful in certain scenarios.

Applying Appropriate Caching Techniques


In this topic, you will learn about the various guidelines that you should follow when applying the
appropriate caching techniques to the appropriate layers.


Database Server
Guideline: Read only as much data as necessary when querying tables.
Reason: Reading unnecessary data into memory will force other important data out of the buffer cache.

Guideline: Monitor buffer cache usage by using the Page Life Expectancy performance counter.
Reason: This counter shows the average life span of a page in the buffer cache. The higher this number, the longer the data will stay in memory, and the fewer physical I/O operations will be necessary to retrieve the data.

Guideline: Avoid recompilation of stored procedures.
Reason: When stored procedures are compiled, the query plan is put into a cache called the procedure cache. Recompilation invalidates this cache and can be expensive.

Guideline: Denormalize data for queries involving complex joins or aggregations.
Reason: If you denormalize the data, especially for summary reports, fewer data pages will have to be read to satisfy the query. If fewer pages are read, fewer physical I/O operations will occur, resulting in better performance.

Guideline: Do not denormalize data that changes often.
Reason: If the data changes often, updating the denormalized table can make data modifications expensive. Queries do not return synchronized data when the denormalized table does not contain data that is consistent with the normalized version.
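The Page Life Expectancy counter mentioned above can also be read from inside SQL Server 2005 through the sys.dm_os_performance_counters dynamic management view. The object name varies with the installation (for a named instance it takes the form MSSQL$InstanceName:Buffer Manager), so the query below is a sketch:

```sql
-- Read Page life expectancy (in seconds) from the performance-counter DMV.
SELECT [object_name], counter_name, cntr_value AS page_life_expectancy_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';
```

A sustained low value indicates that pages are being recycled quickly and that queries may be reading more data than the buffer cache can comfortably hold.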

Application Layer
Guideline: Use Microsoft ADO.NET connection pooling to reduce the cost of opening and closing database connections.
Reason: ADO.NET connection pooling, which is enabled by default, reuses database connections instead of opening and closing these connections for each database request. This can significantly improve efficiency. Note that only connections with the same parameters (including security context) can be pooled.

Guideline: Avoid caching sensitive data in shared memory.
Reason: Data in shared memory areas might be accessed by consumers who do not have appropriate credentials. You must consider this security risk when designing a caching strategy.

Guideline: Be careful when caching data that can be updated in the database by other applications or processes.
Reason: If another process updates data that is cached, the data in the cache will no longer be valid. You might need to implement custom conflict resolution logic, which can be complex and in itself cause performance degradation.

Guideline: Create a business object cache to reduce resource utilization.
Reason: Some business objects are very expensive to create and destroy. If these objects can be reused or shared, this cost can be decreased.


Web Services
Guideline: Cache intermediate results to reduce calls to the application tier.
Reason: Reducing the workload of the Web service will result in faster response. You can cache partial results to reduce calls to the application tier.

Guideline: Be aware of data that can change between requests.
Reason: If data changes at the database or application level, the Web service's cache will be invalid.

Front-End Applications
Guideline: Cache data in memory for same-session reuse, such as reformatting or sorting.
Reason: Data that can be read by a client in many different formats does not need to be re-retrieved from the application tier. Caching that data will reduce network traffic and conserve server resources.

Guideline: Cache dynamic user interface components in memory to save application and database roundtrips.
Reason: Interface components, which are often stored in the database, can be cached in memory instead of being re-retrieved each time the user selects a new option.

Guideline: In Microsoft Windows® or on a Web server hosting ASP.NET applications, cache objects to disk if they are expensive to create and their data does not change often.
Reason: Objects and interface components can be cached to disk so that when the application starts, it will not need to extract all of the data from the database.

Guideline: Use timestamps or checksums to update objects cached to disk when the application starts.
Reason: When data is cached to disk, the application can request changes from the database based on a timestamp or checksum, instead of re-retrieving the entire set of data.

Important The most important consideration when working with a caching scheme is to
determine what to do in the case of outdated data. If you do not consider how to update the
cache when the data changes, applications might return inconsistent and inaccurate results.
Although performance is important, it is more important that applications deliver correct
information to users. By carefully balancing caching techniques in different tiers of an
application, developers can improve performance, thereby creating a positive user experience.


Section 2: Data and Query Caching in SQL Server 2005


Introduction
SQL Server 2005 uses complex data caching mechanisms to deliver the best performance possible.
Although SQL Server can automatically manage the cache, database developers can write code that
will help maximize the performance of the server. It is therefore important for database developers
to understand how the memory pools of SQL Server operate and to learn to work with the internal
caching mechanisms of SQL Server.
In this section, you will learn about the advantages of using the data caching and query caching that
are automatically performed by SQL Server 2005.

Section Objectives
„ Explain how data caching works in SQL Server 2005.
„ Explain how SQL Server 2005 caches queries and objects.
„ Explain the guidelines for maximizing cache utilization.
„ Explain how to monitor cache utilization.


Multimedia: How Data Caching Works in SQL Server 2005


Introduction
SQL Server manages a single piece of data in multiple places at any point in time. Therefore,
multiple copies of the data item must be synchronized to avoid data consistency problems. SQL
Server manages this synchronization internally, but the synchronization can be complex. Database
developers must understand how these processes work so that applications can be written to make
the best use of the caching mechanisms that SQL Server provides.

Discussion Questions
1. How many copies of data does SQL Server keep at any point in time?

2. Does the number of copies change when row versioning is used?

3. Does the number of copies change when Multiple Active Result Sets (MARS) are used?


4. Does the number of copies change when triggers are run?

5. Is there any difference in the way SQL Server caches transaction log records?

6. When are committed transactions hardened to disk?

7. When are transactions that are not committed hardened to disk?


Demonstration: How SQL Server 2005 Caches Queries and Objects


Introduction
The main cache memory pools of SQL Server 2005 are the procedure cache and the buffer cache.
Both caches support developer interaction to some extent; developers can view the contents of the
caches and also clear them. Database developers must understand how these memory pools work
and know how to control them when tracking down performance problems.
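The demonstration script itself is not reproduced in the workbook; a comparable inspection of the procedure cache, using documented SQL Server 2005 dynamic management views, might look like this:

```sql
-- List cached plans with their reuse counts and associated statement text.
SELECT cp.usecounts, cp.cacheobjtype, cp.objtype, st.[text]
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;

-- Clear the procedure cache (suitable for test servers only; every plan
-- will have to be recompiled afterward).
DBCC FREEPROCCACHE;
```

Running the first query before and after DBCC FREEPROCCACHE makes the effect of clearing the cache visible, which is the pattern the tasks below follow.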

Demonstration Overview
In this demonstration, the instructor will explain how SQL Server 2005 caches queries and objects.

Task 1: Displaying the Contents of the Procedure Cache and Then Clearing the Cache

Task Overview
In this task, your instructor will use dynamic management views to display the contents of the
procedure cache.
To display the contents of the procedure cache and then clear the cache, perform the following
steps:

1. Open SQL Server Management Studio.


2. Browse to and open the script file MOC2783M5L2_2.sql.
3. Select lines 1 through 16 of the script, and then press F5.
4. Select lines 21 through 28 of the script, and then press F5.

5. Select lines 31 and 32 of the script, and then press F5.

6. Select lines 21 through 28 of the script, and then press F5.

Task 2: Displaying the Contents of the Buffer Cache and Then Clearing
the Cache

Task Overview
In this task, your instructor will use dynamic management views to show the contents of the buffer
cache.
To display the contents of the buffer cache and then clear the cache, perform the following steps:

1. Select lines 15 and 16 of the script, and then press F5.


2. Select lines 37 through 46 of the script, and then press F5.
3. Select lines 49 and 50 of the script, and then press F5.
4. Select lines 37 to 46 of the script, and then press F5.
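A comparable inspection of the buffer cache, based on the documented sys.dm_os_buffer_descriptors view (the script file itself is not reproduced here), could be sketched as follows:

```sql
-- Count buffer pool pages held per database (one page = 8 KB).
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;

-- Remove clean pages from the buffer cache (test servers only).
DBCC DROPCLEANBUFFERS;
```

As with the procedure cache, querying before and after DBCC DROPCLEANBUFFERS shows how quickly subsequent queries repopulate the buffer pool.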


Guidelines for Maximizing Cache Utilization


Introduction
Although SQL Server manages its cache resources automatically, there are many ways in which a
database developer can help maximize their usage. By following specific guidelines, database
developers can ensure that:
„ Cached query plans will be reused.
„ Cached data will stay in the buffer pool for as long as possible.

The most important benefits of SQL Server cache reuse are that query plan recompiles are avoided
and large amounts of data are not read unnecessarily. Reading excessive data might invalidate other
parts of the buffer cache. By carefully monitoring and working to maximize cache reuse,
developers can create applications that perform well and are scalable.


Guidelines for Maximizing Cache Utilization


Guideline: Read only rows that need to be processed.
Reason: When reading more rows than required, some unnecessary rows might be read from other data pages. Reading excessive data pages into memory will force other data out of the buffer cache.

Guideline: Avoid reading too many columns unnecessarily, especially in shared stored procedures and views.
Reason: This guideline specifically applies to large, variable-width column types such as VARCHAR and TEXT. The data for these columns might reside on data pages that are separate from the rest of the data. Unnecessarily reading this data will force SQL Server to load those data pages into memory.

Guideline: Avoid reading large amounts of binary large object (BLOB) data in SQL Server instances that host online transaction processing (OLTP) applications.
Reason: The SQL Server buffer pool is shared among all the data at any point in time. Therefore, when large amounts of data are read, the cache from an OLTP application can quickly be pushed out of memory.

Guideline: Avoid using Extensible Markup Language (XML) in situations where relational data is more appropriate.
Reason: Reading a single scalar value from an XML column can be expensive because the entire column will have to be loaded into memory first. Shredding XML into relational structures eliminates this overhead.

Guideline: Avoid using nonparameterized, ad hoc, and dynamic SQL.
Reason: Nonparameterized SQL might not be able to participate in query plan reuse and might produce many new query plans, each of which will require memory.

Guideline: Monitor for and try to avoid procedure recompilation.
Reason: Procedure recompilation can be expensive and will block the execution of the procedure.

Guideline: Install multiple SQL Server instances to adjust shared resources.
Reason: Multiple instances can be used to isolate memory, keep certain processes from consuming more memory than they should, and ensure that those processes will not destroy the cache reutilization potential of the data that is already in the cache.
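The nonparameterized-SQL guideline can be illustrated with sp_executesql, which allows textually identical statements to share a single cached plan regardless of the parameter values supplied. The table and column names below are hypothetical:

```sql
-- Ad hoc form: each distinct literal tends to produce its own cached plan.
-- SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = 42;

-- Parameterized form: one plan is cached and reused for every value of @cust.
EXEC sp_executesql
    N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @cust',
    N'@cust int',
    @cust = 42;
```

The SP:CacheHit and SP:CacheMiss Profiler events used in the next topic make this difference directly observable.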


Demonstration: Monitoring Cache Utilization


Introduction
SQL Server 2005 provides many tools (such as SQL Server Profiler and Performance Monitor) that
database developers can use to determine how the Database Engine is working.
Using Performance Monitor, developers can view the state of the various counters that are exposed
by SQL Server, including the counters that help to determine the state of the SQL Server memory
caches.
Cache events are also exposed in SQL Server Profiler. You can run traces to determine when
cached objects are reused and when objects are added to or removed from the cache.

Demonstration Overview
In this demonstration, the instructor will explain the use of two SQL Server 2005 tools: SQL Server
Profiler and Performance Monitor.

Task 1: Using SQL Server Profiler to Show Query Cache Utilization

Task Overview
In this task, your instructor will use SQL Server Profiler to show how the procedure cache is used
by the SQL Server query optimizer.
To use SQL Server Profiler to show query cache utilization, perform the following steps:

1. Switch to SQL Server Management Studio.


2. Browse to and open the MOC2783M5L2_4.sql file.
3. Select lines 1 through 20 of the script, and then press F5.

4. Open SQL Server Profiler.


5. To start a new trace, press CTRL+N.
6. Connect to the MIA_SQL\SQLINST1 instance of SQL Server.
7. On the Events Selection tab, select the Show all events check box.
8. In addition to the default selections, in the Stored Procedures section, select
SP:CacheHit, SP:CacheInsert, and SP:CacheMiss.
9. To start the trace, click Run.
10. Switch back to SQL Server Management Studio.
11. Select lines 24 through 27 of the script, and then press F5.
12. Switch back to SQL Server Profiler, and show the students that two CacheInsert events
have been fired.
13. Switch back to SQL Server Management Studio.
14. Select lines 30 through 33 of the script, and then press F5.
15. Switch back to SQL Server Profiler, and show the students that one CacheHit and one
CacheInsert event have been fired.
16. Switch back to SQL Server Management Studio.
17. Select lines 37 and 38 of the script, and then press F5.
18. Switch back to SQL Server Profiler, and show the students that one CacheMiss event and
one CacheInsert event have been fired.
19. Switch back to SQL Server Management Studio.
20. Select lines 41 and 42 of the script, and then press F5.
21. Switch back to SQL Server Profiler, and show the students that one CacheHit event has
been fired.
22. Close SQL Server Profiler.

Task 2: Using Performance Monitor to Monitor Cache Utilization

Task Overview
In this task, your instructor will use Performance Monitor to show buffer cache activity.
To use Performance Monitor to monitor cache utilization, perform the following steps:

1. In SQL Server Management Studio, select lines 47 and 48 of the script, and then press F5.
2. Open Performance Monitor.
3. Right-click the graph area, and then click Add Counters.
4. In the Performance object list, select MSSQL$SQLINST1:Buffer Manager.
5. In the Select counters from list, select Buffer cache hit ratio, and then click the
Add button.

6. Select the counter Database pages, and then click the Add button.
7. Click Close.
8. Show the class the initial state of the performance counters in the Performance Monitor
graph.
9. Switch back to SQL Server Management Studio.
10. Select lines 51 and 52 of the script, and then press F5.
11. Switch back to Performance Monitor, and show the students the new state of the
performance counters in the graph.
12. Close Performance Monitor.

Task 3: Using Dynamic Management Views to Show Memory Used by Cached Query Plans

Task Overview
In this task, your instructor will use Dynamic Management Views to show the memory consumed
by cached plans.
To use Dynamic Management Views to show memory used by cached query plans, perform the
following step:
„ In SQL Server Management Studio, select lines 57 through 73 of the script, and then press F5.
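The demonstration script itself is not reproduced in this workbook, but a query of this kind can be sketched as follows. The grouping and columns chosen are illustrative, not the contents of the script file:

```sql
-- Illustrative only: summarize plan cache memory by object type.
-- sys.dm_exec_cached_plans is the SQL Server 2005 DMV for cached plans.
SELECT objtype,
       COUNT(*)                                  AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 AS total_size_kb,
       SUM(usecounts)                            AS total_use_count
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY total_size_kb DESC;
```

Comparing usecounts against plan_count gives a quick sense of how well cached plans are being reused.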


Section 3: Using Caching Technologies Outside of SQL Server


Section Overview
Although SQL Server has various internal caches that can provide higher performance for data
retrieval, other types of external caches should also be considered to improve the overall application
performance. For example:
„ The Data Access Application Block (DAAB) provides parameter set caching.
„ The Caching Application Block provides object caching.

By using query notifications, a new feature in ADO.NET 2.0, the application can easily detect
data changes that occur in the database.
In this section, you will learn how to cache data outside SQL Server and how to manage the data
modification conflicts that might occur.

Section Objectives
„ Explain how ADO.NET caching works.
„ Explain how the DAAB caches objects.
„ Explain how the Caching Application Block works.
„ Explain the process of query notification.
„ Explain how conflicts can occur when cached data is used.


Demonstration: How ADO.NET Caching Works


Introduction
ADO.NET is a cache-friendly database connection library that is designed for data reuse and
database disconnection. ADO.NET uses pooling technology to avoid repeatedly creating and
disposing of database connections during times when the database application is busy.
The ADO.NET DataSet class is the key to the ADO.NET disconnected architecture. DataSet
objects retrieve the requested data from the database by using an intermediate DataAdapter object
and are then disconnected from the database. DataSets do not require an active database connection
to maintain a set of data in memory. Database connections are opened only to update data that
changes in the DataSet.
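As a sketch of this pattern (not the code in the demonstration project; the connection string and the AdventureWorks table are assumptions), filling a DataSet and then reading it with no open connection looks like this:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

class DisconnectedExample
{
    static void Main()
    {
        // SqlDataAdapter.Fill opens and closes the connection itself, so the
        // application holds no connection while it works with the DataSet.
        string connectionString = "Data Source=MIA_SQL\\SQLINST1;" +
            "Initial Catalog=AdventureWorks;Integrated Security=SSPI";
        DataSet categories = new DataSet();
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT ProductCategoryID, Name FROM Production.ProductCategory",
            connectionString))
        {
            adapter.Fill(categories, "ProductCategory");
        }

        // The DataSet now acts as an in-memory cache; no connection is open.
        foreach (DataRow row in categories.Tables["ProductCategory"].Rows)
        {
            Console.WriteLine(row["Name"]);
        }
    }
}
```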

Demonstration Overview
In this demonstration, the instructor will illustrate how ADO.NET provides reuse and disconnection
functionality while working with database connections. This demonstration also illustrates the use of
pooling technology to avoid having to continually create and dispose of database connections in busy
database application scenarios.
It also shows that the ADO.NET DataSet class is the key to the disconnected architecture.


Task 1: Configuring ADO.NET Connection Pooling

Task Overview
In this task, your instructor will change the ADO.NET connection string to configure the
connection pool.
To configure ADO.NET connection pooling, perform the following steps:

1. Open Visual Studio 2005.


2. Browse to and open the MOC2783M5L3.sln solution.
3. In Solution Explorer, expand the MOC2783M5L3_1 project.
4. View the code for the DataImplementation.cs file.
5. Scroll down to the ConnectionString property in line 18.
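Pooling is controlled entirely through connection string keywords. The values below are illustrative, not the settings used in the demonstration project:

```csharp
// Pooling=true is the default; Min/Max Pool Size bound the pool size, and
// Connection Lifetime (in seconds) retires old connections (0 = never).
string connectionString =
    "Data Source=MIA_SQL\\SQLINST1;Initial Catalog=AdventureWorks;" +
    "Integrated Security=SSPI;" +
    "Pooling=true;Min Pool Size=5;Max Pool Size=50;Connection Lifetime=0";
```

Connections are pooled per distinct connection string, so keeping the string identical across calls is what makes reuse possible.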

Task 2: Using ADO.NET DataSets to Cache Data

Task Overview
In this task, your instructor will show that the ADO.NET DataSet object is completely
disconnected from the database and that it is a data cache.
To use ADO.NET DataSets to cache data, perform the following steps:

1. In DataImplementation.cs, view the code for the GetProductCategories method in lines


36 through 54.
2. Close DataImplementation.cs.


Demonstration: How the Data Access Application Block Caches Objects


Introduction
The DAAB can cache stored procedure parameter sets. The application block can query the
database and determine the parameters for a stored procedure. The application block then builds a
set of parameters that are cached at the application layer and are reusable for subsequent requests.
This caching improves the performance of the system, because determining the parameters for a
stored procedure and creating the set of parameter objects can be resource intensive.
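A minimal sketch of this pattern with the DAAB SqlHelperParameterCache class (the procedure name is an assumption; the cached set is derived from the database on the first call only):

```csharp
using System.Data.SqlClient;
using Microsoft.ApplicationBlocks.Data;

class ParameterCacheExample
{
    static SqlParameter[] GetCachedParameters(string connectionString)
    {
        // First call: the DAAB interrogates the database to derive the
        // parameter set and caches it. Later calls for the same procedure
        // are served from the in-memory cache instead of the database.
        return SqlHelperParameterCache.GetSpParameterSet(
            connectionString, "dbo.uspGetProductsByCategory");
    }
}
```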

Demonstration Overview
In this demonstration, your instructor will illustrate the capability of the DAAB to cache stored
procedure parameter sets. The application block can interrogate the database and determine the
parameters for a stored procedure and then build a set of parameters that are cached at the application
tier and are reusable for subsequent requests. This caching can be a huge performance improvement,
because determining the parameters for a stored procedure and creating the set of parameter objects
can be expensive.


Task 1: Using the Data Access Application Block to Cache Parameter Sets

Task Overview
In this task, your instructor will demonstrate how the DAAB can be used to cache parameter sets.
To use the DAAB to cache parameter sets, perform the following steps:

1. In Solution Explorer, expand the MOC2783M5L3_2 project.


2. View the code for the DataImplementation.cs file.
3. View the code for the ParameterCache method in lines 33 through 40.
4. Close DataImplementation.cs.


Demonstration: How the Caching Application Block Works


Introduction
The Caching Application Block provides a standard solution for applications that require object
caching capabilities. Objects are cached by using a key-value pair with a string as the key.
Any type of object can be cached by using the application block, but it is best to cache objects that
do not change often. The Caching Application Block passes back a reference to the cached object
instead of creating a deep copy. Therefore, multiple threads can hold a reference to the same cached
object at the same time.
Consider a situation in which one of the threads changes the object. The changes will be visible to
every thread, but any of the threads might be relying on the previous value of the object. Such
situations are difficult to resolve in shared cache situations, and so it is better to avoid caching
objects that change often.
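A minimal sketch of the pattern, assuming a cache manager has already been configured in the application configuration file (the key and cached object are illustrative):

```csharp
using Microsoft.Practices.EnterpriseLibrary.Caching;

class CacheBlockExample
{
    static void CacheCategories(object categories)
    {
        // GetCacheManager returns the default cache manager from config.
        ICacheManager cache = CacheFactory.GetCacheManager();
        cache.Add("ProductCategories", categories);

        // GetData returns a reference to the cached object, not a copy, so
        // every caller shares (and can mutate) the same instance.
        object shared = cache.GetData("ProductCategories");
    }
}
```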

Demonstration Overview
In this demonstration, your instructor will illustrate a standard solution that the Caching Application
Block provides for applications that require object caching capabilities. The Caching Application
Block passes back a reference to the cached object, rather than a deep copy. Many threads can
therefore hold a reference to the same cached object at the same time. If one of these threads changes
the object, the changes will be visible to every thread, but one of the threads might rely on the
previous value of the object. Scenarios such as these are difficult to resolve in shared cache situations,
and it is better to avoid them if possible.


Task 1: Using the Caching Application Block

Task Overview
In this task, your instructor will demonstrate how to use the Caching Application Block to cache
objects.
To use the Caching Application Block to cache objects, perform the following steps:

1. In Solution Explorer, expand the MOC2783M5L3_3 project.


2. View the code for the DataImplementation.cs file.
3. View the code for the RetrieveDataMemoryCache method in lines 46 through 76.
4. Close DataImplementation.cs.

Discussion Questions
1. Do you need to repeatedly access static data or data that rarely changes?

2. Do you have data access availability issues?

3. Did you implement your own caching strategy?

4. Do you think that using a standard application block could provide any benefits? Why?

5. Would you like to customize this application block? How?


Multimedia: The Process of Query Notification


Introduction
The ADO.NET DataSet class is an ideal data caching container. It requires no active connection to
the database and operates in a fully disconnected mode. However, because the DataSet class is
disconnected, it is not aware of the changes that are made to the underlying data in the database.
Query notification is a new feature in ADO.NET 2.0 that can be used to solve this problem. This
feature uses a SQL Server Service Broker queue to wait for changes to data. When data is changed,
an event is immediately raised and the application is notified about the situation. The application
will have to be programmed to resolve the conflict. It is easier to detect data changes by using query
notifications than by using other techniques, such as continually polling the database for changes or
implementing custom triggers. Using query notifications requires minimal programming by the
developer and less work by the system, because this feature is designed to conserve system
resources.
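A minimal sketch of the pattern with the ADO.NET 2.0 SqlDependency class (the query text and connection string are assumptions; the SELECT must meet the query notification restrictions, such as two-part table names and an explicit column list):

```csharp
using System;
using System.Data.SqlClient;

class NotificationExample
{
    static void Subscribe(string connectionString)
    {
        // Starts the listener; behind the scenes this uses a SQL Server
        // Service Broker queue in the target database.
        SqlDependency.Start(connectionString);

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(
            "SELECT ProductCategoryID, Name FROM Production.ProductCategory",
            connection))
        {
            SqlDependency dependency = new SqlDependency(command);
            dependency.OnChange += delegate(object sender, SqlNotificationEventArgs e)
            {
                // Fired once when the result set changes; refresh the local
                // cache and resubscribe here.
                Console.WriteLine("Data changed: " + e.Info);
            };

            connection.Open();
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read()) { /* populate the local cache */ }
            }
        }
    }
}
```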

Discussion Questions
1. Do you think your applications could benefit from the query notification feature?

2. Would you expect a reduction in network traffic by using this feature or an increase?


3. How do you plan to deal with query notifications that represent changes to data being
edited in the client application?


Demonstration: How Cached Data Can Cause Conflicts


Introduction
When caching any kind of data, one of the main considerations is what to do when the data changes
outside the cache. It is important to consider the following:
„ How will the cache be notified about the changes to the data?
„ How will the cache retrieve the new data?
„ Should other levels of the application, or users, be notified about the changes?

When designing caching strategies, developers must keep in mind that the underlying data might
change at some point in time. This change can cause conflicts if the data in the cache is not the
same as the data in the database. Based on your specific business requirements, appropriate action
to resolve these conflicts must be initiated. Resolution can mean updating the cache, notifying the
user that there is a change, or in some cases, taking no action.

Demonstration Overview
In this demonstration, your instructor will illustrate that when caching any kind of data, the main
consideration is what to do when the data changes outside the cache. It is important to consider how
the cache will be notified that the data has changed, how the cache will retrieve the new data, and
whether other levels of the application, or users, should be notified of the change.


Task 1: Identifying Cached-Data Conflicts

Task Overview
In this task, your instructor will show how to identify cached-data conflicts by using query
notifications.
To identify cached-data conflicts, perform the following steps:

1. In Solution Explorer, right-click the MOC2783M5L3_1 project, and then click Set as
StartUp Project.
2. To start the application, press F5.
3. Click the Start Test button.
4. Open SQL Server Management Studio.
5. Browse to and open the MOC2783M5L3.sql file.
6. Run the script.
7. Switch back to the running application, and view the message box.

Discussion Questions
1. How do you plan to detect conflicts?

2. If changes are applied to columns other than the column you edited in the client
application, would you consider this a conflict?

3. If the same column is edited on the server and the client, would you consider this a
conflict?

4. Would you implement a library of some default conflict resolution rules?


Section 4: Custom Caching Techniques


Introduction
You can use different caching techniques in various layers. In the data layer, denormalization can
improve the performance of complex queries. In the user interface layer, data can be cached in
memory or serialized to disk to reduce network and database traffic when requesting dynamic
options.
Each of these caching techniques can help in certain scenarios. However, database developers must
ensure that cached data is consistent with source data. Caching mechanisms should be clearly
defined and implemented during the initial development process.
This section focuses on the various ways to cache frequently used data and objects in the
appropriate level to improve overall application performance.

Section Objectives
„ Identify alternative techniques for caching data and objects, and describe scenarios in which each
technique is appropriate.
„ Explain the best practices for using denormalization to cache frequently used data.
„ Explain the best practices for managing dynamic user interfaces efficiently by caching data in the
client.
„ Explain the best practices for serializing objects to improve the performance of client applications.
„ Explain how to use custom caching strategies on the server and the client.


Discussion: Alternative Techniques for Caching Data and Objects


Introduction
Caching is a broad topic and depends on the context of specific development challenges. There are
no concrete answers to the questions of database developers about caching, but various general
guidelines can be followed to maximize the return on investment and reduce the time taken to build
caching solutions.
Apart from the system and standard caching techniques, database developers can use alternative
caching techniques, such as denormalization or building a distributed hierarchical cache structure.
Database developers should have a good understanding of the various types of caching techniques
so that they can choose the appropriate technique for each scenario.

Discussion Questions
1. Why should you consider data caching?

2. Are there ways to cache data and objects other than what has been discussed in this course?

3. Is the default system-supplied caching technique the best option in all scenarios?


4. Have you ever denormalized data to improve performance? If so, to which level?

5. Do you keep application strings and resources cached locally in the client application?

6. Have you ever thought about a distributed hierarchical cache structure?


Scenario 1: Denormalization as a Caching Technique


Introduction
Denormalizing (or flattening) database tables can significantly improve the performance of large,
complex queries—especially those with many joins or complex aggregations. This technique can be
useful when queries need to read many data pages to produce a few pages of output. For example,
denormalization can be used in reporting systems in which data does not need to be updated in real
time.
Mixed OLTP and Decision Support systems are a common environment for denormalization
techniques. However, when using denormalization, developers must consider the anomalies that
might occur during data updates. The developer must ensure that when data changes in the source
tables, the denormalized tables return data that is consistent with the change. Failure to do so can
result in situations such as applications returning summary data on some screens that does not
properly correspond to the detailed data on other screens.

Scenario
When you develop a complex reporting application, queries commonly use many tables and require
a variety of aggregations. In some situations, you cannot improve the performance of the
application to fulfill the user requirements even after creating many indexes and fine-tuning the
queries. In such cases, denormalization might improve the performance of the application to fulfill
the user requirements.
After you have tried all of the standard reporting techniques—including hardware tuning, index
tuning, and query tuning—this scenario becomes applicable. Because this is a reporting application,
it can be assumed that the data does not change often and that the old data in the cache will not be a
problem. Remember that denormalized tables should be used in addition to normalized tables, not
instead of them. Normalized tables should still be used as the source of the data for the
denormalized tables to guarantee data integrity and correctness. In this situation, because the data
will not change, it might be safe to assume that the cost of updating the denormalized tables will
not be prohibitive. For that reason, denormalization can be a good solution.

You can denormalize complex queries by using a table that will be updated during a data load
process or by using an indexed view. This reduces the work when returning the data, thereby
improving the performance of the application and the query time.

Techniques for Keeping Data Up to Date


„ You can use a “last modified date” column or a checksum to ensure that data in denormalized
tables is up to date with the source tables.
„ If a small delay between changes in the source tables and updates to the denormalized tables is
acceptable, as in reporting systems, you can identify the delta rows to apply during
denormalization by checking whether the checksum value has changed.
„ When denormalizing by using tables, you can use a trigger on the source table to update the
denormalized table appropriately.
„ The indexed view feature in SQL Server allows developers to create denormalized tables that will
automatically be kept up to date when data in source tables is modified. When using this feature,
keep in mind the following points:
• Keeping indexed views updated can severely degrade the performance of inserts and updates.
• Indexed views cannot be used for queries that make use of outer joins or subqueries.
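The indexed view technique can be sketched as follows. The object names are assumptions based on the AdventureWorks sample database, and creating the index requires the standard SET options (for example, QUOTED_IDENTIFIER and ANSI_NULLS ON):

```sql
-- Denormalize an aggregation into an indexed view. SCHEMABINDING and
-- COUNT_BIG(*) are required before the view can be indexed.
CREATE VIEW Sales.vCustomerOrderTotals
WITH SCHEMABINDING
AS
SELECT soh.CustomerID,
       COUNT_BIG(*)      AS OrderLineCount,
       SUM(sod.OrderQty) AS TotalQuantity
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON sod.SalesOrderID = soh.SalesOrderID
GROUP BY soh.CustomerID;
GO

-- Materializing the view: from this point, SQL Server maintains the stored
-- result set automatically as the source tables change.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrderTotals
    ON Sales.vCustomerOrderTotals (CustomerID);
```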

Best Practices
„ Before considering denormalization, you should try other techniques such as index tuning,
partitioning, and client-side caching. You must also ensure proper configuration of the hardware.
„ Use SQL Server Profiler or SET STATISTICS IO to determine the number of pages that
SQL Server reads for a given query.
„ Use denormalization when data does not need to be updated often. When data is updated, it will
have to be maintained in both the source tables and the denormalized tables, which might result in
slow inserts and updates.
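For example, the page counts for a candidate query can be compared before and after denormalizing (the query shown is illustrative, based on AdventureWorks):

```sql
-- Reports logical and physical page reads per table in the Messages tab.
SET STATISTICS IO ON;

SELECT soh.CustomerID, SUM(sod.OrderQty) AS TotalQuantity
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON sod.SalesOrderID = soh.SalesOrderID
GROUP BY soh.CustomerID;

SET STATISTICS IO OFF;
```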


Scenario 2: Managing Dynamic User Interfaces Efficiently by Caching Data in the Client


Introduction
Some applications use the database as a store for dynamic user interface elements, returning to the
database repeatedly as users navigate between various options.

Scenario
Consider an application that organizes products by category. Whenever a user selects a valid
category from a list, the application sends a request to the database and displays the appropriate
products in another list. In such situations, user interface elements can be cached to reduce database
server and network traffic and increase the responsiveness of the user interface.
In this example, if a user selects a product category, the products in that category can be cached in
memory so that if the user reselects the same category later, a database query will be unnecessary.
Although this seems like a small improvement when you consider a single user or session, the
amount of network and database server resources that are saved can be large when you consider a
large number of concurrent users.


When to Cache User Interface Elements


Many applications maintain user interface options, prompts, and captions in the database. As users
navigate the user interface, other options or captions might need to be displayed.
„ Following are some application scenarios in which caching user interface elements can be useful:
• Selecting one option modifies another option.
• Users flip rapidly back and forth between screens.
• Options are stored in the database but do not change often in real time.

„ Developers should determine whether changes to the data must be reflected immediately in the
user interface.
• If options need to be updated in real time as they change in the database, caching is not
appropriate.
• If options must be validated only at application startup time and retrieving the options is not
expensive, an in-memory caching scheme might be appropriate.
• If retrieval of options is expensive, serialization of options to a local disk might be a good
option. This is covered in detail in the next section.
• Checksums or timestamps can be used to query the database for updates as they occur.

Techniques for Caching User Interface Elements


„ Elements can be precached or cached on first request.
• Precaching involves retrieving and caching all possible user interface elements at application
startup. This technique is best used when there are many small sets of options. Retrieving all of
these options as a single large set can save resources.
• Caching on first request involves caching elements as they are needed, for later reuse if the
application enters the same state again. This technique is best used for applications in which
users revisit similar screens or options repeatedly.

„ Elements can be cached in memory by using hash tables or other associative container objects,
using parent options as keys.
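A minimal sketch of the cache-on-first-request technique, using a dictionary keyed by the parent option (the loader method is a stand-in for whatever data access call the application really makes):

```csharp
using System.Collections.Generic;

class ProductOptionCache
{
    private readonly Dictionary<int, List<string>> cache =
        new Dictionary<int, List<string>>();

    public List<string> GetProducts(int categoryId)
    {
        List<string> products;
        if (!cache.TryGetValue(categoryId, out products))
        {
            // First request for this category: hit the database and remember
            // the result. Later requests for it are served from memory.
            products = LoadProductsFromDatabase(categoryId);
            cache[categoryId] = products;
        }
        return products;
    }

    private List<string> LoadProductsFromDatabase(int categoryId)
    {
        // Placeholder for the real query (for example, a SqlDataAdapter fill).
        return new List<string>();
    }
}
```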


Scenario 3: Serializing Objects to Improve Performance of Client Applications


Introduction
Although in-memory caching of user interface elements can improve the performance of the
system, for some applications, an in-memory cache might not provide sufficient benefits. When the
number of user interface elements is large or when those interface elements are complex,
serialization and local on-disk caching might be appropriate so that the client does not have to re-
create the objects when needed.
To determine whether serialization techniques should be applied to an application, developers
should consider factors such as the amount of network traffic needed to retrieve a full set of user
interface elements and the amount of time required to create objects for the user interface elements.
These factors should be balanced against the amount of client-side disk space that local caching will
consume and the amount of time required for deserializing objects from disk.
The on-disk caching technique should include a mechanism for refreshing the cached data.
Generally, this refresh occurs when the application starts and involves a check against a checksum
or timestamp column in the database. If the stored values differ, the cached objects can be
selectively updated on the client.
The common language runtime (CLR) user-defined type feature in SQL Server 2005 can be used as
a serialized object store, but this might not be a good way of implementing this caching technique
because it might incur performance penalties when making network requests to the database server.
Other performance problems might occur due to increasing database server resource contention.


Scenario
When working with an application that has an especially dynamic user interface, you might find
that many user interface elements are stored in the database—for example, strings that describe
options or that are used to populate drop-down menus. A good example of this would be an
application that deals with geographical information. The user interface might have a drop-down
menu that enables the user to select a country, which upon selection populates a drop-down menu
for states/provinces within the selected country. Upon selecting an area from the second drop-down
list, the user might be able to select a city from a third drop-down list, and so on.
When testing the application, you might discover that when the application requests these values
every time they are required, the performance of the system is adversely impacted. Instead of
querying the database each time, strings and other client resources can be locally serialized and
stored on disk. This will allow the application to start without querying the database and causing a
performance problem.

When to Cache Objects Locally


Developers should consider various factors when deciding whether to cache objects locally. These
considerations include:
„ How resource intensive is retrieval of the data for the object?
• If retrieval is expensive, the object is a good candidate for local caching.

„ How expensive is creation of the object?


• If creating this object uses a large amount of CPU time and deserialization from disk will use
less, the object should be considered for local caching.

„ Will serialization and deserialization be faster?


• It is important to test and ensure that deserializing objects from disk will not be slower than
creating them.

„ How often will the object need to change?


• As data changes, objects might need to be updated.

„ Will updating data be more expensive than retrieving fresh data?


„ Is creating a mechanism for data updates manageable?

Techniques for Maintaining Local Objects over Time


„ User interfaces and other client-side objects can be serialized to disk from the in-memory cache
when the application shuts down, to serialize any changes retrieved during application run time.
„ Serialized objects should maintain a timestamp or checksum, and an equivalent timestamp or
checksum should be present in the database.
„ At application startup, the application can request any changed data, based on the timestamp or
checksum, and update the in-memory cache appropriately.
„ You should keep in mind that as applications are updated, object definitions might also change and
older versions of serialized objects might fail to load. You must make appropriate updates to
prevent user interface errors.
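A minimal sketch of the serialize-at-shutdown, deserialize-at-startup technique with BinaryFormatter (the file name and cached type are assumptions; the cached type must support binary serialization):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

class OptionSerializer
{
    const string CachePath = "options.cache";

    // Called at application shutdown to persist the in-memory cache.
    public static void Save(Dictionary<int, List<string>> options)
    {
        using (FileStream stream = File.Create(CachePath))
        {
            new BinaryFormatter().Serialize(stream, options);
        }
    }

    // Called at startup; deserializing from disk is often cheaper than
    // rebuilding the objects from database queries.
    public static Dictionary<int, List<string>> Load()
    {
        using (FileStream stream = File.OpenRead(CachePath))
        {
            return (Dictionary<int, List<string>>)
                new BinaryFormatter().Deserialize(stream);
        }
    }
}
```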


Demonstration: Custom Caching Strategies on the Server and Client


Introduction
This demonstration will show some of the benefits of the caching techniques discussed in this
section. The demonstration will show denormalization by using an indexed view, in-memory
caching of user interface elements, and on-disk caching of user interface elements.
Each of these methods can be used in various scenarios to improve application performance and
scalability. Indexed views are useful in database scenarios for decreasing the cost of complex
queries. User interface elements can be cached in memory on the client to reduce database
roundtrips. On-disk caching can be useful in situations in which elements are expensive to create
and do not need to be completely refreshed every time the application is started.

Demonstration Overview
In this demonstration, your instructor will show some of the benefits of the caching techniques
discussed in this section. The demonstration will illustrate denormalization by using an indexed view,
in-memory caching of user interface elements, and on-disk caching of user interface elements.


Task 1: Using an Indexed View to Denormalize Data

Task Overview
In this task, your instructor will show how to create an indexed view based on a query and that
querying the view instead of the source data will provide a much faster response.
To use an indexed view to denormalize data, perform the following steps:

1. Open SQL Server Management Studio.


2. Browse to and open the MOC2783M5L4.sql file.
3. Select the text from lines 1 through 24 of the script.
4. To execute the query, press F5.
5. Select the text from lines 28 through 56 of the script.
6. To execute the query, press F5.
7. Select the text from lines 1 through 24 of the script.
8. To execute the query, press F5.
9. Close SQL Server Management Studio.
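The demonstration script (MOC2783M5L4.sql) is not reproduced in the workbook. The following T-SQL is an illustrative sketch of the pattern it demonstrates, using hypothetical view and index names over the AdventureWorks sample tables rather than the actual script contents:

```sql
-- Indexed views must be schema-bound, use two-part table names, and
-- (when they aggregate) include COUNT_BIG(*).
CREATE VIEW Sales.vCustomerOrderTotals
WITH SCHEMABINDING
AS
SELECT soh.CustomerID,
       SUM(sod.OrderQty * sod.UnitPrice) AS TotalValue,
       COUNT_BIG(*)                      AS DetailRows
FROM Sales.SalesOrderHeader AS soh
JOIN Sales.SalesOrderDetail AS sod
    ON sod.SalesOrderID = soh.SalesOrderID
GROUP BY soh.CustomerID;
GO

-- Creating a unique clustered index materializes the view: the aggregated
-- rows are stored and maintained by SQL Server, so later queries avoid
-- repeating the join and aggregation.
CREATE UNIQUE CLUSTERED INDEX IX_vCustomerOrderTotals
ON Sales.vCustomerOrderTotals (CustomerID);
```

On editions other than Enterprise, query the view directly with the NOEXPAND hint (for example, FROM Sales.vCustomerOrderTotals WITH (NOEXPAND)) to ensure the materialized rows are used.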

Task 2: Caching User Interface Elements in Memory

Task Overview
In this task, your instructor will show that by caching user interface elements in memory instead of
retrieving them dynamically from the database, user interface response time can be improved.
To cache user interface elements in memory, perform the following steps:

1. Open Microsoft Visual Studio 2005.


2. Browse to and open the MOC2783M5L4.sln solution.
3. To launch the demonstration application, press F5.
4. Select the Get Options – No Caching option.
5. Click the Start Test button.
6. After the students have viewed the number of iterations per second, click the Cancel
button.
7. Select the Get Options – In-memory cache option.
8. Click the Start Test button.
9. After the students have viewed the number of iterations per second, click the Cancel
button.
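The demonstration application is not distributed with the workbook. The following Python sketch shows the idea behind the two test modes: every call either pays a simulated database roundtrip or reads a dictionary populated once. The latency figure and option values are invented for illustration.

```python
import time

def fetch_options_from_db():
    """Stand-in for a database roundtrip that loads UI option lists."""
    time.sleep(0.005)            # simulated network + query latency
    return ["Red", "Green", "Blue"]

_options_cache = None

def get_options_cached():
    """Return the option list, hitting the 'database' only once."""
    global _options_cache
    if _options_cache is None:
        _options_cache = fetch_options_from_db()
    return _options_cache

def iterations_per_second(getter, duration=0.2):
    count, deadline = 0, time.perf_counter() + duration
    while time.perf_counter() < deadline:
        getter()
        count += 1
    return count / duration

no_cache = iterations_per_second(fetch_options_from_db)
cached = iterations_per_second(get_options_cached)
print(f"no caching: {no_cache:.0f}/s, in-memory cache: {cached:.0f}/s")
```

As in the demonstration, the cached getter completes orders of magnitude more iterations per second because it avoids the per-call roundtrip.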


Task 3: Caching Serialized User Interface Elements to Disk

Task Overview
In this task, your instructor will show that by caching serialized user interface elements to disk, applications can start and respond to user requests more quickly.
To cache serialized user interface elements to disk, perform the following steps:

1. Select the Serialize Options to disk option.


2. Click the Start Test button.
3. When a Serialization Complete message is displayed, click OK.
4. Select the Deserialize Options from disk option.
5. Click the Start Test button.
6. When a Deserialization Complete message is displayed, click OK.
7. Close the application.
8. Close Microsoft Visual Studio 2005.
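The serialize/deserialize round trip in this task can be sketched as follows. The demonstration uses .NET serialization; this Python sketch uses `pickle` to show the same idea with an invented, deliberately expensive-to-build object.

```python
import os
import pickle
import tempfile
import time

class OptionSet:
    """Stand-in for a UI element that is expensive to build."""
    def __init__(self):
        time.sleep(0.05)                      # simulated expensive construction
        self.values = ["Red", "Green", "Blue"]

cache_path = os.path.join(tempfile.gettempdir(), "options.cache")

# First start: build the object and serialize it to disk.
options = OptionSet()
with open(cache_path, "wb") as f:
    pickle.dump(options, f)

# Later start: deserialize instead of rebuilding (no __init__ cost is paid).
t0 = time.perf_counter()
with open(cache_path, "rb") as f:
    restored = pickle.load(f)
load_time = time.perf_counter() - t0

print(restored.values, f"loaded in {load_time * 1000:.1f} ms")
os.remove(cache_path)
```

This is also where the versioning caveat from earlier in the session applies: if the class definition changes between application versions, loading old serialized state can fail and the cache must be rebuilt.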


Discussion: Session Summary


Introduction
This session focused on optimizing system resources by caching data and objects in the appropriate
layers. You learned that correctly optimizing applications by implementing caching results in reduced
resource utilization and consequently better system performance. You also learned that resources such as memory, physical I/O, and network bandwidth can be optimized by using caching methodologies.

Discussion Questions
1. What was most valuable to you in this session?

2. Based on this session, have you changed any of your previous ideas regarding data
caching?

3. Are you planning to do anything differently on the job based on what you learned in this
session? If so, what?

Session 6: Designing a Scalable Data Tier
for Database Applications

Contents
Session Overview 1
Section 1: Identifying the Need to Scale 2
Section 2: Scaling Database Applications to Avoid Concurrency Contention 13
Section 3: Scaling SQL Server Database Systems 24
Section 4: Scaling Database Applications by Using Service-Oriented Architecture 40
Section 5: Improving Availability and Scalability by Scaling Out Front-End Systems 53
Discussion: Session Summary 62
Clinic Evaluation 63

Information in this document, including URL and other Internet Web site references, is subject to
change without notice. Unless otherwise noted, the example companies, organizations, products,
domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious,
and no association with any real company, organization, product, domain name, e-mail address,
logo, person, place or event is intended or should be inferred. Complying with all applicable
copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part
of this document may be reproduced, stored in or introduced into a retrieval system, or transmitted
in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or
for any purpose, without the express written permission of Microsoft Corporation.

The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations and warranties, either expressed, implied, or statutory,
regarding these manufacturers or the use of the products with any Microsoft technologies. The
inclusion of a manufacturer or product does not imply endorsement of Microsoft of the
manufacturer or product. Links are provided to third party sites. Such sites are not under the
control of Microsoft and Microsoft is not responsible for the contents of any linked site or any link
contained in a linked site, or any changes or updates to such sites. Microsoft is not responsible for
webcasting or any other form of transmission received from any linked site. Microsoft is providing
these links to you only as a convenience, and the inclusion of any link does not imply endorsement
of Microsoft of the site or the products contained therein.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual
property rights covering subject matter in this document. Except as expressly provided in any
written license agreement from Microsoft, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual property.

© 2006 Microsoft Corporation. All rights reserved.

Microsoft, Windows, and Visual Studio are either registered trademarks or trademarks of
Microsoft Corporation in the United States and/or other countries.

All other trademarks are property of their respective owners.


Session Overview


Session Overview
The latest software development tools, together with readily available, proven patterns and best
practices, enable you to easily create and develop complex software. Because these development
tools deal with many of the low-level details, software developers can focus on writing the code
necessary to implement the business logic. Although these tools make software development easier,
software complexity continues to increase, primarily because of increasing user expectations.
All applications have functional requirements that define what the database application should do,
as well as nonfunctional requirements that define service-level expectations, such as application
response time, security needs, and communication protocol restrictions. To meet these
requirements, it is important to plan for scalability in your database applications. Scalability is the
measure of how the software reacts to the load placed on it as the amount of data and the number of
user connections and transactions increase.
This session focuses on how to assess scalability needs and design the best architecture to scale
your system to meet the needs and expectations of users.

Session Objectives
■ Identify when to scale database applications and what layer to scale.
■ Select the appropriate technology to avoid concurrency problems and improve application performance.
■ Evaluate whether scaling out or scaling up is appropriate for the scalability requirements of your database system.
■ Explain how to improve middle-tier processing by using multiple instances of Web services and object pooling.
■ Explain how to improve response time and availability by scaling out front-end systems.

Section 1: Identifying the Need to Scale


Section Overview
Scaling is the process of efficiently supporting system growth. Identifying the need to scale essentially means recognizing when the current configuration can no longer meet its performance objectives.
To identify when to scale database applications, the development team must regularly test and
monitor thresholds. Thresholds are constraints based on the limitations of the current hardware and
software configuration.
The process of capacity planning should be complemented with careful analysis of business
requirements, expected business growth, and solution usage patterns. These are some of the factors
that can aid you in predicting the future load on an application.
This section focuses on identifying when to scale database applications and how to design a
scalable data access system.

Section Objectives
■ Describe the process of capacity planning.
■ Explain the considerations for designing a scalable data access system.
■ Explain the methods for determining whether data access is a performance and scalability bottleneck, and explain the guidelines for choosing a particular method.


The Process of Capacity Planning


Introduction
Capacity planning is the process that is used to ensure that a solution will be able to support future
demand and workload. It is important to incorporate capacity planning into the planning phase to
ensure that the system will be able to bear future performance and scalability demands caused by
higher workloads.
Capacity planning is supported by many subordinate activities, such as testing, performance monitoring, and log analysis, as well as by other activities that provide information about how the system is being used and how it responds to the current workload.
The capacity planning process is a critical input when designing and architecting your solution.
Most application performance and scalability issues are not caused by scarce system resources but
by ineffective architecture and coding practices.

Scale Up vs. Scale Out in Capacity Planning


Scale up
• Quick definition: Upgrade or fine-tune current hardware.
• Decision parameters: costs; hardware upgrade capacity; configuration.
• Recommended: when the bottleneck is on system resources; as a first approach.

Scale out
• Quick definition: Distribute the load over a group of federated servers.
• Decision parameters: data partitioning; replication configuration; complex disaster recovery and failover; system availability requirements.
• Recommended: when a heavy load requires more power than can be handled by one server; as a second approach.

Deciding Which Layer to Scale


Physical distribution of an application among different layers might affect application performance
in the event of an increase in network traffic. However, when there is a heavy load, distributing the
workload across multiple physical servers might improve application scalability, availability, and
performance. Following are some of the reasons for distributing an application among different
layers:
■ To provide for specific components or processes that might use different types and amounts of physical resources. For example, the database tier might be more disk-intensive.
■ To provide for an independent execution environment for costly offline and asynchronous processing. For example, asynchronous execution is recommended for certain resource-intensive business processing scenarios.
■ Corporate, government, political, or legal considerations that might constrain a specific server configuration or distribution. For example, some governments require that interbank transactions must be executed only after working hours, thereby requiring a separate physical infrastructure to dispatch all transactions that have been processed during the day.
■ Licensing considerations, for example, when a licensing scheme requires a certain physical infrastructure.
■ Data partitioning to accommodate different access patterns, maintenance needs, and usage of the data, for example, distributing historical data in multiple servers partitioned by quarter or by month and accessed through a partition view.
■ Geographical considerations, for example, situations in which the system requires access to remote resources such as remote databases or Web services.
■ Infrastructure considerations, for example, in a Web farm, when a specific infrastructure configuration requires distributing the load through multiple servers.

If a database application is distributed among different infrastructure layers, you should conduct
extensive testing to monitor the performance objectives and thresholds of each layer.

The Process of Capacity Planning


There are several different methodologies for capacity planning, and many companies also have
their own customized methodologies. The methodology explained here is a derivation of the
Transaction Cost Analysis methodology.
The process of capacity planning involves the following steps:
1. Analyze current processing volumes and how they affect system resources.
• Identify usage patterns—for example, the average duration of opened connections to the
database, the number of requests executed each time a connection is opened, and the most
common requests.

2. Measure the thresholds on constrained resources. Constrained resources are system resources
that are reaching their threshold level with the current load on the system; for example, an
application that is consuming 90 percent of processor time.
• Identify the thresholds.
• Identify the average usage pattern of each constrained resource.


3. Measure operation costs.


• Focus on critical operations or processes that represent frequently used or resource-intensive
operations.
• Calculate the cost per request, the cost per operation, or both, in terms of the constrained
resources.
• Identify how much of each important resource—such as memory, network bandwidth, and
CPU time—is consumed per operation.

4. Calculate the future volume and the usage pattern costs of the future volume.
• Based on the previous measurements, calculate the approximate load level that the application
can handle under the current conditions.
• Modify the variables and calculate which conditions should vary to reach the expected load
level.

5. Verify the capacity.


• Continue load testing and monitoring to verify the accuracy of the calculations.

When calculating future capacity, you must consider the possibility of code changes and future
additions that might require more system resources.
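Steps 3 and 4 above can be sketched numerically. The figures in this sketch are invented for illustration; the structure follows Transaction Cost Analysis: derive a cost per operation in terms of the constrained resource, then derive a load ceiling from the threshold.

```python
# Transaction Cost Analysis sketch: all figures are invented for illustration.

measured_ops_per_sec = 120   # throughput of a critical operation during testing
cpu_busy_fraction = 0.45     # CPU utilization observed at that throughput
cpu_threshold = 0.75         # capacity-planning threshold for CPU

# Step 3: cost per operation, expressed in the constrained resource (CPU).
cpu_cost_per_op = cpu_busy_fraction / measured_ops_per_sec

# Step 4: maximum sustainable load before the threshold is reached.
max_ops_per_sec = cpu_threshold / cpu_cost_per_op

expected_future_load = 250   # ops/sec forecast from expected business growth
print(f"cost/op: {cpu_cost_per_op:.4f} CPU-sec, ceiling: {max_ops_per_sec:.0f} ops/sec")
print("scale needed" if expected_future_load > max_ops_per_sec else "capacity OK")
```

With these invented numbers the ceiling is 200 operations per second, so a forecast of 250 operations per second signals a need to scale (or to reduce the cost per operation) before the load arrives.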

Discussion Question
■ What do you think about first when you think you need to scale a database application? Consider the following:
• Scale up vs. scale out
• Which layer to scale

Additional Information
For an introduction to performing capacity planning, see “How to: Perform Capacity Planning for .NET Applications” in the Improving .NET Application Performance and Scalability Guide on the Microsoft Patterns & Practices Web site at http://msdn.microsoft.com/practices/guidetype/Guides/default.aspx?pull=/library/en-us/dnpag/html/scalenethowto06.asp.
For more information about how scaling up or scaling out affects database applications, see “Improving SQL Server Performance” in the Improving .NET Application Performance and Scalability Guide on the Microsoft Patterns & Practices Web site at http://msdn.microsoft.com/practices/guidetype/Guides/default.aspx?pull=/library/en-us/dnpag/html/scalenetchapt14.asp.
For more information about the layering pattern and how it might affect application performance, see “Deployment Patterns” on the Microsoft Patterns & Practices Web site at http://msdn.microsoft.com/practices/Topics/arch/default.aspx?pull=/library/en-us/dnpatterns/html/EspDeploymentPatterns.asp.
Many other third-party methodologies for measuring and benchmarking system performance are available, for example, the methodologies defined by the Transaction Processing Performance Council (http://www.tpc.org).


Considerations for Designing a Scalable Data Access System


Introduction
The database server is expected to be the most optimized and efficient component in the entire
application architecture. For example, the response time of the database server is measured in
milliseconds. The database server manages large and complex sets of data and is heavily accessed
for read and write operations. The database server must manage concurrent access to the data, and it
must maintain data integrity and security. There are various considerations for designing a scalable
database access system. This topic focuses on the following considerations:
■ Appropriate situations for scaling the system hardware
■ Considerations for designing a database system to support an increased number of connections
■ Considerations for designing a database system to support an increased number of transactions

Appropriate Situations for Scaling the System Hardware


Following are some of the considerations for identifying when scaling a system’s hardware is
appropriate:
■ Processor and memory-related bottlenecks
■ Disk I/O–related bottlenecks, especially in online transaction processing (OLTP) applications
■ Network bandwidth bottlenecks, caused by high volumes of network traffic


Considerations for Designing a System to Support an Increased Number of Connections
The following considerations apply when designing a database system to support an increase in the number of connections to the system.

Consideration: Reutilization of already opened connections
Reason: Connection pooling allows applications to reutilize already opened connections, thus reducing the overhead of establishing new connections. This means that a small number of opened connections can service a large number of requests.

Consideration: Locking and blocking between connections
Reason: Long-running transactions increase locking and blocking between connections. You should isolate locking-related issues and adopt an alternative mechanism to locking, for example, disconnected edits and conflict resolution.

Consideration: Network usage and network latency
Reason: Transmitting large volumes of data between the database server and calling applications increases network usage and network latency. Review the data being transferred and ensure that you transfer only what is necessary and only as often as necessary. Unnecessary data transfers can severely impact the scalability and performance of the application across all layers. In some cases, you might also consider data compression to resolve network-related bottlenecks.

Consideration: Roundtrips to the database server
Reason: Server-side cursors increase the number of roundtrips to the database server. Because each request is associated with an overhead, you should architect to minimize the number of unnecessary requests.


Considerations for Designing a System to Support an Increased Number of Transactions
The following considerations apply when designing a system to support an increased number of transactions.

Consideration: Locking and blocking
Reason: Long-running transactions can lock resources for more time than necessary, thereby preventing other transactions from accessing the data when required. You should design the application to minimize locking data for extended periods of time and to reduce the possibility of deadlocks:
• The sequence of request execution affects acquiring and releasing locks and can be a cause of deadlocks. It is a best practice to design data access components that access data in the same sequence to reduce the possibility of deadlock.
• The length of time each transaction holds a lock on data affects the scalability of the application. Design your data access methodology to reduce the time that locks are being held.
• Ensure that queries that require exclusive locks execute at the end of the process so that locks are held for the shortest possible period.
• Ensure that no transaction outcome depends on user interaction, for example, on prompting the user to commit the running transaction.

Consideration: Concurrency
Reason: The selected transaction isolation level directly affects the concurrency of the database system. The default transaction isolation level (READ COMMITTED) balances concurrency against data consistency. Before changing the transaction isolation level, you should consider the potential impact of this change on scalability and the ability of the application to support a large number of concurrent transactions.

Consideration: Isolation level
Reason: Queries can provide isolation-level hints so that Microsoft® SQL Server™ can dynamically adjust the locking mechanism. By using this feature to hint for a less restrictive isolation level, you can increase the number of transactions that the system can process. However, you must carefully weigh the impact of this change on data accuracy and integrity against the ability to support more concurrent transactions.
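As a sketch of the hinting technique, the following illustrative T-SQL uses a table from the AdventureWorks sample database (not code from the course demonstrations):

```sql
-- READPAST skips rows that are locked by other transactions instead of
-- blocking behind them; NOLOCK (equivalent to READ UNCOMMITTED) reads
-- uncommitted data. Both trade accuracy for concurrency, so use them only
-- where the application can tolerate the anomaly.
SELECT ProductID, Name
FROM Production.Product WITH (READPAST);

-- The same trade-off can be made for every statement in the session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```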


Demonstration: Determining If Data Access Is the Performance and Scalability Bottleneck

Introduction
The primary input for any capacity planning is testing and monitoring. Application developers
should continuously monitor the application’s health to identify performance and scalability
bottlenecks and verify that resource usage remains lower than the thresholds set by the performance
objectives.
Data access could become a bottleneck in situations in which the application suffers scalability
problems due to inappropriate connection patterns and excessive data consumption.

Demonstration Overview
The best performance and scalability gains result from fine-tuning the application code rather than
spending excessive time tuning the operating system or SQL Server configuration. Applying some
database development best practices can help database administrators achieve better levels of
database execution performance.
However, it is important to monitor a database application to be able to detect when data access is
the performance and scalability bottleneck in the application.
In this demonstration, your instructor will show how to use common monitoring tools to detect
potential performance issues, identify code that is causing performance problems, and determine
the best way to solve the problem.


Task 1: Setting Up the Execution Environment


Task Overview
This task configures the execution and monitoring environments.
To set up the execution environment, your instructor will perform the following steps:
1. Start Microsoft Visual Studio® 2005.
2. Browse to D:\Democode\Section01\, and then open the Demonstration1.sln solution.
3. In the ResetDatabase project, in the Create Scripts folder, right-click the CreateTable.sql file,
and then click Run.
4. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference and connect to the AdventureWorks database.
5. Open Microsoft Windows® Performance Monitor.
6. Press the DEL key or click the X on the toolbar as many times as necessary to delete all of the
default running counters.
7. Click the plus sign (+) on the toolbar to add new counters.
8. Add the counters in the following table by selecting the appropriate values from the lists and then
clicking the Add button.
Performance object Counter Instance
MSSQL$SQLINST1: General Statistics Processes blocked —
MSSQL$SQLINST1: General Statistics Transactions —
MSSQL$SQLINST1: General Statistics User Connections —
MSSQL$SQLINST1: Locks Average Wait Time (ms) _Total
MSSQL$SQLINST1: Locks Lock Requests/sec _Total

9. When you have finished adding counters, click Close.

Task 2: Reviewing the Testing Environment


Task Overview
The testing environment is made up of two components: a test application and a load test. In this
task, your instructor will review these two components and their configurations.
To review the test environment, your instructor will perform the following steps:
1. In Visual Studio 2005, open Solution Explorer.
2. In the WebApp project, open the Default.aspx file in the Code Editor.
3. Notice the Page_Load method.
4. In the WebApp project, in the App_Code folder, open the Datalayer.cs file.
5. Review the ExecuteProcess method.
6. Notice the call to the BeginTransaction method in line 27.
7. Notice the call to the ExecuteNonQuery method in line 39.
8. Notice the call to the Thread.Sleep method in line 45.
9. Notice the call to the ExecuteScalar method in line 48.

10. Notice the call to the Tx.Commit method in line 51.

11. Right-click Default.aspx, and then on the shortcut menu, click View in Browser.
12. The application opens in a Microsoft Internet Explorer window. When the page finishes loading, it shows a phrase with the text The average value is in red.
13. To copy the URL, in Internet Explorer, on the Address bar, right-click the URL, and then click
Copy.
14. To close the Default.aspx file, on the title bar, click Close.
15. Return to Visual Studio 2005.
16. In the LoadTest project, double-click WebTest1.webtest.
17. To open the Properties pane, in the WebTest1.webtest window, click the URL under the
WebTest1 node, and then press F4.
18. In the Properties pane, select the URL item, paste the URL copied from Internet Explorer, and
then press ENTER.
19. In Solution Explorer, in the LoadTest project, double-click 15UserLoad.loadtest.
20. To open the Properties pane, in the 15UserLoad.loadtest window, in the Run Settings folder,
click the Local node, and then press F4.
21. Notice the properties for the load test.

Task 3: Running the Load Test


Task Overview
In this task, your instructor will execute the load test and show how to monitor the performance
counters set on the Performance Monitor application.
To run the load test, your instructor will perform the following steps:
1. To refresh the window, return to the Performance Monitor window, and then press CTRL+D.
2. Return to Visual Studio 2005.
3. In the 15UserLoad.loadtest window, on the toolbar, click Run. The load test starts executing.
4. When the test is complete, to freeze the display, return to the Performance Monitor window, and
then press CTRL+F.

Task 4: Identifying the Code Causing Performance Problems


Task Overview
In this task, your instructor will show how to analyze the monitored results by reviewing each of
the performance counters on Windows Performance Monitor, reviewing the source code, and then
analyzing the cause of the problems measured.
To identify the code causing performance problems, your instructor will perform the following
steps:
1. To view the data as a graph, in the Performance Monitor window, press CTRL+G.
2. To enable the highlighter, press CTRL+H.
3. Click each of the counters and review the graph highlighted in the display.
4. For each counter, review the minimum, maximum, and average values.
5. Return to Visual Studio 2005.

6. In the WebApp project, in the App_Code folder, open the Datalayer.cs file.

Task 5: Modifying the Source Code


Task Overview
This is an optional task and will be performed only if time permits.
In this task, your instructor will modify the source code to improve the response time, minimize
locking, and lower the resource contention on the database server.
To modify the source code, your instructor will perform the following steps:
1. In the WebApp project, in the App_Code folder, open the Datalayer.cs file.
2. In line 27, specify different isolation levels. For example, instead of Serializable, specify
ReadCommitted or ReadUncommitted.
3. In line 45, reduce the thread sleep time. For example, instead of 1500 (1.5 seconds), select 500 or
even 0.
4. Run the test again by following the steps specified in Task 3.
5. Close the application.
6. Close Performance Monitor.
7. Close Visual Studio 2005.

Guidelines for Determining If Data Access Is the Performance and Scalability Bottleneck
■ Implement effective data-caching strategies.
■ Use appropriate connection strings for using connection pooling.
■ Use appropriate filtering when reading data. By filtering the data, you can avoid transmitting more data than necessary.
■ Keep transactions short.

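For the connection-string guideline, ADO.NET pools connections per distinct connection string and tunes the pool through the Pooling, Min Pool Size, and Max Pool Size keywords. The following is a sketch; the server and database names are the ones used in this session's demonstrations:

```
Server=MIA-SQL\SQLINST1;Database=AdventureWorks;Integrated Security=SSPI;
Pooling=true;Min Pool Size=5;Max Pool Size=100
```

Because a pool is keyed by the exact connection string, components that should share a pool must use an identical string.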

Section 2: Scaling Database Applications to Avoid Concurrency Contention

Section Overview
Locking is a technique used by SQL Server 2005 to synchronize the actions executed by multiple
concurrent transactions. Sometimes transactions and requests begin to queue when they are waiting
for a lock to be released. In such cases, concurrency contention occurs. This type of contention
severely affects the scalability of database applications by limiting the number of concurrent
operations that these systems can execute.
A database system is scalable if it is possible to increase the number of transactions, the number of
simultaneous connections, and the volume of data to be processed. Contention is a major barrier to
scalability. This section explains the different techniques for minimizing contention and thereby
maximizing the scalability of a database system.
There are various techniques that avoid holding locks—for example, row-level versioning, which is
the basis of the new Snapshot isolation level in SQL Server 2005.
This section focuses on selecting the appropriate methodology to avoid concurrency problems and
thereby improve application performance.

Section Objectives
■ Explain how concurrency contention occurs.
■ Explain the guidelines for using database snapshots for improving concurrency.
■ Explain the advantages and disadvantages of using the Snapshot isolation level.
■ Explain the process of denormalizing a database to improve concurrency.


How Concurrency Contention Occurs


Introduction
Application users often complain about poor system performance. However, in most cases, what
the users actually experience is poor response time. Concurrency and locking are among the causes
of poor response time, as database processes wait for the release of locks held by other transactions.
When two requests try to modify the same row concurrently, SQL Server raises a concurrency
error for the later request. The error indicates that the row is being modified by another
uncommitted transaction.
There are two types of concurrency control: pessimistic and optimistic.
Pessimistic concurrency control assumes that reading operations will be affected by data-
modification operations from other processes. Therefore, this type of concurrency enforces a
locking mechanism to coordinate access to resources such as tables, rows, extents, and pages.
Optimistic concurrency control assumes that reading operations will not conflict with data-
modification operations from other processes. Therefore, instead of locks, optimistic concurrency
control is implemented in SQL Server 2005 as a row-versioning technique for read data. This
allows read and write operations to execute concurrently. Readers and writers do not block one
another when optimistic concurrency is used.
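Row versioning is enabled at the database level. The following is a minimal sketch; the database and table names are illustrative, not taken from the course demonstrations:

```sql
-- Allow transactions to request the Snapshot isolation level, and make the
-- READ COMMITTED level use row versioning as well.
-- SalesDB and dbo.Orders are placeholder names.
ALTER DATABASE SalesDB SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE SalesDB SET READ_COMMITTED_SNAPSHOT ON;
GO

-- A session can then read a transactionally consistent version of the data
-- without blocking concurrent writers (or being blocked by them).
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;
COMMIT TRANSACTION;
```

Note that the row versions are kept in tempdb, which is one reason this section later calls out tempdb contention.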


When Does Concurrency Contention Occur?


Concurrency contention can occur in the following scenarios when using pessimistic concurrency:
• A request is waiting for another request to release a lock on a particular resource.
• A transaction set to a highly restrictive transaction isolation level (for example, serializable) could
hold locks on several tables that other transactions are waiting to read.
• A long-running transaction started from the application layer is holding locks on resources that
other transactions are waiting to access.

Effects of Concurrency Contention


Any of the following situations could result from concurrency contention:
• Overall system performance is degraded.
• Application scalability is negatively affected, because contention grows with the number of requests.
• Errors caused by transaction time-outs decrease the capacity of the system in terms of the number
of transactions that can be executed in a specific period.
• The database server might run out of connections or available worker threads, thereby severely
degrading the scalability of the database system.
• Available memory on the database server is reduced by the overhead of supporting the locking
infrastructure.
• Deadlocks might occur between processes.
• Server overload causes database errors.

Techniques for Avoiding Concurrency Contention


Technique: Use database snapshots.
Description: Provides a read-only static view of a database.
Effect: Allows separation of read and write operations.

Technique: Use optimistic concurrency with the Snapshot isolation level.
Description: Does not keep active locks for a long period. Keeps multiple versions of updated data.
Effect: Allows separation of read and write operations.

Technique: Denormalize the database.
Description: Introduces preaggregated or summarized data to satisfy commonly executed queries.
Effect: Improves query response time and minimizes locking, thereby increasing system scalability.

Technique: Avoid user input in transactions.
Description: Avoids running a transaction that requires end-user input to commit.
Effect: Avoids holding locks while waiting for the input.

Technique: Keep transactions as short as possible.
Description: Executes validation or any other logic that does not depend on the transaction outside
the transaction scope. Open late; commit early.
Effect: Locks are held only as long as they are required.


Technique: Keep transactions in one batch.
Description: If a transaction spans more than one batch, network latency affects the sending and
retrieval of network packets, thereby increasing the transaction execution time.
Effect: Locks are held longer than needed while waiting for network packets.

Technique: Use transaction isolation hints where appropriate.
Description: Lowers the isolation level to a less restrictive level where appropriate.
Effect: Some scenarios can avoid locking or at least use less restrictive locking schemes.

Technique: Access resources in the same order.
Description: When multiple processes execute the same steps, access shared resources in the same order.
Effect: Avoids deadlocking.

tempdb Contention
The tempdb database can also suffer from contention due to concurrency issues.
The following types of operations use the tempdb database intensively; use them carefully and
monitor them frequently as the load on the application grows:
• Multiple Active Result Sets (MARS)
• The Snapshot transaction isolation level
• Repeated creation and dropping of temporary tables (local or global)
• Table variables that use tempdb for storage purposes
• Work tables associated with cursors
• Work tables associated with an ORDER BY clause
• Work tables associated with a GROUP BY clause
• Work files associated with hash plans

Additional Information
Read Microsoft Knowledge Base article 75722, “INF: Reducing Lock Contention in SQL
Server” at http://support.microsoft.com/default.aspx?scid=kb;en-us;75722&sd=tech.
Read the “Transactions” topic in Chapter 14, “Improving SQL Server Performance,” in the
Improving .NET Application Performance and Scalability Guide at
http://msdn.microsoft.com/practices/guidetype/Guides/default.aspx?pull=/library/en-
us/dnpag/html/scalenetchapt14.asp#scalenetchapt14%20_topic9.
For more information about tempdb contention and some strategies for enhancing concurrency
on the tempdb database, read “KB328551 – FIX: Concurrency enhancements for the tempdb
database” at http://support.microsoft.com/default.aspx?scid=kb;en-us;328551.


Demonstration: Improving Concurrency by Using Database Snapshots


Introduction
Database snapshots are a new feature in SQL Server 2005. A database snapshot is a static, read-
only copy of the current committed state of the source database. You can use database snapshots to
improve concurrency in a database application by taking a snapshot of the current state of the data
for reporting purposes, separating read and write operations.
The internal structures of SQL Server that support database snapshots are optimized so that space
consumption is minimized: only pages that change in the source database after the snapshot is
created are copied.
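For reference, a database snapshot is created with CREATE DATABASE ... AS SNAPSHOT. The sketch below uses assumed names and a file path, not the ones from the demonstration scripts:

```sql
-- Create a snapshot of SalesDB (names and the .ss file path are illustrative).
CREATE DATABASE SalesDB_Snapshot
ON ( NAME = SalesDB_Data,                        -- logical name of the source data file
     FILENAME = 'D:\Snapshots\SalesDB_Data.ss' )
AS SNAPSHOT OF SalesDB;
GO
-- Readers query the snapshot instead of the source database:
SELECT COUNT(*) FROM SalesDB_Snapshot.dbo.Orders;  -- hypothetical table
GO
-- Drop the snapshot when it is no longer needed:
DROP DATABASE SalesDB_Snapshot;
```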

Demonstration Overview
In this demonstration, your instructor will explain the guidelines for using database snapshots to
improve concurrency. Two applications will run in parallel. Each application will show the number
of transactions being executed per second. One application will write data, and the other application
will read data.
Both applications will connect to the same database. A database snapshot will then be created, and
the reader application will use the snapshot database. The number of transactions per second will be
higher for both applications after applying this change.


Task 1: Setting Up the Execution Environment


Task Overview
This task configures the execution and monitoring environments.
To configure the execution and monitoring environments, your instructor will perform the
following steps:
1. Start Visual Studio 2005.
2. Browse to D:\Democode\Section02\, and then open the SnapshotDemo.sln solution.
3. In the ResetDatabase project, in the Create Scripts folder, right-click the CreateDatabase.sql
file, and then on the shortcut menu, click Run On.
4. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference and connect to the Master database.

Task 2: Running Read and Write Operations Concurrently


Task Overview
In this task, each application will measure the number of requests being executed per second.
Because both applications execute in parallel, the writing application blocks the reading
application.
To run read and write operations concurrently, your instructor will perform the following steps:
1. Right-click the CashRegister project, and then on the shortcut menu, click Debug and then click
Start new instance. The Automatic Cash Register window opens.
2. Right-click the SalesReport project, and then on the shortcut menu, click Debug and then click
Start new instance. Notice that the connection strings used in this project might need to be
modified if this demonstration runs outside the provided Virtual PC environment. The Sales
Report Reader window opens.
3. Arrange the application windows so that both windows are visible.
4. In the Automatic Cash Register window, click Start, let the application run until the Elapsed
Time counter shows 30 seconds, and then click Stop.
5. Notice the number of total requests executed.
6. In the Sales Report Reader window, click Start, let the application run until the Elapsed Time
counter shows 30 seconds, and then click Stop.
7. Notice the number of total requests executed.
8. In the Sales Report Reader window, click Start.
9. When the Elapsed Time counter in the Sales Report Reader window reaches 10 seconds, click
Start in the Automatic Cash Register window and let both applications execute at the same time.
10. Stop one application and start it again, and then stop the other application and start it again.
11. Click the Stop button in both application windows to stop execution.


Task 3: Creating a Database Snapshot


Task Overview
In this task, your instructor will create a database snapshot. The reader application will start reading
the data from the snapshot copy. As a result, blocking and contention will be reduced, producing
faster results.
To create a database snapshot, your instructor will perform the following steps:
1. In the Sales Report Reader window, click Create DB Snapshot.
2. In the Automatic Cash Register window, click Start.
3. When the Elapsed Time counter in the Automatic Cash Register window reaches 10 seconds,
click Start in the Sales Report Reader window.
4. Click the Stop button in both application windows to stop execution.
5. In the Sales Report Reader window, click Drop Snapshot.
6. Close both applications.
7. Close Visual Studio 2005.

Guidelines for Improving Concurrency by Using Database Snapshots


• Create database snapshots to separate readers and writers and minimize concurrency contention.
• Create purpose-built database snapshots for specific time-related reports.
• Minimize the number of database snapshots on the same database to minimize system overhead
when updating data.
• Write applications that automatically detect the existence of snapshots.
• Write stored procedures that transparently use newly created database snapshots.


Discussion: Advantages and Disadvantages of Using the Snapshot Isolation Level


Introduction
The Snapshot isolation level is a new transaction isolation level based on row-versioning
techniques. It is considered an optimistic concurrency control technique because it minimizes
locking on the original data and permits concurrent execution of read and write operations without
blocking each other.
As a recommended technique to avoid concurrency contention, the Snapshot isolation level has
advantages as well as disadvantages. For example, it separates read and write operations, thereby
maximizing concurrency. However, this advantage comes at a cost, because the system needs to
automatically maintain multiple copies of data. In some cases, maintaining multiple copies of data
could produce update conflicts.
Developers should consider these advantages and disadvantages when designing data access
methodologies, to ensure that the Snapshot isolation level is used appropriately.

Discussion Questions
1. Do you need to migrate Oracle database applications to SQL Server?

2. Do you use the NOLOCK table hint? Why?


3. Have you experienced excessive lock wait time in your applications? How did you diagnose that
lock wait time was the problem?

4. Have you experienced any problems in scaling tempdb to support the Snapshot isolation level?

5. How do you plan to deal with conflicts when different data versions are updated?

6. Would you select the Snapshot isolation level per transaction or per database? Why?

7. What are some of the disadvantages of using the Snapshot isolation level?


Multimedia: The Process of Denormalizing Databases to Improve Concurrency


Introduction
The normalization rules generate data models that are optimized for OLTP applications with
combined read and write operations. In some cases, an application can benefit from not fully
applying the normalization rules. This design pattern is known as denormalization.
Denormalization can be applied to improve the database system performance. There are various
strategies available to denormalize data. You should choose the appropriate strategy based on the
data access patterns and the data structure specific to your application.
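Indexed views, raised in the discussion questions that follow, are one common denormalization strategy: they materialize preaggregated data that SQL Server maintains automatically. A sketch, assuming a hypothetical dbo.SalesOrderDetail table:

```sql
-- An indexed view that preaggregates sales per product.
-- dbo.SalesOrderDetail and its columns are assumed for illustration.
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       COUNT_BIG(*)   AS OrderCount,   -- COUNT_BIG is required in an indexed view with GROUP BY
       SUM(LineTotal) AS TotalSales
FROM dbo.SalesOrderDetail
GROUP BY ProductID;
GO
-- The unique clustered index materializes the aggregation; SQL Server
-- keeps it up to date as the base table changes.
CREATE UNIQUE CLUSTERED INDEX IX_vSalesByProduct
    ON dbo.vSalesByProduct (ProductID);
```

Queries that aggregate per product can then be satisfied from the small materialized result instead of scanning and locking the detail table.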

Discussion Questions
1. Have you ever considered denormalizing data to improve concurrency?

2. Can denormalizing data improve concurrency?

3. To what extent should you denormalize data?


4. Would you implement business entities as a set of related independent tables (using vertical
partitioning)?

5. When would you choose indexed views, and when would you choose redundant summary tables?

6. How would you maintain redundant tables?

7. Would indexes with included columns reduce contention?

8. Would using computed columns support better concurrency? Why would you use computed
columns?


Section 3: Scaling SQL Server Database Systems


Section Overview
The two strategies for scaling a database server are scaling up and scaling out. Scaling up is the
process of increasing hardware resources—for example, by adding more memory or processing
power. Scaling out refers to distributing the current installation to a multiserver physical
architecture.
Various technologies and strategies are available for distributing data among a group of federated
servers. Database administrators should be careful when choosing the appropriate technology.
Solution designers must choose to scale up or scale out, depending on the scalability requirements
of a given database system. This section provides considerations and guidelines for choosing the
appropriate option.

Section Objectives
• Explain the guidelines for distributing data and requests across multiple tables, databases,
instances, and servers.
• Explain how data partitioning works.
• Explain the benefits of implementing well-defined interfaces to encapsulate the physical
implementation of distributed database systems.
• Explain the guidelines for scaling applications by using Service Broker.
• Compare the performance and flexibility of relational systems with that of SQL Server Analysis
Services.


Considerations for Distributing Data and Requests in SQL Server 2005


Introduction
You can scale out a database system by separating or dividing the database tables based on certain
criteria. The pattern of data reads and writes should determine how you divide the tables themselves
and how you provide access to the data.
Separating the data at the logical level allows database administrators to distribute the data
physically. As a result, the system might benefit from using the appropriate hardware and specific
system settings to optimize data access. For example, you can distribute data across multiple disks
based on a specific column value, such as the fiscal year for sales information.
SQL Server 2005 provides various strategies for distributing data and requests.

Distributed Data Strategies


In SQL Server 2005, data can be distributed across multiple tables, databases, instances, and
servers. Data is said to be partitioned vertically when it is distributed based on the table’s schema or
columns, and data is said to be partitioned horizontally when it is distributed based on the data
itself. The following table describes the distributed data strategies.
Location: Tables
Horizontal: Use table partitioning. Data can be partitioned according to a partition key and
distributed automatically through a set of filegroups.
Vertical: Denormalize the data schema by storing a single entity across multiple tables linked by a
one-to-one relationship.


Location: Databases
Horizontal and vertical: Allow different database configuration and security settings. Use
distributed partitioned views to expose data as a single unit that is independent of storage.

Location: Instances
Horizontal and vertical: Allow different installation options and server resource administration.
Use distributed partitioned views to expose data as a single unit that is independent of storage.

Location: Servers
Horizontal and vertical: Use federated servers. This allows a hardware and software configuration
that provides independent, parallel access to data that is stored on different servers. Use
distributed partitioned views to expose data as a single unit that is independent of storage.
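A distributed partitioned view can be sketched as follows. Server2 is a hypothetical linked server, and each member table constrains its key range with a CHECK constraint so that the optimizer can route queries to the correct member:

```sql
-- Local member table holds CustomerID 1-99999; a matching table on the
-- linked server Server2 holds 100000 and above. All names are illustrative.
CREATE TABLE dbo.Customers_1 (
    CustomerID int PRIMARY KEY
        CHECK (CustomerID BETWEEN 1 AND 99999),
    CustomerName nvarchar(100) NOT NULL
);
GO
-- The view exposes the members as a single unit, independent of storage.
CREATE VIEW dbo.AllCustomers
AS
SELECT CustomerID, CustomerName FROM dbo.Customers_1
UNION ALL
SELECT CustomerID, CustomerName FROM Server2.SalesDB.dbo.Customers_2;
```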

Partitioning Data by Using Table Partitioning and Federated Servers


Table partitioning uses a partition scheme to declare how to distribute the data according to the
partitioning boundaries on the physical storage. The partitioning boundaries are calculated based on
a partitioning function.
By combining table partitioning and federated servers, database administrators can horizontally
partition a large table and assign a different server to handle each partition. Each server can manage
a different set of filegroups to favor parallelism while querying the data.
Note that table partitioning can also assist in managing backup and restore requirements, because
partitions can be stored across filegroups, which are independent units of backup and restore that
can have different backup schedules.
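The partition function, scheme, and table described above can be sketched as follows; the filegroup, table, and column names are assumptions:

```sql
-- Boundary values split the data by order year.
CREATE PARTITION FUNCTION pfOrderYear (int)
AS RANGE RIGHT FOR VALUES (2003, 2004, 2005);
GO
-- The scheme maps each partition to a filegroup (fg2002..fg2005 must exist).
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear TO (fg2002, fg2003, fg2004, fg2005);
GO
-- The table is created on the scheme; rows are placed by OrderYear.
CREATE TABLE dbo.Orders (
    OrderID   int   NOT NULL,
    OrderYear int   NOT NULL,
    Amount    money NOT NULL
) ON psOrderYear (OrderYear);
```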

Guidelines for Scaling Out by Using Log Shipping to Improve Read Access to Databases
Log shipping is a technique for keeping two servers synchronized by backing up the database
transaction log from the primary server and restoring it on a secondary server. Read-only activities
can be forwarded to the secondary server, thereby reducing the load on the primary server.
The secondary server must be used only for read operations, because it maintains only a copy of the
data on the primary server. During restoration on the secondary server, user connections must be
terminated.
The main advantages of using log shipping are that it is relatively simple to configure and maintain,
and that there are no constraints on the hardware configuration or location.


Guidelines for Using Replication to Scale Out Read Access to Databases
By using replication, you can distribute data among different servers. Replication provides a
finer-grained administration model to control data distribution to subscribed servers.
To scale out read access, you can use replication techniques to distribute data to a subscribed
database that is used for read-only purposes. This solution results in less downtime than log
shipping, without affecting the current operations on the subscribed database.

Additional Information
For more information about configuring log shipping, see “Configuring Log Shipping” in SQL
Server 2005 Books Online at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx.


Multimedia: How Data Partitioning Works


Introduction
Data partitioning is not a new SQL Server feature. In SQL Server 6.5, database developers and
database administrators were able to partition data horizontally based on a specific value.
SQL Server 7.0 and SQL Server 2000 introduced the concept of partitioned views.
SQL Server 2005 offers a new approach to data partitioning that is loosely coupled, highly scalable,
and reusable and that simplifies database administration and maintenance.

Discussion Questions
1. How is data partitioning different from the partitioned view feature that is available in SQL
Server 2000?

2. What types of problems can data partitioning solve?

3. How can data partitioning improve the scalability of your application?


Additional Information
For more information about data partitioning, see the article “Partitioned Tables and Indexes in
SQL Server 2005” at http://msdn.microsoft.com/SQL/learn/arch/default.aspx?pull=/library/en-
us/dnsql90/html/sql2k5partition.asp.


Considerations for Implementing Well-Defined Interfaces for Distributed Database Systems


Introduction
Changes in the physical implementation of database objects affect the maintainability of the
database applications that use these objects. The maintainability of a database system depends on its
ability to adapt to these changes without affecting existing applications.
The public interfaces of a database system are stored procedures, user-defined functions, and
views. Abstraction presents only the relevant elements of a system and hides the implementation
details that are not relevant. It is a powerful mechanism for creating maintainable systems.
Database applications can benefit from abstraction by implementing well-defined interfaces to
encapsulate the physical implementation of distributed database systems.

Separating Public Interfaces from the Physical Implementation


In a database system, it is important to scale a physical implementation without affecting the
current applications and clients that already use the database server. This can be achieved only by
separating the physical implementation from the logical implementation. By separating public
interfaces from the physical implementation, you can:
• Hide the underlying complexity of the physical distribution of the data.
• Define dynamic data schemas that are different from the constraints of the physical data
schema.


• Provide dynamically calculated and aggregated data that is not available in the physical model.
• Maintain and change the underlying physical model without affecting the current applications.
• Allow different security permissions at the logical and physical levels, granting users access
to the public interface but not to the physical implementation.
• Provide another layer of security to protect the physical implementation (for example, from SQL
injection attacks).
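A stored procedure used as a public interface might look like the following sketch. Callers depend only on the procedure's signature, so the underlying table can later be replaced by a partitioned table or a distributed partitioned view without changing client code; all names here are illustrative:

```sql
CREATE PROCEDURE dbo.GetCustomerOrders
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    -- dbo.AllOrders could be a local table today and a distributed
    -- partitioned view tomorrow; callers are unaffected either way.
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.AllOrders
    WHERE CustomerID = @CustomerID;
END;
```

Granting EXECUTE on the procedure, while denying direct access to the underlying tables, also provides the extra security layer described above.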


Demonstration: Scaling Applications by Using SQL Server Service Broker


Introduction
SQL Server Service Broker is a platform for developing loosely coupled services for database
applications. Service Broker is fully integrated with SQL Server.
When databases are heavily accessed, they can offload some of the data processing to another
server or queue the requests until demand on the database decreases. This allows the application
to scale, and it is possible because of the transactional queuing capability provided by Service
Broker.
In this demonstration, your instructor will show how to scale applications by using Service Broker.
Your instructor will also explain the guidelines that you must follow for scaling applications by
using Service Broker.

Demonstration Overview
Service Broker includes the ability to execute stored procedures asynchronously. In this
demonstration, your instructor will first synchronously execute an expensive (resource-intensive)
procedure. To enhance application scalability, Service Broker will be enabled to execute the
expensive procedure asynchronously, thereby improving the response time.


Task 1: Setting Up the Execution Environment


Task Overview
This task configures the execution environment.
To configure the execution environment, your instructor will perform the following steps:
1. Start Visual Studio 2005.
2. Browse to D:\Democode\Section03\, and then open the ScalingServiceBroker.sln solution.
3. In the ResetDatabase project, in the Create Scripts folder, double-click CreateDatabase.sql.
4. Right-click the CreateDatabase.sql file, and then on the shortcut menu, click Run On.
5. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference and connect to the Master database.
6. Right-click the RandomDataGenerator project, and then click Set as StartUp Project.
7. To run the project, press F5. The Random Data Generator window opens.
8. Click Start.
9. When the Total Requests counter reaches 100, click Stop.
10. Close the Random Data Generator application.

Task 2: Running Expensive Procedures Without Service Broker Support
Task Overview
In this task, an expensive operation needs to be executed on the data. This operation takes a lot of
time to execute: it fetches data by using a cursor, and for each row, a transaction updates data in
two different tables.
A sample application will execute this expensive operation synchronously without using Service
Broker to show how long it takes to execute.
To run an expensive procedure without Service Broker support, your instructor will perform the
following steps:
1. Right-click the QuarterlySalesReview project, and then click Set as StartUp Project.
2. To run the project, press F5. The Quarterly Sales Review window opens.
3. Ensure that the Use SQL Service Broker check box is cleared.
4. Click Start.
5. When the Total Requests counter reaches 1, click Stop.


Task 3: Running Expensive Procedures with Service Broker Support


Task Overview
In this task, an expensive operation needs to be executed on the data. This expensive operation
takes a lot of time to execute: it fetches data by using a cursor, and for each row, a transaction
updates data in two different tables.
To improve application scalability, the sample application will push the requests into a Service
Broker queue to execute this expensive operation asynchronously.
To run an expensive procedure with Service Broker support, your instructor will perform the
following steps:
1. In the Quarterly Sales Review window, select the Use SQL Service Broker check box.
2. Click Start.
3. When the Elapsed Time counter reaches 15, click Stop.
4. Close the application.
5. Open SQL Server Management Studio.
6. Open a new query window, and then type the following code:
USE SALESDB
GO
-- Retrieve (and remove) all pending messages from the Service Broker queue Q.
RECEIVE * FROM Q
GO

7. To execute the code, press F5.


8. Notice that the output returns all the messages from the queue.

Task 4: Working with Service Broker Externally—.NET Framework


Samples
Task Overview
This is an optional task and will be demonstrated only if time permits.
Service Broker is enabled to work not only with SQL Server and T-SQL code, but also with
external applications. In this task, your instructor will review a sample application that is available
with the SQL Server installation.
To work with Service Broker externally using Microsoft .NET Framework samples, your instructor
will perform the following steps:
1. Browse to C:\Program Files\Microsoft SQL
Server\90\Samples\Engine\ServiceBroker\ServiceBrokerInterface\cs, and then open the
ServiceBrokerInterface.sln solution.
2. Review the code in the Service.cs file.
3. Close Visual Studio 2005.


Guidelines for Scaling Applications by Using Service Broker


• Use Service Broker activation to:
  • Call stored procedures asynchronously.
  • Improve application response time.
• Use Service Broker routing for distributing messages to different computers without affecting the
application code.
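Activation can be sketched as follows; the queue and procedure names are assumptions, and dbo.ProcessSalesMessage is presumed to exist and to issue RECEIVE against the queue:

```sql
-- A queue with internal activation: when messages arrive, SQL Server starts
-- up to four instances of the activated procedure to drain them
-- asynchronously, decoupling the caller from the expensive work.
CREATE QUEUE dbo.SalesQueue
    WITH STATUS = ON,
    ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.ProcessSalesMessage,
        MAX_QUEUE_READERS = 4,
        EXECUTE AS SELF );
```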

Additional Information
For a complete introduction to SQL Server Service Broker, see “An Introduction to SQL
Server Service Broker” at http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dnsql90/html/sqlsvcbroker.asp.


Demonstration: Using Analysis Services as an Alternative Data Source


Introduction
In some cases, OLTP database applications need to present complex reports by aggregating data
from multiple sources and making multidimensional calculations. SQL Server Analysis Services
provides tools to design, implement, and maintain such application types and can be fully integrated
with OLTP applications.
SQL Server Analysis Services provides a platform on which to build, administer, and deploy
business intelligence solutions. It also provides online analytical processing (OLAP) and data-
mining functionality.
SQL Server Reporting Services is a platform on which to build, administer, and deploy reporting
and data-analysis solutions that can be consumed from .NET Framework applications and by
applications for other platforms.
By combining the power of these two SQL Server services, regular OLTP applications can choose
from various available options when implementing complex data aggregation and reporting.

Demonstration Overview
In this demonstration, your instructor will show a report generated against relational data and will
then show the same report generated against an OLAP cube. Your instructor will also explain some
of the advantages of using OLAP in certain scenarios, especially when working with
multidimensional or aggregated data.


Task 1: Setting Up the Execution Environment


Task Overview
This task configures the execution environment.
To configure the execution environment, your instructor will perform the following steps:
1. To ensure that Analysis Services is running, open SQL Server Configuration Manager.
2. In the SQL Server Configuration Manager window, in the right pane, locate the SQL Server
Analysis Services icon. If the icon displays a green triangle, the service is running, and you can
proceed to step 4.
3. If the SQL Server Analysis Services icon displays a red box, right-click the icon, and then on the
shortcut menu, click Start. SQL Server Analysis Services starts, and the icon displays a green
triangle.
4. Browse to C:\Program Files\Microsoft SQL Server\90\Samples\Analysis
Services\Tutorials\Lesson7\, and then open the Analysis Services Tutorial.sln solution.
5. Change the deployment server name. Select Project, Properties, Deployment, Server, and then
change the value from localhost to MIA-SQL\SQLINST1.
6. In the Data Sources folder, right-click Adventure Works DW.ds, and then on the shortcut
menu, click View Designer.
7. In the Data Source Designer window, click Edit.
8. In the Connection Manager window, ensure that Server Name is set to MIA-SQL\SQLINST1.
9. To close the Connection Manager window, click OK.
10. To close the Data Source Designer window, click OK.
11. On the Database menu, click Process.
12. If you are prompted to choose to build and deploy the project, in the message box, click Yes.
The Process Database dialog box opens.
13. Click Run.
14. The Process Progress window shows the status of the processing in the OLAP database. When
the processing is completed, click Close in both the Process Progress window and the Process
Database dialog box.
15. Close Visual Studio.

Task 2: Reporting with Relational Data


Task Overview
In this task, your instructor will review a report generated with Reporting Services using data from
a relational database. Your instructor will show and briefly explain the Reporting Services
development environment.
To report with relational data, your instructor will perform the following steps:
1. Start Visual Studio 2005.
2. Browse to D:\Democode\Section03\, and then open the ReportingOLAP.sln solution file.

3. In the Reports folder, double-click the Relational Report.rdl file.

4. To see the report, click the Preview tab.


5. To see the information by fiscal year quarter and review the report data, click the plus sign (+).
6. Move to the next page by clicking the right arrow (►) on the toolbar of the Report Preview
page.
7. To regenerate the report, change the category to a different product (for example, Bikes), and
then click View Report.
8. To see the data sources, click the Data tab.
9. Review the T-SQL query. If the T-SQL query is not visible, on the toolbar, click the SQL button
to show the T-SQL pane.
10. To execute the query and see the results, click the ! button on the toolbar.
11. In the Query Parameters window, set the value column to –1 (minus 1), and then click OK.
12. Expand the Results pane, and then review the results.

Task 3: Reporting with OLAP Data


Task Overview
In this task, your instructor will review a report generated with Reporting Services using data from
an OLAP database. Your instructor will show and briefly explain the Reporting Services
development environment and some of the differences between OLAP cubes and relational data.
To report with OLAP data, your instructor will perform the following steps:
1. In the Reports folder, double-click the OLAP Report.rdl file.
2. To view the report, click the Preview tab.
3. To see the information by fiscal year quarter and review the report data, click the plus sign (+).
4. Move to the next page by clicking the right arrow (►) on the toolbar.
5. To regenerate the report, change the category to a different product (for example, select both
Bikes and Clothing), and then click View Report.
6. To see the data sources, click the Data tab.
7. Review the available Measures—for example, Internet Sales, Reseller Sales.
8. Review the available Dimensions—for example, Sales Territory, Product.
9. To review the MDX query, on the toolbar, click Design Mode.
10. To return to Design view, click Design Mode again.
11. To execute the query and see the results, on the toolbar, click the green right arrow (►).


Task 4: Creating a New OLAP Data Source


Task Overview
This is an optional task and will be performed only if time permits.
In this task, your instructor will show how to create a new data source from an OLAP database. The
focus of this task is on using designer tools to easily aggregate data and create complex data views
when using OLAP and Reporting Services.
To create a new OLAP data source, your instructor will perform the following steps:
1. In the Reports folder, double-click the OLAP Report.rdl file.
2. To see the data sources, click the Data tab.
3. In the Dataset list, click <New Dataset…>.
4. To accept the default values, in the Dataset window, click OK. Design view opens.
5. From the Metadata tree view on the left, drag the following items into the right pane.
• Measures: Total Product Cost
• Product: Product Name (drop this item to the left of the Total Product Cost column)
• Product: Financial: List Price
• Product: Stocking: Days to Manufacture
6. When you have finished, on the toolbar, click Delete the Selected Dataset.
7. In the confirmation dialog box, click Yes.

Additional Information
For more information about Analysis Services, see Course 2074, Designing and Implementing
OLAP Solutions Using Microsoft SQL Server 2000
(http://www.microsoft.com/learning/syllabi/en-us/2074afinal.mspx) and Course 2093,
Implementing Business Logic with MDX in Microsoft SQL Server 2000
(http://www.microsoft.com/learning/syllabi/en-us/2093afinal.mspx).


Section 4: Scaling Database Applications by Using Service-Oriented Architecture


Introduction
Over the years, the database server’s role in system architecture has changed from being a complete
system in a monolithic architecture to being a part of a bigger, distributed, multi-tiered architecture.
Service-oriented architecture (SOA) is a new architecture model that proposes to expose internal
business processes as a set of services. The services are loosely coupled, allowing for easier
distribution, scalability, and maintainability. The number of requests to a service and the number of
transactions per second that must be supported are higher in a service-oriented architecture than in an
isolated application that does not share the architecture’s processes.
Service-oriented architecture is based on the distributed, multi-tier architectural model. Internally,
services might distribute the workload through multiple logical tiers that define and process the
business logic and rules.
This section focuses on how to improve middle-tier processing by using various middle-tier-
specific technologies.

Section Objectives
„ Describe the different ways to create middle-tier strategies to connect to SQL Server.
„ Evaluate the considerations for building a data access layer based on COM+ components.
„ Explain the guidelines for building Web services to provide data access.
„ Explain the guidelines for building Hypertext Transfer Protocol (HTTP) endpoints to scale up data
access in SQL Server 2005.
„ Explain how to move code between tiers to reduce the workload on a bottlenecked tier.


Overview of Creating Middle-Tier Strategies to Connect to SQL Server


Introduction
In a multi-tiered application, much of the processing is executed on the middle tier. You should
separate the different processing needs into different logical layers that together form the middle
tier. Further dividing the middle tier into separate logical services allows for distribution,
scalability, and maintainability.
Client applications can access the middle tier directly through various service interfaces. The same
middle tier can be reused by multiple client applications that require the same services provided by
the middle tier.
There are different strategies for implementing a middle tier by using Microsoft development
platforms. Application architects should decide which strategy to follow based on their
organization’s communication, security, scalability, maintenance, and administration needs.

Why Do Applications Need a Middle Tier?


Applications need a middle tier for any of the following reasons:
„ To provide a business rules–processing framework that is independent of the user interface
presentation and data storage.
„ To define business rules in a more expressive and powerful programming language such as
Microsoft Visual Basic® .NET or C#.
„ To change business rules without needing to redeploy client applications.
„ To distribute the processing load across multiple middle-tier servers and offload work from back-
end servers such as database servers.
„ To control and execute distributed transactional operations enforced across multiple data sources.
„ To take advantage of more flexible deployment strategies and to be able to configure each server
according to its specific processing needs.

Technologies for Building a Middle Tier


Following are some of the technologies used to build a middle tier:
„ Microsoft .NET Framework components
„ Microsoft ASP.NET Extensible Markup Language (XML) Web service
„ Microsoft .NET Framework Enterprise Services—COM+ exposed as a Web service
„ Microsoft Message Queuing (MSMQ) Server exposed as a Web service
„ Microsoft BizTalk® Server

Middle-Tier Components
The middle tier serves multiple needs in a distributed application, and it can be divided into
multiple subtiers to facilitate maintenance. Each subtier specializes in a particular logical task and
can be implemented by any number of components.
Following are the recommended subtiers in a middle tier:
„ Service interfaces
„ Business workflows
„ Business components
„ Business entities
„ Service agents

Middle-Tier Responsibilities
The middle tier is responsible for the following activities:
„ Exposes business logic as a service
„ Orchestrates, enforces, calculates, and processes business rules and tasks
„ Accesses multiple data sources to retrieve or save necessary data to process the business rules
„ Provides a data-source-independent representation of the data to be passed to calling client
applications

Additional Information
For more information about the architecture of .NET Framework applications, see the
Application Architecture for .NET: Designing Applications and Services Guide on the
Microsoft Patterns & Practices Web site at
http://msdn.microsoft.com/practices/guidetype/Guides/default.aspx?pull=/library/en-us/dnbda/html/distapp.asp.


Guidelines for Building a Data Access Layer Based on COM+ Components


Introduction
Included in Microsoft Windows, COM+ is a set of services used to build highly scalable middle-tier
components. Middle-tier components include the business logic and data access layers. The COM+
infrastructure provides several services that assist in building and deploying robust middle-tier
components. As a result, developers can focus on the business logic rather than on low-level
requirements when scaling the application.
However, application architects should carefully evaluate when to use COM+ components. COM+
relies on other Windows services such as the Distributed Transaction Coordinator (DTC) and
Message Queuing. Using COM+ results in applications that have specific implementation,
deployment, and administration requirements.

Advantages and Disadvantages of Building COM+ Components


Implementing COM+ components imposes restrictions and constraints on applications, such as a
different deployment model and a specific application programming interface (API).
You must consider the following advantages and disadvantages when implementing middle-tier
components as COM+ components.
Advantages
„ You can develop components as if a single application were using them. COM+
has the ability to manage threading, synchronization, activation, transactional activities, pooling,
security, and other underlying infrastructure services.
„ COM+ services are specifically designed to scale middle-tier components.
„ COM+ components follow a declarative programming model. The application declares which
services it requires, and COM+ provides the services automatically.


„ Activities can span multiple components distributed among multiple servers—for example, to
create distributed transactions.
„ COM+ provides a hosting environment for components and allows system administrators to
configure security, activation, memory consumption, transactions, and other such services from a
visual administrative console.
Disadvantages
„ You might experience a small decrease in performance when using COM+ interoperability
services from the .NET Framework managed code.
„ Different COM+ versions provide different services—for example, COM+ 1.0 supports only the
Serializable transaction isolation level.
„ COM+ components create a dependency between the application and several Windows services
such as COM+ itself, MSMQ, and the DTC.
„ Using COM+ imposes restrictions on the deployment model, communication protocols, and
administrative tasks. For example, it is difficult to consume services provided by COM+
components over the Internet because of the underlying remote procedure call (RPC)
infrastructure used by COM+.
„ The simplicity of the declarative programming model, combined with developers’ inadequate
knowledge of the inner workings of COM+, can lead to the provided services being used in
inappropriate scenarios.

Concurrency and Transactional Issues of COM+ Components


Following are some of the considerations for implementing transactional COM+ components:
„ COM+ components depend on the DTC. The transaction isolation level, transaction support level,
and transaction time-out value are configured declaratively, so these values cannot be changed
dynamically at run time.
„ DTC transactions are controlled at the component level. Therefore, all methods in a class will
execute with the same static transaction configuration. If different behaviors are needed for two
different methods, the component must be split into two.
„ DTC transactions are bound to resource managers that manipulate durable data. They do not apply
to in-memory operations or processes.
„ DTC transactions incur more overhead than other types of transactions, such as SQL transactions
or Microsoft ADO.NET local transactions. COM+ is not recommended when the application
communicates with a single data source.
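To illustrate the last bullet, the following sketch (table and column names are hypothetical) shows a local T-SQL transaction, which provides atomicity against a single data source without enlisting the DTC:

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    -- Both statements target the same database, so no distributed
    -- transaction coordinator is required.
    UPDATE Accounts SET Balance = Balance - 100 WHERE AccountID = 1;
    UPDATE Accounts SET Balance = Balance + 100 WHERE AccountID = 2;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back the local transaction on any error.
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
END CATCH;
```

The TRY…CATCH construct shown here is new in SQL Server 2005; the same unit of work under COM+ would pay the additional cost of a DTC-managed distributed transaction.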

Note: Microsoft .NET Framework 2.0 proposes a new model for handling transactions on the
middle tier and solves the issues imposed by COM+. New classes for transaction management
are declared inside the System.Transactions namespace.
System.Transactions proposes two types of transactional managers: Lightweight Transaction
Manager (LTM) and OleTx Transaction Manager (functionality formerly provided by the
DTC). LTM is used for local transactions, and OleTx is used for distributed transactions. The
component does not need to be registered in COM+ to take advantage of this feature.


Additional Information
For an overview of COM+, including its history and what it means for the middle tier, see
“COM+ Overview for Microsoft Visual Basic Programmers” at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dncomser/html/complus4vb.asp.
For an introduction to COM+ and the services it provides, see “Understanding Enterprise
Services (COM+) in .NET” at http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dndotnet/html/entserv.asp.
For more information about System.Transactions and the changes in the .NET Framework
2.0, see “Introducing System.Transactions in the .NET Framework 2.0” at
http://msdn.microsoft.com/library/default.asp?url=/library/en-
us/dndotnet/html/introsystemtransact.asp.


Guidelines for Building Web Services to Provide Data Access


Introduction
XML Web services expose the application logic to external clients and other services in a loosely
coupled manner.
If an XML Web service is designed based on the industry standards for communication, data
representation, and security, it can be consumed by applications running on different platforms and
written in different programming languages.
When exposing a middle tier through XML Web services, applications need to fine-tune, manage,
and administer the middle-tier resources carefully. This is because the application logic will be
available to external entities that the systems administrator might not be able to control or fine-tune.

Guidelines for Designing Web Services That Provide Data Access to Database Applications
„ Decide on the communication protocol, data schema, security requirements, and transactional
capabilities required by the service early in the design process.
„ Design services in a loosely coupled manner so that changes in the internal application
implementation do not require changes in the exposed service.
„ Analyze the access patterns and data needs for all clients that communicate with the service. In
some cases, applications should publish multiple services for different audiences.
„ Design the service for maximum interoperability with other platforms and services. Whenever
possible, the service must rely on industry standards for communication, security, and data and
message formats. XML-based data representations such as Web Services Description Language
(WSDL), Extensible Schema Definition (XSD), and Simple Object Access Protocol (SOAP) are
examples of such standards.
„ A business entities layer is a logical representation of the data independent of its source and
physical storage. Design business entities to be serializable and expose them as a part of the
service messages.

„ Regularly monitor the amount of data being sent to and from the service. Serializing and
deserializing XML data can be an expensive process.
„ Identify and favor processes that can be executed asynchronously.
„ Keep the transaction scope inside the service. XML Web services do not support distributed
transactions outside the service boundary.

Technologies for Implementing Web Services on the Middle Tier


Following are some of the technologies for implementing Web services on the middle tier:
„ ASP.NET Web services
„ COM+ Web services
„ Microsoft BizTalk Server
„ SQLXML as an Internet Server API (ISAPI), Internet Information Services (IIS)–based
application


Guidelines for Building HTTP Endpoints to Scale Up Data Access to SQL Server 2005


Introduction
Because earlier versions of SQL Server supported only the proprietary Tabular Data Stream (TDS)
communication protocol, client applications had to use a data access provider to create the TDS
packages to communicate with SQL Server.
SQL Server 2005 extends support to other communication protocols, such as HTTP, to allow client
applications to connect without using a data access provider.
When you create an HTTP endpoint, SQL Server starts listening on a specific communication port
for messages that are translated into the execution of registered stored procedures or user-defined
scalar functions.
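The following sketch shows the general shape of an HTTP endpoint definition in SQL Server 2005 (the endpoint name, path, site, and the stored procedure being exposed are hypothetical):

```sql
CREATE ENDPOINT GetProductsEndpoint
    STATE = STARTED            -- endpoints are created in a stopped state by default
AS HTTP
(
    PATH = '/sql/products',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'MIA-SQL'
)
FOR SOAP
(
    -- Expose an existing stored procedure as a SOAP Web method.
    WEBMETHOD 'GetProducts'
        (NAME = 'AdventureWorks.dbo.uspGetProducts'),
    WSDL = DEFAULT,
    DATABASE = 'AdventureWorks',
    BATCHES = DISABLED         -- disallow ad hoc T-SQL batches through the endpoint
);
```

Clients can then retrieve the WSDL from the endpoint URL and call the Web method without installing a SQL Server data access provider.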

Guidelines for Building HTTP Endpoints


„ Use HTTP endpoints when client applications are on an extranet or when applications in
heterogeneous platforms need to connect to SQL Server.
„ Use Transmission Control Protocol (TCP) endpoints when client applications are on an intranet.
„ Expose SQL Server through HTTP endpoints only in controlled environments and not in heavily
loaded environments (highly concurrent access with short duration transactions). For heavily
loaded environments, implement a middle tier with ASP.NET Web services.
„ Use SQL Server behind a firewall for protection against hackers, especially when exposing SQL
Server directly to the Internet.
„ Carefully design the method signatures for stored procedures and user-defined functions that will
be exposed as Web methods. Versioning should be done carefully so that the implementation by
Web service consumers is not broken.
„ SQL Server endpoints are secure by default, which means that they are in a stopped state when
created. Database administrators should stop an endpoint when updating it to maintain versions.

„ Manage endpoints to limit connect permissions to specific users or groups that are going to be
consuming the endpoint. Avoid granting access to the public role.
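The last guideline can be sketched as follows (the endpoint and Windows group names are hypothetical):

```sql
USE master;

-- Grant CONNECT only to the group that will consume the endpoint,
-- rather than to the public role.
GRANT CONNECT ON ENDPOINT::GetProductsEndpoint
    TO [ADVENTUREWORKS\ReportingUsers];
```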

Additional Information
For more guidelines on using HTTP endpoints, see the topic “Best Practices for Using Native
XML Web Services” in SQL Server Books Online at
http://www.microsoft.com/technet/prodtechnol/sql/2005/downloads/books.mspx.


Demonstration: Moving Code Between Tiers


Introduction
Component-based programming allows encapsulation of the programming logic in reusable units of
code. This code can be used at any layer or tier in the application and can even be reused by
multiple applications. Software architects design the distribution and location of reusable units
around the physical boundaries of the application.
The location of the code is decided based on the following factors:
„ Security restrictions
„ Class access modifiers
„ Transactional requirements
„ Communication requirements
„ Deployment issues
„ Overall physical configuration and distribution of servers

In some scenarios, moving code between tiers can improve application scalability by reducing the
workload on a heavily used application tier or feature.
In this demonstration, your instructor will show how to move code between tiers to reduce the
workload on a bottlenecked tier.

Demonstration Overview
In this demonstration, your instructor will show a simple example of a business rule that is checked
initially in the database server, moved to the data access layer, moved to an XML Web service, and
finally moved to the client application.


Task 1: Setting Up the Execution Environment


Task Overview
This task configures the execution environment.
To configure the execution environment, your instructor will perform the following steps:
1. Start Visual Studio 2005.
2. Browse to D:\Democode\Section04\, and then open the MovingCode.sln solution.
3. In the ResetDatabase project, in the Create Scripts folder, double-click the
RegisterAssembly.sql file.
4. Identify the code line with SET @ASSEMBLY_PATH.
5. Verify that the value set to the @ASSEMBLY_PATH variable is the correct file path to the
ServerSide.dll file.
6. In the ResetDatabase project, in the Create Scripts folder, right-click the
RegisterAssembly.sql file, and then click Run On.
7. If prompted to create a database reference, use the MIA-SQL\SQLINST1 server instance to
create a new database reference and connect to the AdventureWorks database.
8. Right-click the WebService project, and then click Add Reference.
9. In the Add Reference dialog box, click the Projects tab, select BusinessRule in the list, and
then click OK.
10. Right-click the WebService project, and then click View In Browser.
11. In the browser’s Directory Listing window, click BusinessRuleService.asmx.
12. Copy the URL in the Address bar.
13. Return to Visual Studio 2005.
14. In the ClientApplication project, open the App.Config file.
15. To update the URL, paste the URL copied earlier in the <value> node.

Task 2: Executing the Same Business Rule at Different Application Tiers
Task Overview
In this task, you will execute an existing application that calls a business component that calculates
a complex operation. The application is built as a multi-tiered application with a presentation layer,
an XML Web service, a data access layer, and a user-defined function defined inside the database
server. The business component can be called from any of these application tiers.
To execute the same business rule at different application tiers, your instructor will perform the
following steps:
1. In Visual Studio 2005, in Solution Explorer, right-click the ClientApplication project, and then
click Set as StartUp Project.
2. To execute the application, press F5.


3. In the Numbers box, type the following numbers:


67, 542, 12, 78, 953, 11, 52, 8, 100.
4. In the Position box, type 3.
5. Ensure that the Client App option is selected, and then click Execute.
6. A message box displays the number 12. Close the message box.
7. In the Position box, change the number to 9.
8. Click Execute.
9. A message box displays the number 953. Close the message box.
10. Select the Web Service option, and then click Execute.
11. A message box displays the number 953. Close the message box.
12. Select the Data Layer option, and then click Execute.
13. A message box displays the number 953. Close the message box.
14. Select the Database option, and then click Execute.
15. A message box displays the number 953. Close the message box.
16. Close the sample application.


Section 5: Improving Availability and Scalability by Scaling Out Front-End Systems


Section Overview
Scaling the physical infrastructure of a distributed system usually involves two techniques: scaling
up and scaling out.
Scaling up involves increasing the resources on the current hardware—for example, by increasing
the amount of memory, adding more processors, and upgrading the hard disks.
Scaling out can be applied to any layer of a database application as follows:
„ Presentation layer, by configuring a Web farm for an ASP.NET Web application
„ Middle-tier, business layer, by configuring multiple middle-tier servers in a Windows cluster
„ Data layer, by using the various options available for scaling out SQL Server

Changes made to a single layer will have a direct impact on the other layers. Therefore, scaling out
should be a system-wide improvement.
This section focuses on scaling out front-end systems for Windows-based applications and Web
applications to improve system response time and availability.

Section Objectives
„ Explain the types of applications for which scaling out is appropriate.
„ Explain the considerations for improving the availability and scalability of Windows-based
applications.
„ Explain the guidelines for scaling out Web front-end systems.
„ Explain the guidelines for implementing data access in Web farms.


Discussion: Scaling Out Front-End Systems


Introduction
Not all applications can be scaled out, and scaling out does not always provide benefits.
Scaling out increases the complexity of system management, development, resource management,
networking, security, and fine-tuning.
Scaling out a system involves adding more servers and distributing the workload among them. The
operating system, application services, resource management, and all other control systems (such as
network interface controllers, multiple processor controllers, and disk controllers) should be aware
of this physical architecture to provide the expected benefits.
Scaling out a single tier will have an impact on system performance. Improper scaling out of a
single tier will create an imbalance between all of the tiers, add overhead, and decrease the effects
of scaling out.

Discussion Questions
1. Is scaling out front-end systems relevant to Web applications or Windows-based applications?

2. Would you solve your scalability needs without scaling out the back end?


3. Is there any potential synchronization problem to consider when scaling out front-end systems?

4. Will Service Broker be helpful in scaling out database applications?

5. Would you implement solutions based on MSMQ instead of Service Broker?


Considerations for Improving Availability and Scalability of Windows-Based Applications


Introduction
Windows-based applications can be designed according to the following architectural models:
monolithic, two-tier (client/server), N-tier (which includes three-tier), and/or service-oriented.
Some of these architecture models require the physical distribution of the application components
to allow reuse of such components and to support larger working payloads. In these cases, the
presentation layer is deployed on the client machines and communicates with the remotely
deployed middle-tier components.
Following are the benefits of developing distributed Windows-based applications:
„ Take advantage of local computer resources.
„ Provide a better end-user experience.
„ Cache data locally.
„ Connect to multiple middle-tier components dynamically or to multiple data sources such as
remote Web Services.
„ Take advantage of local disk storage in disconnected environments. For example, you can
continue working on a laptop computer even when there is a network failure.

This topic focuses on the various considerations for improving the availability and scalability of
Windows-based applications.


Caching Data on the Client Side


Following are the benefits of caching data on the client side:
• The number of calls to the back end is reduced.
• More data is found locally, so response time decreases.
• Static and read-only data can be kept in memory or in local storage for later reuse by the application or to support disconnected applications.
• Caching can be implemented in memory or on disk for durability. For example, by using SQL Server 2005 Express, data can be stored on disk to survive application and system restarts.
• User activity can be saved and resumed later.
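The read-through caching pattern described above can be sketched in a few lines of Python (used here only as a language-neutral illustration; in a real Windows-based application, the durable store would be a local SQL Server 2005 Express database rather than the SQLite stand-in, and `fetch_from_backend` is a hypothetical remote call):

```python
import sqlite3

class ClientCache:
    """Read-through cache sketch: memory first, local disk second, back end last."""

    def __init__(self, path=":memory:"):
        self.memory = {}                      # fast in-process cache
        self.disk = sqlite3.connect(path)     # durable local store (stand-in for a local database)
        self.disk.execute("CREATE TABLE IF NOT EXISTS cache (k TEXT PRIMARY KEY, v TEXT)")
        self.backend_calls = 0                # instrumentation: how often we hit the back end

    def fetch_from_backend(self, key):
        self.backend_calls += 1               # hypothetical remote call to the database server
        return "value-for-" + key

    def get(self, key):
        if key in self.memory:                # 1. in-memory hit
            return self.memory[key]
        row = self.disk.execute("SELECT v FROM cache WHERE k = ?", (key,)).fetchone()
        if row:                               # 2. durable local hit (survives restarts)
            self.memory[key] = row[0]
            return row[0]
        value = self.fetch_from_backend(key)  # 3. miss: call the back end once
        self.memory[key] = value
        self.disk.execute("INSERT OR REPLACE INTO cache (k, v) VALUES (?, ?)", (key, value))
        return value

cache = ClientCache()
for _ in range(5):
    cache.get("customer:42")                  # only the first call reaches the back end
```

Repeated requests for the same key are served locally, so only the first of the five calls above reaches the back end.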

Using Asynchronous Operations to Improve Responsiveness


Multithreaded Windows-based applications keep the user interface responsive while executing operations in the background. As a result, the application remains responsive to user interaction while waiting for results, and the user can execute multiple tasks in parallel.
Another way of executing multiple tasks in parallel is to use asynchronous messaging such as SQL Server Service Broker or MSMQ. When a message is sent to a queue, the main thread does not wait for the results and continues to execute. The request remains in the queue until it can be processed.
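A minimal sketch of this pattern, using a Python in-process queue as a stand-in for a Service Broker or MSMQ queue (all names here are illustrative):

```python
import queue
import threading

# Stand-in for a message queue (Service Broker or MSMQ): requests are queued
# and a background worker processes them while the main thread stays responsive.
work_queue = queue.Queue()
results = []

def worker():
    while True:
        request = work_queue.get()
        if request is None:                      # sentinel: shut the worker down
            break
        results.append("processed:" + request)   # simulate slow back-end work
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# The "UI" thread enqueues requests and returns immediately, without blocking.
for i in range(3):
    work_queue.put("request-%d" % i)

work_queue.join()                                # wait only when we actually need the results
work_queue.put(None)
t.join()
```

The enqueueing thread never blocks on the work itself; it only waits, optionally, for the results it actually needs.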

Improving Response Time by Decoupling the Front End from the Back End as Much as Possible
Decoupling involves lowering the dependency level between the front end and the back end. Consider the following dependencies:

• Data representation: The front end should know only about the internal logical data representation, known as Business Entities. Business Entities should be designed to model data independent of storage options and physical distribution.
• Distributed transactions: User interface actions should not be enlisted inside distributed transactions, because this creates long-running transactions. Such transactions keep locks for a long time and reduce database concurrency.
• Data sources: The front end should not know how the data is stored physically or which data sources exist. An abstraction layer should be built between the front end and the back end so that the back end can be modified without affecting the front end.
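The Business Entities idea can be illustrated with a small sketch, assuming a hypothetical Customer entity and store interface; the front-end function depends only on the abstraction, so the storage back end can be swapped without touching the front end:

```python
class Customer:
    """Business Entity: models data independent of storage or physical distribution."""
    def __init__(self, customer_id, name):
        self.customer_id = customer_id
        self.name = name

class InMemoryCustomerStore:
    """One possible back end; a database-backed store would expose the same interface."""
    def __init__(self):
        self._rows = {1: "Contoso"}
    def get_customer(self, customer_id):
        return Customer(customer_id, self._rows[customer_id])

def render_customer(store):
    """Front-end code: depends only on the entity and the store interface,
    never on how or where the data is physically kept."""
    customer = store.get_customer(1)
    return "Customer #%d: %s" % (customer.customer_id, customer.name)

label = render_customer(InMemoryCustomerStore())
```

Replacing `InMemoryCustomerStore` with a database-backed implementation of the same interface would leave `render_customer` unchanged, which is the point of the abstraction layer.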

Designing Applications to Connect to Alternative Servers


Following are the two main reasons for designing applications to connect to alternative servers:
• When a distributed application is designed to improve availability, multiple back-end servers can be available to the application to support failover.
• Multiple servers on the back end can be configured so that connecting applications see them as one entity, effectively load-balancing the requests between them.
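The failover case can be sketched as follows, with hypothetical server names and a `connect` function standing in for opening a real database connection:

```python
# Hypothetical server list: primary first, then alternates. connect() is a
# stand-in for opening a database connection.
SERVERS = ["sql-primary", "sql-mirror", "sql-standby"]
DOWN = {"sql-primary"}          # simulate a failed primary

class ServerDown(Exception):
    pass

def connect(server):
    if server in DOWN:
        raise ServerDown(server)
    return "connection-to-" + server

def connect_with_failover(servers):
    last_error = None
    for server in servers:      # try each server in priority order
        try:
            return connect(server)
        except ServerDown as e:
            last_error = e      # remember the failure, try the next alternate
    raise last_error            # all servers down: surface the last error

conn = connect_with_failover(SERVERS)
```

Because the primary is simulated as down, the helper transparently falls back to the first healthy alternate.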


Storing Changes Locally in Case of Disconnection from the Database Server
Keep in mind the following considerations when working in a disconnected environment:
• Disconnected applications must rely on local storage for data and for the user's process state.
• It is a recommended practice to indicate visually whether the application is currently connected or disconnected.
• Applications can store data as files on the file system, as messages in a local MSMQ queue, or in a local SQL Server 2005 Express database.
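One way to sketch the local-change-buffering idea (the class and method names are hypothetical; in practice the pending list would live in files, a local MSMQ queue, or a local SQL Server 2005 Express database):

```python
class DisconnectedClient:
    """Sketch: buffer changes locally while offline, replay them on reconnect."""
    def __init__(self):
        self.connected = False
        self.pending = []       # local store (could be files, MSMQ, or a local database)
        self.server_rows = []   # stand-in for the remote database

    def save(self, change):
        if self.connected:
            self.server_rows.append(change)
        else:
            self.pending.append(change)   # queue the change locally

    def reconnect(self):
        self.connected = True
        while self.pending:               # replay queued changes in order
            self.server_rows.append(self.pending.pop(0))

client = DisconnectedClient()
client.save("UPDATE order 17")            # stored locally: we are offline
client.save("INSERT order 18")
client.reconnect()                        # queued changes flow to the server
```

Replaying in order preserves the sequence of the user's work; a production design would also need conflict detection, which is omitted here.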


Guidelines for Scaling Out Web Front-End Systems


Introduction
Web applications provide an easier administration and deployment model than that provided by
Windows-based applications. The application does not need to be installed on the client machine,
but the client will navigate to the server on which the application is hosted by using a browser.
Various servers, frameworks, and services interact with each other to serve a single Web request.
For example, IIS receives the request and passes it on to the ASP.NET runtime framework, and the
Web pages with custom code then execute the request and might also execute remote calls to
middle-tier servers.
Although database developers must have adequate knowledge of the various platforms and servers
for scaling out Web front-end systems, the ability of an application to scale out depends more on
the application architecture than on the underlying infrastructure.
In this topic, you will learn about the guidelines to apply for improving the availability and
scalability of Web front-end systems.

Use Network Load Balancing


Network load balancing (NLB) is a clustering technology that dynamically distributes the workload
of applications such as Web front-end systems.
Note that for stateful applications, you should consider using Windows Server Cluster solutions.
A single NLB cluster can have up to 32 servers and can be scaled beyond this limit by using a
round-robin Domain Name System (DNS) between multiple NLB clusters.
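The two-level scheme (round-robin DNS in front of multiple NLB clusters) can be sketched as follows; the host names are hypothetical, and `itertools.cycle` stands in for both the DNS rotation and each cluster's internal balancing:

```python
import itertools

# Hypothetical two-cluster layout: round-robin DNS picks a cluster, and each
# NLB cluster (up to 32 nodes) balances across its own members.
clusters = [
    ["web01", "web02"],        # NLB cluster A
    ["web03", "web04"],        # NLB cluster B
]
dns_round_robin = itertools.cycle(range(len(clusters)))
node_pointers = [itertools.cycle(c) for c in clusters]

def route_request():
    cluster = next(dns_round_robin)        # DNS alternates between clusters
    return next(node_pointers[cluster])    # the cluster balances across its nodes

served = [route_request() for _ in range(8)]
```

With both levels rotating, eight requests land evenly, two per node, which is the behavior the combined DNS-plus-NLB layout is meant to approximate.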


Check for Server Affinity in the Application Code


Server affinity occurs when a request must be handled by a particular server even though the application runs in an NLB environment. Developers must avoid implementations that create server affinity, such as local data caches, ASP.NET in-process sessions, state saved to a local disk, and any other mechanism that keeps data on a single server.
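A small sketch of why in-process session state creates affinity, contrasting a per-server store with a shared external one (in ASP.NET terms, roughly in-process sessions versus a state server or SQL Server session store; the request routing here is simulated):

```python
# Two "servers" behind a load balancer. Per-server (in-process) session state
# breaks when the balancer sends the next request elsewhere; a shared external
# store does not.
local_sessions = [{}, {}]      # one in-process dict per server: creates affinity
shared_sessions = {}           # single external store visible to every server

def handle_request(server_index, user, use_shared):
    store = shared_sessions if use_shared else local_sessions[server_index]
    store.setdefault(user, 0)
    store[user] += 1           # e.g. count items in the user's cart
    return store[user]

# Same user, two requests load-balanced to different servers:
handle_request(0, "alice", use_shared=False)
broken = handle_request(1, "alice", use_shared=False)   # state was left on server 0

handle_request(0, "bob", use_shared=True)
correct = handle_request(1, "bob", use_shared=True)     # state follows the user
```

With per-server state, the second request starts from scratch because the first request's data stayed on server 0; with the shared store, the count carries across servers.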

Deploy All Logical Application Layers on Each Server


Deploy all of the application layers (presentation, business processing, and data access layers) on each of the servers within the NLB cluster. Also avoid remote method calls and roundtrips to remote middle-tier servers whenever possible.
Additional Information
For more information about clustering technologies on the Windows platform and NLB, see:
• Microsoft Windows Server 2003 Clustering Services home page at http://www.microsoft.com/windowsserver2003/technologies/clustering/default.mspx.
• Network Load Balancing: Configuration Best Practices for Windows 2000 and Windows Server 2003 at http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/clustering/nlbbp.mspx.
• Network Load Balancing: Frequently Asked Questions for Windows 2000 and Windows Server 2003 at http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/clustering/nlbfaq.mspx.
For more information about scaling ASP.NET Web applications, see Chapter 6, “Improving ASP.NET Performance,” in the Improving .NET Application Performance and Scalability Guide on the Microsoft Patterns & Practices Web site at http://msdn.microsoft.com/practices/guidetype/Guides/default.aspx?pull=/library/en-us/dnpag/html/scalenet.asp.


Guidelines for Implementing Data Access Layers in Web Farms


Introduction
When scaling out a Web application, requests will be distributed across multiple servers configured
in an NLB cluster.
It is important to provide an optimal data access architecture for a database application. Application
architects must decide how to deploy the application components into the distributed physical
architecture.
This topic focuses on some of the guidelines that you must follow when implementing data access
layers in Web farms.

Guidelines for Implementing Data Access Layers in Web Farms


Consider the following guidelines when implementing data access layers in Web farms.
• Avoid remote method calls: Whenever possible, each Web farm member should have its own data access layer. By avoiding remote method calls, you avoid expensive network trips, data serialization, and network latency.
• Develop the data access layer as stateless components: Stateless components support both scale-up and scale-out strategies. They avoid server affinity and optimize memory management by using only short-lived objects.
• Deploy SQL Server in a clustered infrastructure: Instead of managing multiple database servers (for example, one database for each node in the Web farm), implement a scalable database architecture by scaling out the database installation into a database cluster. The cluster is seen as a single entity by calling clients.
• Consider data partitioning only when the data permits it: Do not partition the data across multiple servers for scalability reasons alone if the data is not congruent with the partitioning model or does not clearly contain a partition key. Otherwise, the data will be difficult to administer and maintain, and applications might not benefit from data partitioning.
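The partition-key point can be illustrated with a minimal hash-partitioning sketch (the shard names are hypothetical); deterministic routing works only because `customer_id` is an unambiguous partition key:

```python
# Hash partitioning sketch: feasible only because customer_id is a clear
# partition key. Without such a key, routing and maintenance become difficult,
# which is why partitioning should not be adopted for scalability reasons alone.
PARTITIONS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def partition_for(customer_id):
    return PARTITIONS[customer_id % len(PARTITIONS)]   # deterministic routing

placements = {cid: partition_for(cid) for cid in range(6)}
```

The same customer always maps to the same shard, so every layer of the application can locate the data without consulting a central directory.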

Discussion: Session Summary


Introduction
This session focused on how to assess scalability needs and design the best architecture to scale
your system to meet the needs and expectations of users. You learned how to identify when to scale
database applications and what layer to scale. You also learned how to select the appropriate
technology to avoid concurrency problems and improve application performance. Finally, you
learned how to evaluate whether scaling out or scaling up is appropriate for the scalability
requirements of your database system.

Discussion Questions
1. What was most valuable to you in this session?

2. Have you changed your mind about anything based on this session?

3. Are you planning to do anything differently on the job based on what you learned in this session?
If so, what?


Clinic Evaluation


Your evaluation of this course will help Microsoft understand the quality of your learning
experience.
Please work with your training provider to access the clinic evaluation form.
Microsoft will keep your answers to this survey private and confidential and will use your
responses to improve your future learning experience. Your open and honest feedback is very
valuable.
