
Copyright 2009 MeasureUp.

All Rights Reserved

PRO: Designing, Optimizing, and Maintaining a Database Administrative Solution
Using Microsoft SQL Server 2008
SumITUp
A Complete Summary for
Our 70-450 Practice Test
SumITUp is a great summary recap of the objectives & material
covered on the exam. Use it in addition to or in concert with your
practice test as:
A bulleted overview of the exam scope and objectives before
you start your study to provide you with the big picture
objective by objective.
A checklist & review of topics covered within each objective to
ensure you have studied all the critical areas.
A tool you can print out for review on the go.
A rapid review tool for the day before you take the exam.

Designing a SQL Server Instance and a Database Solution
Design for CPU, memory, and storage capacity requirements
Hardware NUMA nodes are used to group processors and associated memory as a way of improving
performance and scalability
RAID 0 provides the best write performance, but no fault tolerance
RAID 1 is a disk mirroring configuration
- Best for transaction logs
RAID 5 is disk striping with parity
- Best for databases where rapid recoverability is required
Using row compression, because it is performed on a row-by-row basis, minimizes the impact that compression
has on write operations
Design SQL Server instances
The MAXDOP option determines how many processors can be used when generating a parallel execution plan
The min server memory and max server memory options control the minimum and maximum amount of memory
allocated to a SQL Server instance
When configuring a SQL Server 2008 instance to use a fixed amount of memory, you should set min server
memory and max server memory to the same value
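The memory and parallelism settings above are instance-level options set through sp_configure. A minimal sketch, assuming a dedicated instance being pinned to 4 GB (the values are illustrative):

```sql
-- Pin the instance to a fixed amount of memory by setting
-- min and max server memory to the same value (in MB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 4096;
EXEC sp_configure 'max server memory (MB)', 4096;
-- Limit parallel plans to 4 processors (MAXDOP).
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```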
Design physical database and object placement
Clustered indexes are always created on the same filegroup as the base table
- You can improve performance by placing nonclustered indexes on a filegroup located on a different
hard disk than the base table
When you specify the filestream attribute, the data is stored in a filestream filegroup on an NTFS disk partition
Design a migration, consolidation, and upgrade strategy
When you upgrade the passive node and then the primary node, it is known as a rolling upgrade and keeps
interruptions to database operations at a minimum
- After you upgrade the passive node, Setup will automatically fail over to that node when you begin to
upgrade the primary node
Designing a Database Server Security Solution
Design instance authentication
Windows authentication is inherently more secure than SQL Server authentication
- Microsoft strongly recommends using Windows authentication whenever possible
Mixed mode authentication supports both Windows and SQL Server authentication
Windows authentication cannot be used for users in an untrusted domain, so those users would need to connect
using SQL Server authentication
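The two authentication types correspond to two forms of CREATE LOGIN. A sketch, with illustrative names and password:

```sql
-- SQL Server authentication: required for users in an untrusted domain.
CREATE LOGIN AppUser WITH PASSWORD = 'Str0ng!Passw0rd';

-- Windows authentication: preferred wherever a trust relationship exists
-- (CONTOSO\jsmith is a hypothetical domain account).
CREATE LOGIN [CONTOSO\jsmith] FROM WINDOWS;
```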


Design instance-level security configurations
When installing a certificate on a failover cluster, you must install a certificate with the fully qualified domain
name of the virtual server representing the cluster
Design database, schema, and object security parameters
You must create a database master key before you can use the CREATE CERTIFICATE command to create a
self-signed certificate
The EXTERNAL_ACCESS permission set allows execution of managed code, access to local data, and access
to external resources
- It is the minimum required permission needed to run distributed transactions
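A sketch of both points: the master key must exist before a self-signed certificate can be created, and a CLR assembly needing local data plus external resources is cataloged with EXTERNAL_ACCESS. The password, certificate name, and assembly path are illustrative:

```sql
USE master;
-- The database master key must be created first.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'M@sterKeyP@ss1';
-- Now a self-signed certificate can be created.
CREATE CERTIFICATE ServerCert WITH SUBJECT = 'Server certificate';

-- EXTERNAL_ACCESS: managed code, local data, and external resources
-- (the minimum needed for distributed transactions).
CREATE ASSEMBLY MyAssembly
FROM 'C:\assemblies\MyAssembly.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS;
```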
Design a security policy and an audit plan
You can define policies on a SQL Server 2008 instance identified as the configuration server and have the
policies replicated to the remaining servers, referred to as configuration targets
- Because policies and policy changes are replicated only to configuration targets, you can remove a
server from the configuration targets if you no longer want the policies to apply, and then manually
remove the policies from that instance
When you attach a database with an audit specification from a different SQL Server 2008 instance, the database
audit specification specifies a GUID that does not exist on the destination instance
- This results in an orphaned audit
Design an encryption strategy
Data from a TDE-encrypted database is not automatically encrypted during replication
- To protect the distribution and subscriber databases, you must manually enable TDE on each of these
databases
When a database is encrypted using TDE, any backups created from the database are encrypted using the
same encryption key
- The encryption key must be available when restoring from backups
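Enabling TDE on a user database follows a short sequence, assuming a server certificate (such as the ServerCert above) already exists in master. The database name is illustrative:

```sql
USE SalesDB;  -- hypothetical database name
-- Create the database encryption key, protected by the server certificate.
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE ServerCert;
-- Turn on encryption; backups of SalesDB are now encrypted as well.
ALTER DATABASE SalesDB SET ENCRYPTION ON;
```

Back up the server certificate and its private key; without them, neither the database nor its backups can be restored.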
Designing a Database Solution for High Availability
Design a failover clustering solution
You must install MSCS on at least one node before you start SQL Server cluster instance installation
Design database mirroring
During manual failover, the principal and mirror change roles without loss of service
The ALTER DATABASE SAFETY option determines the operating mode
- Set to FULL, the mirror is operating in synchronous, or high-safety, mode
- Set the SAFETY option to OFF to switch to asynchronous, or high-performance, mode
- Only Forced Service failover is possible in high-performance mode
A witness is necessary when supporting automatic failover and when you need to ensure a quorum even if the
primary server is lost

A witness server is always required for automatic failover
A witness is not necessary for manual failover, but a configured witness does not prevent it: manual failover
remains possible as long as the mirror server and witness can establish a quorum
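The operating modes and failover types map to a handful of ALTER DATABASE statements. A sketch, using a hypothetical SalesDB:

```sql
-- Operating mode (run on the principal):
ALTER DATABASE SalesDB SET PARTNER SAFETY FULL; -- synchronous, high-safety
ALTER DATABASE SalesDB SET PARTNER SAFETY OFF;  -- asynchronous, high-performance

-- Manual failover (high-safety mode, run on the principal):
ALTER DATABASE SalesDB SET PARTNER FAILOVER;

-- Forced service (high-performance mode, run on the mirror;
-- possible data loss):
ALTER DATABASE SalesDB SET PARTNER FORCE_SERVICE_ALLOW_DATA_LOSS;
```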
Design a high-availability solution that is based on replication
Merge replication allows for updates to be made to any copy of the database and those updates propagated to
the publisher and all subscribers
FILESTREAM data over 2 GB will generate an error if it is replicated without the FILESTREAM attribute
Peer-to-peer replication is designed for situations where you have multiple copies of a database, updates can be
made at any copy of the database, and update latency must be minimized
Design a high-availability solution that is based on log shipping
Log shipping is implemented at the database level and supports shipping to multiple locations or consolidating
standby copies
sp_add_log_shipping_primary_database - adds the primary database
sp_add_jobschedule - creates the backup job schedule
sp_add_log_shipping_alert_job - adds the alert job
sp_add_log_shipping_secondary_primary - used to supply details about the primary server and database on the
secondary server
sp_add_jobschedule - adds copy and restore jobs for secondary database(s)
sp_add_log_shipping_secondary_database - adds secondary database to the configuration
sp_add_log_shipping_primary_secondary - adds information about a secondary database on the primary server
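A sketch of the first primary-side call from the list above; the database, share, and job names are illustrative, and many optional parameters are omitted:

```sql
EXEC sp_add_log_shipping_primary_database
    @database = N'SalesDB',
    @backup_directory = N'\\FileServer\LogShip',
    @backup_share = N'\\FileServer\LogShip',
    @backup_job_name = N'LSBackup_SalesDB',
    @backup_retention_period = 1440;  -- minutes to retain backup files
```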
Select high-availability technologies based on business requirements
Database mirroring
- Automatic failover
- Manual failover
- Transparent client redirect
- Employs standard servers
- Can operate synchronously
- Implemented at database level
- High-safety mode
Failover clustering
- Automatic failover
- Manual failover
- Transparent client redirect
- Implemented at server level
- No protection from disk failure - data on shared storage
Log shipping
- Can use in addition to or instead of database mirroring
- Allows user-specified delay
- Can ship a log to multiple secondary servers
- Warm standby
Replication
- Allows database filtering
- Allows multiple copies of database
- Scalable
Designing a Backup and Recovery Solution
Design a backup strategy
Point-in-time recovery is only supported by the full recovery model
A copy-only backup is a full database backup that does not impact the backup sequence used for normal
recovery
A differential backup backs up the changes that have occurred since the last full backup
The full and bulk-logged recovery models allow you to back up and restore a chain of transaction logs
The simple recovery model allows only full and differential backups
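The backup types above correspond to a few BACKUP variants. A sketch with illustrative paths and names:

```sql
-- Full backup: the base of the recovery sequence.
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\Sales_full.bak';

-- Differential: changes since the last full backup.
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\Sales_diff.bak'
    WITH DIFFERENTIAL;

-- Transaction log: full or bulk-logged recovery model only.
BACKUP LOG SalesDB TO DISK = N'D:\Backup\Sales_log.trn';

-- Copy-only: a full backup that does not disturb the backup sequence.
BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\Sales_copy.bak'
    WITH COPY_ONLY;
```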
Design a recovery strategy
When you execute Setup with the REBUILDDATABASE action, it rebuilds all system databases to their default
settings, including master
If a restoration fails, you can attempt to restore the backup using the CONTINUE_AFTER_ERROR option
To recover to a marked transaction, specify the WITH STOPATMARK option when recovering the transaction
logs
The WITH STANDBY option recovers the database for read-only access
The quickest way to restore one or two corrupt pages is to restore only those pages from backup
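Sketches of the restore options above; the mark name, file paths, and page ID are illustrative (the mark is set beforehand with BEGIN TRANSACTION ... WITH MARK):

```sql
-- Recover to a marked transaction.
RESTORE LOG SalesDB FROM DISK = N'D:\Backup\Sales_log.trn'
    WITH STOPATMARK = 'BeforeBatchUpdate';

-- Recover for read-only access with an undo file.
RESTORE DATABASE SalesDB FROM DISK = N'D:\Backup\Sales_full.bak'
    WITH STANDBY = N'D:\Backup\Sales_undo.dat';

-- Restore only a corrupt page (file_id:page_id).
RESTORE DATABASE SalesDB PAGE = '1:57'
    FROM DISK = N'D:\Backup\Sales_full.bak';
```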
Design a recovery test plan
When you restore a database with the RESTRICTED_USER option, connections to the database are limited to
members of the sysadmin and dbcreator server roles and the db_owner database role, with no limit on the
number of such connections
You can use DBCC CHECKDB to verify a database snapshot before reverting
The msdb.backupset system table stores detailed information about each backup, including the backup type, the
compression ratio, whether the database was detected as damaged, and the recovery fork information
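A sketch of querying that table for the most recent backups of a hypothetical database (type: D = full, I = differential, L = log):

```sql
SELECT TOP (10) backup_start_date, type, is_damaged, backup_size
FROM msdb.dbo.backupset
WHERE database_name = N'SalesDB'
ORDER BY backup_start_date DESC;
```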
Designing a Monitoring Strategy
Design a monitoring solution at the operating system level
A high value for Memory: Pages/sec can indicate that more memory is needed
A low value for SQL Server: Buffer Manager: Buffer Cache Hit Ratio can indicate the need for more memory

A high value for the Processor: % Privileged Time counter indicates that the operating system is using a large
percentage of processor time
The sys.dm_os_sys_info DMV allows you to retrieve information about operating system resources and how they
are consumed by SQL Server
SQL Server 2008 includes data collectors which query DMVs to obtain performance and resource usage data
and store the results in a Management Data Warehouse
SQL Server Profiler allows you to load a performance log and a trace and correlate the two
- Doing so can allow you to see which queries are executing when performance bottlenecks occur
Design a monitoring solution at the instance level
The sys.dm_exec_connections DMV returns information about current connections
The sys.dm_exec_requests DMV returns information about each request being serviced
The sys.dm_exec_sessions DMV allows you to retrieve a lot of information about current connections, including
the user name, the connection time, the CPU and memory resources consumed since the connection was
established, and the number of open transactions
Members of the dc_operator role can view data collectors, configure the frequency at which data collectors
upload data, configure the frequency of data collection events, and start or stop a collection set
The sys.database_files catalog view reports information about the files associated with a specific database,
including the status, the growth properties, and the physical file name of each file
An event notification can be configured to respond to trace events, including the DATA_FILE_AUTO_GROW
event
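The connection and request DMVs above join naturally on session_id. A sketch showing who is running what right now:

```sql
-- Current user sessions with any active request they are executing.
SELECT s.session_id, s.login_name, s.cpu_time, s.memory_usage,
       r.status, r.command, r.wait_type
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
WHERE s.is_user_process = 1;
```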
Design a solution to monitor performance and concurrency
SQL Server 2008 has enhanced the performance of the Database Engine Tuning Advisor by allowing it to offload
analysis to a test server
The Deadlock Graph event stores XML data when a deadlock must be resolved by SQL Server
When AUTO_UPDATE_STATISTICS_ASYNC is enabled on the database, statistics are automatically updated
in the background instead of being updated before compiling a query plan
sqlcmd -A -d master specifies that you want to open a Dedicated Administrator Connection (DAC) to master
The sys.dm_tran_locks DMV returns detailed information about locks requested and held, including the process
holding the lock, the type of lock, and the type of resource requesting the lock
The Tuning template includes all the columns the Database Engine Tuning Advisor needs to tune a database
based on a workload
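A sketch of querying sys.dm_tran_locks for the lock detail described above:

```sql
-- Locks currently requested or held, with the owning session,
-- the resource type, and whether the request was granted.
SELECT request_session_id, resource_type, resource_database_id,
       request_mode, request_status
FROM sys.dm_tran_locks
ORDER BY request_session_id;
```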
Designing a Strategy to Maintain and Manage Databases
Design a maintenance strategy for database servers
When moving data in partitioned tables, the non-clustered indexes on the source table and the destination table
must be identical
- You can exempt a non-clustered index from this requirement by disabling it
When moving data in partitioned tables, the clustered indexes on the source and destination tables must be
identical and cannot be disabled

The ALTER PARTITION FUNCTION statement can be used to split a partition into two partitions or to merge two
partitions into one
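A sketch of both operations on a hypothetical date-based partition function and scheme:

```sql
-- Designate the filegroup that will hold the new partition.
ALTER PARTITION SCHEME psOrderDate NEXT USED [PRIMARY];
-- Split one partition into two at a new boundary value.
ALTER PARTITION FUNCTION pfOrderDate() SPLIT RANGE ('2009-01-01');
-- Merge two partitions into one by removing a boundary value.
ALTER PARTITION FUNCTION pfOrderDate() MERGE RANGE ('2008-01-01');
```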
When an index has more than 30% fragmentation, you should rebuild it
When index fragmentation is between 5% and 30%, you should reorganize the index instead of rebuilding it
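A sketch of measuring fragmentation and applying those thresholds; the table and index names are illustrative:

```sql
-- Measure fragmentation for one table's indexes.
SELECT index_id, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'),
                                    NULL, NULL, 'LIMITED');

-- > 30% fragmentation: rebuild.
ALTER INDEX IX_Orders_Date ON dbo.Orders REBUILD;
-- 5-30% fragmentation: reorganize.
ALTER INDEX IX_Orders_Date ON dbo.Orders REORGANIZE;
```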
Design a solution to govern resources
When the query governor cost limit is set to a value greater than 0, queries will not be executed if their
estimated execution time exceeds that value
Resource Governor allows you to segregate queries belonging to different users into workloads to limit resource
usage for those workloads
The GROUP_MAX_REQUESTS option limits the number of simultaneous queries that can be executed by
members of a workload group
A classifier function classifies requests into the appropriate workload group
A resource pool is used to limit the amount of CPU and memory resources available to all queries executed by a
workload group
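The pieces above fit together as pool, workload group, and classifier. A sketch run in master, with all names and limits illustrative:

```sql
-- Resource pool: caps CPU for everything routed into it.
CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 25);
-- Workload group: at most 5 simultaneous requests, using the pool.
CREATE WORKLOAD GROUP ReportGroup
    WITH (GROUP_MAX_REQUESTS = 5) USING ReportPool;
GO
-- Classifier: routes a hypothetical reporting login into the group.
CREATE FUNCTION dbo.fnClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'ReportUser'
        RETURN N'ReportGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```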
Design policies by using Policy-Based Management
Policy-Based Management allows you to enforce various types of rules
You can evaluate a Policy-Based Management policy against multiple servers if you add those servers to a
server group
You can import or create policies to ensure that DDL statements, such as CREATE DATABASE and ALTER
DATABASE, do not violate company policy
Design a data compression strategy
The FILESTREAM data type is used to store unstructured data, such as a binary file, that is transactionally
consistent with related structured data
- You can compress FILESTREAM data by using Windows compression on the storage media
Page-level compression provides a higher compression ratio than row-level compression because row-level
compression is a subset of page-level compression
Compression will negatively impact the performance of a partition that needs to support a lot of updates
Design a management automation strategy
You can configure a SQL Server Agent job to launch a PowerShell script by selecting the SQL Server Agent
PowerShell subsystem when you specify the job step
A job step that executes a PowerShell script must execute in the PowerShell subsystem
A DDL trigger can be configured to execute in response to any DDL statement, including CREATE TABLE,
DROP TABLE, and ALTER TABLE
- You can perform actions within the DDL trigger, such as logging data to another table
You can automate Transact-SQL commands, such as bulk import commands and the UPDATE STATISTICS
command using a SQL Server Agent job
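A sketch of such a DDL trigger; the audit table dbo.DDLLog is hypothetical and must exist first:

```sql
-- Log every table-level DDL statement in this database.
CREATE TRIGGER trgAuditTables
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
BEGIN
    INSERT INTO dbo.DDLLog (EventTime, EventXml)
    VALUES (GETDATE(), EVENTDATA());  -- EVENTDATA() returns the event as XML
END;
```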

Designing a Strategy for Data Distribution
Administer SQL Server Integration Services (SSIS) packages
The dtutil command lets you export an SSIS package from one SQL Server instance and then import the
package onto another instance
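A sketch of the dtutil syntax for copying a package stored in msdb between instances; the package and server names are illustrative:

```
dtutil /SQL MyPackage /SOURCESERVER SrcServer /COPY SQL;MyPackage /DESTSERVER DestServer
```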
Design a strategy to use linked servers
sp_addlinkedsrvlogin maps logins on a local SQL Server instance to a security account on a linked server
Microsoft.Jet.OLEDB.4.0 is the Microsoft OLE DB Provider for Jet and can be used to link to a Jet database,
including Access databases, or to an Excel spreadsheet as a linked server
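A sketch of creating such a linked server and mapping logins to it; the server name, file path, and remote user are illustrative:

```sql
-- Link an Access (.mdb) database via the Jet provider.
EXEC sp_addlinkedserver
    @server = N'AccessLink',
    @srvproduct = N'Access',
    @provider = N'Microsoft.Jet.OLEDB.4.0',
    @datasrc = N'C:\Data\Sales.mdb';
-- Map all local logins to the remote Admin account.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'AccessLink',
    @useself = N'False',
    @locallogin = NULL,
    @rmtuser = N'Admin',
    @rmtpassword = NULL;
```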
Design a replication strategy for data distribution
Peer-to-peer transactional replication is the most appropriate choice when you have frequent updates from
subscribers that must have low latency and when the data cannot be partitioned
Peer-to-peer transactional replication has only rudimentary conflict detection support
Merge replication is appropriate when updates occur at subscribers, but it can only be used with data that can be
partitioned
Merge replication is the most appropriate choice for supporting mobile users because changes are tracked and
only the most recent change is replicated to the publisher during synchronization
Merge replication provides sophisticated conflict detection and resolution
Transactional replication with updating subscribers is the best choice when data distribution needs low latency
and the majority of updates are performed at the publisher
Peer-to-peer transactional replication does not offer the option of detecting conflicts in logical records or
resolving conflicts interactively
Acronyms
Acronym Definition
DDL Data Definition Language
DMV Dynamic Management View
MAXDOP Maximum degree of parallelism
MSCS Microsoft Cluster Service
NUMA Non-Uniform Memory Access
RAID Redundant Array of Independent Disks
TDE Transparent Data Encryption
