Student Manual
Education Services
Course #: L1-626.3
IBM Part #: Z251-1686-00
December 9, 2003
Copyright, Trademarks, Disclaimer of Warranties, and
Limitation of Liability
© Copyright IBM Corporation 2002, 2003.
IBM Software Group
One Rogers Street
Cambridge, MA 02142
IBM and the IBM logo are registered trademarks of International Business Machines Corporation.
The following are trademarks or registered trademarks of International Business Machines Corporation in the United States,
other countries, or both:
Answers OnLine, AIX, APPN, AS/400, BookMaster, C-ISAM, Client SDK, Cloudscape, DataBlade, DataJoiner, DataPropagator, DB2, DB2 Connect, DB2 Extenders, DB2 Universal Database, Distributed Database Connection Services, Distributed Relational Database Architecture, DPI, DRDA, Dynamic Scalable Architecture, Dynamic Server, Dynamic Server.2000, Dynamic Server with Advanced Decision Support Option, Dynamic Server with Extended Parallel Option, Dynamic Server with Universal Data Option, Dynamic Server with Web Integration Option, Dynamic Server Workgroup Edition, Enterprise Storage Server, FFST/2, Foundation.2000, Illustra, Informix, Informix 4GL, Informix Extended Parallel Server, Informix Internet Foundation.2000, Informix Red Brick Decision Server, J/Foundation, MaxConnect, MVS, MVS/ESA, Net.Data, NUMA-Q, ON-Bar, OnLine Dynamic Server, OS/2, OS/2 WARP, OS/390, OS/400, PTX, QBIC, QMF, RAMAC, Red Brick, Red Brick Data Mine, Red Brick Decision Server, Red Brick Decisionscape, Red Brick Design, Red Brick Mine Builder, Red Brick Ready, Red Brick Systems, Relyon Red Brick, S/390, Sequent, SP, System View, Tivoli, TME, UniData, UniData and Design, Universal Data Warehouse Blueprint, Universal Database, Universal Web Connect, UniVerse, Virtual Table Interface, Visionary, VisualAge, Web Integration Suite, WebSphere
Microsoft, Windows, Windows NT, SQL Server and the Windows logo are trademarks of Microsoft Corporation in the United
States, other countries, or both.
Java, JDBC, and all Java-based trademarks are trademarks or registered trademarks of Sun Microsystems, Inc. in the United
States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
All other product or brand names may be trademarks of their respective companies.
The information contained in this document has not been submitted to any formal IBM test and is distributed on an “as is” basis
without any warranty either express or implied. The use of this information or the implementation of any of these techniques is
a customer responsibility and depends on the customer’s ability to evaluate and integrate them into the customer’s operational
environment. While each item may have been reviewed by IBM for accuracy in a specific situation, there is no guarantee that
the same or similar results will be obtained elsewhere. Customers attempting to adapt these techniques to their own environments do
so at their own risk. The original repository material for this course has been certified as being Year 2000 compliant.
This document may not be reproduced in whole or in part without the prior written permission of IBM.
Note to U.S. Government Users — Documentation related to restricted rights — Use, duplication, or disclosure is subject to
restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Course Description
This course provides students with the knowledge and skills they need to perform the routine tasks of a
DB2 Universal Database Systems Administrator. Through instructor presentations, they will learn about
the tools and commands needed to configure and maintain instances and database objects. Through lab
exercises, students will have the opportunity to practice the skills they’ve learned in a simulated database
server environment.
Objectives
At the end of this course, you will be able to:
Configure and maintain DB2 instances
Manipulate databases and database objects
Optimize placement of data
Control user access to instances and databases
Implement security on instances and databases
Use DB2 activity monitoring utilities
Use DB2 data movement and reorganization utilities
Develop and implement a database recovery strategy
Interpret basic information in the db2diag.log file
Prerequisites
To maximize the benefits of this course, we require that you have met the following prerequisites:
Some experience with writing Structured Query Language scripts
Knowledge of relational database design concepts
Knowledge of UNIX operating system fundamentals and the vi editor
Windows GUI navigation skills
Acknowledgments
Course Developers . . . . . . . . . . Manish K. Sharma, Jagadisha Bhat, Sunminder S. Saini, Kumar Anurag
Additional Development. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .Gene Rebman
Technical Review Team . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Harold Luse, Glen Mules, Bob Bernard
Course Production Editor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Susan Dykman
This course was developed at the DB2 Center for Competency, e-Business Solution Center, IBM India.
Further Information
To find out more about IBM education solutions and resources, please visit the IBM Education website at
http://www-3.ibm.com/software/info/education.
Additional information about IBM Data Management education and certification can be found at http://
www-3.ibm.com/software/data/education.html.
To obtain further information regarding IBM Informix training, please visit the IBM Informix Education
Services website at http://www-3.ibm.com/software/data/informix/education.
Comments or Suggestions
Thank you for attending this training class. We strive to build the best possible courses, and we value your
feedback. Help us to develop even better material by sending comments, suggestions and compliments to
dmedu@us.ibm.com.
Table of Contents
Module 1 Overview of DB2 Major Components
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
DB2 and E-Business . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Effective E-Business Model . . . . . . . . . . . . . . . . . . . . . . . . .1-4
DB2 E-Business Components . . . . . . . . . . . . . . . . . . . . . . . 1-6
DB2 Product Family . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1-8
DB2 Object Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
DB2 Architecture — Multiple Instances . . . . . . . . . . . . . . . 1-11
DB2 Architecture — Processes . . . . . . . . . . . . . . . . . . . . . 1-12
DB2 Architecture — Shared Memory . . . . . . . . . . . . . . . . 1-13
DB2 Architecture — Configuration Files . . . . . . . . . . . . . . 1-14
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
Journal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
License Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-23
Development Center: Create a Project . . . . . . . . . . . . . . . 2-24
Development Center: Project View . . . . . . . . . . . . . . . . . . 2-25
Development Center: Create a New Routine . . . . . . . . . . 2-26
Visual Explain: Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . 2-27
Visual Explain: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . 2-28
Visual Explain: Access Plan . . . . . . . . . . . . . . . . . . . . . . . 2-29
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-30
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-31
Dropping Table Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-31
DROP TABLESPACE Authority . . . . . . . . . . . . . . . . . . . . 3-32
DMS Table Space Minimum Size . . . . . . . . . . . . . . . . . . . 3-33
Performance: Container Size . . . . . . . . . . . . . . . . . . . . . . 3-34
RAID Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-35
Performance: SMS Table Spaces . . . . . . . . . . . . . . . . . . . 3-36
Performance: DMS Table Spaces . . . . . . . . . . . . . . . . . . . 3-37
Performance: Catalog Table Space . . . . . . . . . . . . . . . . . 3-38
Performance: System-Temporary Space . . . . . . . . . . . . . 3-39
Performance: User Table Spaces . . . . . . . . . . . . . . . . . . . 3-40
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-41
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3-42
Cataloging the Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-27
Cataloging the Database . . . . . . . . . . . . . . . . . . . . . . . . . . 4-28
Cataloging Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . 4-29
Configuration Assistant (CA) . . . . . . . . . . . . . . . . . . . . . . . 4-30
Discovery Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-31
Configuration Assistant Overview . . . . . . . . . . . . . . . . . . . 4-33
Configuration Assistant: Add a Database . . . . . . . . . . . . . 4-34
Add Database Wizard: Set Up Connection . . . . . . . . . . . . 4-35
Add Database Wizard: Search the Network . . . . . . . . . . . 4-36
Add Database Wizard: Testing the Connection . . . . . . . . . 4-37
Search: Configuration is Complete . . . . . . . . . . . . . . . . . . 4-38
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-39
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-40
Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-29
Classifying Views . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-31
CREATE VIEW Examples . . . . . . . . . . . . . . . . . . . . . . . . . 5-33
Creating Views Using the Control Center . . . . . . . . . . . . . 5-34
Create View Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-35
SQL Assist Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-36
SQL Assist: Tables Page . . . . . . . . . . . . . . . . . . . . . . . . . . 5-37
Federated System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-38
Federated System Objects . . . . . . . . . . . . . . . . . . . . . . . . 5-39
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-40
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-41
Module 7 Using Constraints
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Keys: Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-3
Primary Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .7-4
Primary Key: Table Creation Time . . . . . . . . . . . . . . . . . . . 7-5
Creating Tables in the Control Center . . . . . . . . . . . . . . . . . 7-7
Adding a Primary Key to an Existing Table: SQL . . . . . . . 7-13
Adding a Primary Key: Alter Table Window . . . . . . . . . . . . 7-14
Foreign Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-16
Foreign Key: Table Creation Time . . . . . . . . . . . . . . . . . . . 7-18
Foreign Key: Control Center . . . . . . . . . . . . . . . . . . . . . . . 7-20
Unique Key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-22
Specifying a Unique Key at Table Creation Time . . . . . . . 7-23
Changing a Unique Key: ALTER TABLE . . . . . . . . . . . . . . 7-25
Check Constraint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-26
Check Constraint: Table Creation Time . . . . . . . . . . . . . . 7-27
Create Table Window: Adding Check Constraints . . . . . . 7-28
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-30
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-31
IMPORT Command: METHOD . . . . . . . . . . . . . . . . . . . . . 8-21
IMPORT Command: Count and Message Options . . . . . . 8-22
IMPORT: Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-23
Data Movement Utilities: LOAD . . . . . . . . . . . . . . . . . . . . . 8-24
LOAD Phases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-25
LOAD: Load Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-26
LOAD: Build Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-27
LOAD: Delete Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-28
LOAD: Target and Exception Tables . . . . . . . . . . . . . . . . . 8-29
LOAD Command: Syntax . . . . . . . . . . . . . . . . . . . . . . . . . 8-30
LOAD Command: Filename and Location . . . . . . . . . . . . . 8-31
LOAD Command: Filetype and Modifier . . . . . . . . . . . . . . 8-32
LOAD Command: METHOD . . . . . . . . . . . . . . . . . . . . . . . 8-34
LOAD Command: Counter Options . . . . . . . . . . . . . . . . . . 8-36
LOAD Command: Mode . . . . . . . . . . . . . . . . . . . . . . . . . . 8-37
LOAD Command: Exception Table . . . . . . . . . . . . . . . . . . 8-39
LOAD Command: Statistics . . . . . . . . . . . . . . . . . . . . . . . . 8-40
Load Command: Parallelism . . . . . . . . . . . . . . . . . . . . . . . 8-41
LOAD Command: Copy Options . . . . . . . . . . . . . . . . . . . . 8-42
LOAD Command: Indexing . . . . . . . . . . . . . . . . . . . . . . . . 8-44
LOAD: Performance Modifiers . . . . . . . . . . . . . . . . . . . . . . 8-45
Unsuccessful Load Operation . . . . . . . . . . . . . . . . . . . . . . 8-46
Post Load: Table Space State . . . . . . . . . . . . . . . . . . . . . . 8-47
Post Load: Removing Pending States . . . . . . . . . . . . . . . . 8-48
LOAD: Additional Features in DB2 8.1 . . . . . . . . . . . . . . . 8-49
LOAD: Authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-51
IMPORT Versus LOAD . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-52
Data Movement Utilities: db2move . . . . . . . . . . . . . . . . . . 8-53
Data Movement Utilities: db2move . . . . . . . . . . . . . . . . . . 8-54
db2move Command: Syntax . . . . . . . . . . . . . . . . . . . . . . . 8-55
Data Movement Utilities: db2look . . . . . . . . . . . . . . . . . . . 8-57
db2look Command: Syntax . . . . . . . . . . . . . . . . . . . . . . . . 8-58
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-60
Lab Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-61
REORGCHK Command: Syntax and Examples . . . . . . . . . 9-5
REORGCHK: Table Statistics . . . . . . . . . . . . . . . . . . . . . . . 9-6
REORGCHK: Index Statistics . . . . . . . . . . . . . . . . . . . . . . . 9-8
REORGCHK: Interpreting of Index Information . . . . . . . . . 9-10
Reorganization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-11
REORG Command: Syntax and Example . . . . . . . . . . . . . 9-12
REORG: Using Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-14
Generating Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-15
RUNSTATS Command: Syntax and Examples . . . . . . . . . 9-16
RUNSTATS: Distribution Statistics . . . . . . . . . . . . . . . . . . 9-17
REBIND . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-18
REBIND and db2rbind: Syntax . . . . . . . . . . . . . . . . . . . . . 9-19
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-20
Lab Exercise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9-21
Restoring a Backup Image . . . . . . . . . . . . . . . . . . . . . . . 11-12
The Database Roll Forward . . . . . . . . . . . . . . . . . . . . . . . 11-13
Redirected Restore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-14
Restore Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . 11-15
Table Space Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . 11-16
Table Space State: Offline . . . . . . . . . . . . . . . . . . . . . . . . 11-17
Table Space Offline State (cont.) . . . . . . . . . . . . . . . . . . 11-18
Backup and Restore Summary . . . . . . . . . . . . . . . . . . . . 11-19
Recovery History File . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-20
Dropped Table Recovery . . . . . . . . . . . . . . . . . . . . . . . . . 11-21
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-22
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-23
Creating an Event Monitor: Example . . . . . . . . . . . . . . . . 12-35
Event Monitor: Start/Flush . . . . . . . . . . . . . . . . . . . . . . . . 12-36
Event Monitor: Reading Output . . . . . . . . . . . . . . . . . . . . 12-37
Event Monitor: db2eva . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-38
Event Monitor: db2eva (cont.) . . . . . . . . . . . . . . . . . . . . . 12-39
Event Monitor: db2eva (cont.) . . . . . . . . . . . . . . . . . . . . . 12-40
Health Monitor and Health Center . . . . . . . . . . . . . . . . . . 12-41
Health Indicator Settings . . . . . . . . . . . . . . . . . . . . . . . . . 12-43
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-44
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12-45
Additional Data Available . . . . . . . . . . . . . . . . . . . . . . . . . . 14-8
The db2diag.log File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-9
Suggestion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-10
DIAGLEVEL 4 Considerations . . . . . . . . . . . . . . . . . . . . . 14-11
Location of the db2diag.log File . . . . . . . . . . . . . . . . . . . . 14-12
db2diag.log Information Example . . . . . . . . . . . . . . . . . . 14-13
db2diag.log Example: Starting the Database . . . . . . . . . 14-14
db2diag.log: Finding Error Information . . . . . . . . . . . . . . 14-15
Looking Up Internal Codes . . . . . . . . . . . . . . . . . . . . . . . 14-16
Byte Reversal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-17
Looking Up Internal Return Codes . . . . . . . . . . . . . . . . . 14-18
db2diag.log Example: Container Error . . . . . . . . . . . . . . 14-19
db2diag.log Example: Sharing Violation . . . . . . . . . . . . . 14-20
db2diag.log Example: Manual Cleanup . . . . . . . . . . . . . . 14-21
db2diag.log Example: Database Connection . . . . . . . . . 14-22
Which Container? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-23
Error Explanation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-24
Error Reasons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-25
Error Resolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-26
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-27
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-28
Module 15 Security
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-2
Security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-3
Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-4
Authentication Type: Server . . . . . . . . . . . . . . . . . . . . . . . 15-5
Authentication Type: DCS . . . . . . . . . . . . . . . . . . . . . . . . . 15-6
Encrypted Password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-7
Authentication Type: KERBEROS . . . . . . . . . . . . . . . . . . . 15-8
Authentication Type: KRB_SERVER_ENCRYPT . . . . . . . 15-9
Authentication Type: CLIENT . . . . . . . . . . . . . . . . . . . . . 15-10
TRUST_ALLCLNTS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-11
TRUST_CLNTAUTH . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-12
Authorities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-13
Authorities in the DBM Configuration . . . . . . . . . . . . . . . 15-15
Database Authority Summary . . . . . . . . . . . . . . . . . . . . . 15-16
Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-17
Levels of Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-18
Database Level Privileges . . . . . . . . . . . . . . . . . . . . . . . . 15-19
Schema Level Privileges . . . . . . . . . . . . . . . . . . . . . . . . . 15-20
Table and View Privileges . . . . . . . . . . . . . . . . . . . . . . . . 15-21
Package and Routine Privileges . . . . . . . . . . . . . . . . . . . 15-22
Index and Table Space Privileges . . . . . . . . . . . . . . . . . . 15-23
Implicit Privileges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-24
Privileges Required for Application Development . . . . . . 15-25
System Catalog Views . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-26
Hierarchy of Authorizations and Privileges . . . . . . . . . . . 15-27
Audit Facility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-28
The db2audit Command: How It Works . . . . . . . . . . . . . . 15-29
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-30
Lab Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15-31
Module 16 Summary
Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-2
Course Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-3
Basic Technical References . . . . . . . . . . . . . . . . . . . . . . . 16-4
Advanced Technical References . . . . . . . . . . . . . . . . . . . . 16-5
Next Courses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-6
Evaluation Sheet . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-7
Module 1 Overview of DB2 Major Components
1-2
1-3
IBM has the highest-impact, business-based solutions available on the market, and an e-business
software portfolio that is robust, scalable, and multiplatform.
1-4
Leverage Information
The quantity of business documents, images, and data continues to grow as businesses become
more complex. By utilizing the IBM DB2 Universal Database, a business can manage all types
of data no matter how complex. Utilizing business intelligence tools, such as data warehousing
and data mining, a business can develop a competitive advantage.
Organizational Effectiveness
Using web-based teamwork and collaboration tools, a business can reduce the time necessary to
complete a project, as well as produce products of better quality. In addition, as the virtual
classroom replaces the traditional classroom, a business can manage the training and
development of its employees with greater effectiveness.
1-6
Business Intelligence
The IBM Business Intelligence product family includes DB2 DataJoiner, DB2 OLAP Server,
Intelligent Miner, and Warehouse Manager.
Content Manager
The IBM Content Manager product family includes Content Manager, Content Manager
OnDemand, Content Manager CommonStore for SAP, Content Manager CommonStore for
Lotus Domino, Content Manager VideoCharger, and the IBM EIP Client Kit for Content
Manager.
1-8
Personal Edition
This is a fully functional database for personal computers using the OS/2, Windows and Linux
operating environments. It enables local users to create databases on the workstation where the
product is installed, and it has the capability to access remote DB2 servers as a DB2 client.
For this course we will be using DB2 UDB Enterprise Server Edition v8.1. The course is
designed to use:
a Windows version of the server;
a Linux/Unix version of the server and a Windows client; or
a combination of both these approaches.
If you do not already have an installed copy of DB2 UDB ESE v8.1, this may be the best time to
start the installation (Lab Exercise 2 for Module 1). Since the installation takes some time, you
should then return here to complete the remainder of the module.
The lab exercises for this course are in a separate Lab Exercises book.
1-10
One machine can have many instances of DB2, and each instance of DB2 can have many
databases. A database stores all of its objects in table spaces. Initially there are at least three
table spaces (one for the system catalog, one for system temporary use, and one for user
applications), and more can be added at any time as data storage needs grow.
To understand DB2 on Linux, Unix, and Windows better, we will look at the DB2 Architecture
from the following perspectives:
Multiple instances on the same host (each with more than one database)
Processes active in memory for each instance
Shared memory for each instance
Configuration files
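The layout described above can be inspected from the command line with the DB2 CLP. A minimal sketch, assuming a UNIX server with DB2 UDB v8 installed and an instance containing a database named db1 (the instance and database names are illustrative):

```shell
# List all DB2 instances defined on this machine
db2ilist

# From the instance owner's environment, list the databases
# cataloged in this instance
db2 list database directory

# Connect to one database and list its table spaces; a new database
# starts with SYSCATSPACE (system catalog), TEMPSPACE1 (system
# temporary), and USERSPACE1 (user data)
db2 connect to db1
db2 list tablespaces
db2 connect reset
```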
1-11
This slide shows two instances, inst01 and inst02. Each instance is independent of the other, and
can be independently administered and configured. There is no overlap between the two
instances. The DBM CFG file contains instance-wide configuration parameters.
In this example, each instance contains two databases: inst01 contains db1 and db2, and
inst02 contains db3 and db4. Each database has its own set of system catalog tables and log
files, as well as its own DB CFG file. There is no overlap between databases: although all
databases within an instance share the instance-wide parameters located in the DBM CFG file,
the database-wide configuration parameters are contained in the DB CFG file of each database.
(Slide: DB2 architecture — local and remote clients connecting to the database server engine; UDF processes)
1-12
On the client side, either local or remote applications, or both, are linked with the DB2 UDB
client library. Local clients communicate using shared memory and semaphores; remote clients
use a protocol such as Named Pipes (NPIPE), TCP/IP, NetBIOS, or SNA.
On the server side, activity is controlled by engine dispatchable units (EDUs). In the above and
on the next page, EDUs are shown as circles or groups of circles.
Processes
EDUs are implemented as threads in a single process on Windows-based platforms and as
processes on UNIX (single-threaded). DB2 agents are the most common type of EDUs. These
agents perform most of the SQL processing on behalf of applications. Prefetchers and page
cleaners are other common EDUs.
A set of subagents might be assigned to process the client application requests. Multiple
subagents can be assigned if the machine where the server resides has multiple processors or is
part of a partitioned database.
All agents and subagents are managed using a pooling algorithm that minimizes the creation and
destruction of EDUs.
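The agent configuration can be examined with the CLP. A sketch, assuming an active v8 instance (the parameter and command names are standard, but the output varies by system):

```shell
# Agent-related instance-wide parameters (e.g. NUM_POOLAGENTS,
# MAXAGENTS) live in the DBM configuration
db2 get dbm cfg | grep -i agent

# Show the applications currently served by agents of this instance
db2 list applications show detail
```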
(Slide: table space storage — directory, file, and device containers; secondary log files)
1-13
Shared Memory
Buffer pools are areas of database server memory where database pages of user table data, index
data, and catalog data are temporarily moved and can be modified.
The configuration of the buffer pools, as well as prefetcher and page cleaner EDUs, controls
how quickly data can be accessed and how readily available it is to applications.
Prefetchers retrieve data from disk and move it into the buffer pool before applications
need the data. Agents of the application send asynchronous read-ahead requests to a
common prefetch queue. As prefetchers become available, they implement those
requests by using big-block or scatter-read input operations to bring the requested pages
from disk to the buffer pool.
Page cleaners move data from the buffer pool back out to disk. Page cleaners are
background EDUs that are independent of the application agents. They look for pages
from the buffer pool that are no longer needed and write the pages to disk. Page
cleaners ensure that there is room in the buffer pool for the pages being retrieved by the
prefetchers.
Without the independent prefetchers and the page cleaner EDUs, the application agents would
have to do all of the reading and writing of data between the buffer pool and disk storage.
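The number of prefetcher and page-cleaner EDUs is configured per database, alongside the buffer pools they serve. A sketch, assuming a database named db1 (the buffer pool name and size are illustrative only):

```shell
# NUM_IOSERVERS controls the prefetchers; NUM_IOCLEANERS controls
# the page cleaners
db2 get db cfg for db1 | grep -i -e IOSERVERS -e IOCLEANERS

# Buffer pools are created with SQL; here, 1000 pages of 4 KB each
db2 connect to db1
db2 "CREATE BUFFERPOOL bp4k SIZE 1000 PAGESIZE 4 K"
db2 connect reset
```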
With DB2 UDB, there is one configuration file for each instance
Called the Database Manager (DBM) Configuration file
Contains parameter values for that instance
1-14
DB configuration file
Every DB2 UDB database also has a configuration file that contains parameters for just that one
database. The various databases in an instance are configured separately within the bounds of
the instance.
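The instance/database split of the configuration files can be seen directly with the CLP. A sketch, assuming an instance containing a database named db1 (the parameter value shown is illustrative):

```shell
# Instance-wide parameters: one DBM CFG file per instance
db2 get dbm cfg

# Database-wide parameters: one DB CFG file per database
db2 get db cfg for db1

# Updating a DB CFG parameter affects only that one database
db2 update db cfg for db1 using LOGPRIMARY 5
```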
1-15
1-16
2-2
2-3
The GUI (graphical user interface) tools consist of an easy-to-use, integrated set of tools to help
you administer and manage DB2. Their main features are:
An intuitive, point and click navigation scheme.
A scalable architecture that can grow with your needs.
Support for the object-oriented and multimedia extensions of DB2.
Smart guides and wizards to provide step-by-step expert advice.
First Steps
Information Center
Control Center
Client Configuration Assistant
Command Center
Task Center
Health Center
Journal
License Center
Application Center
2-4
2-5
First Steps is a package of tutorial guides and programs to facilitate setting up and learning DB2.
First Steps allows you to:
Create the sample, warehouse, and olap tutorial databases.
Work with these tutorial databases.
View the DB2 Product Information Library.
Launch the DB2 UDB Quick Tour.
View other DB2 resources on the World Wide Web.
Each of these functions will be illustrated in more detail in the following slides.
You can invoke First Steps by selecting the following from the Windows Start menu: Start >
Programs > IBM DB2 > Set-up Tools > First Steps.
2-6
By using the First Steps Create Sample Database wizard, you can create the following
databases:
DB2 UDB Sample—This tutorial database is used for learning the concepts of a
relational database and is required in many of the courses taught by IBM.
OLAP Sample—This database is used to perform multidimensional analysis of
relational data using Online Analytical Processing (OLAP) techniques.
Data Warehousing Sample—This database is used with the Data Warehouse Center to
move, transform, and store data in a target warehouse database.
Creating a database takes from 3 to 30 minutes, and each of these databases can
also be created at a later time by returning to this First Steps wizard.
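The SAMPLE database can also be created outside First Steps with the db2sampl utility, which may be convenient in the lab. A sketch (the EMPLOYEE table is part of the standard sample schema):

```shell
# Create the SAMPLE database
db2sampl

# Verify it by connecting and querying a sample table
db2 connect to sample
db2 "SELECT COUNT(*) FROM employee"
db2 connect reset
```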
2-7
Choose the Work with Tutorials option from First Steps to access various DB2 UDB tutorials.
2-8
Quick Tour launches a tutorial covering the e-business, Business Intelligence and Data
Management subject areas. It also demonstrates the multimedia capabilities of the DB2
Universal Database as well as the built-in Java and XML functionality.
(Screenshot: Information Center — tabs select different types of information; a search option is provided)
2-9
(Screenshot: Control Center — menu bar, toolbar, objects pane, contents pane, and contents pane toolbar)
2-10
The Control Center is the main work area for DB2 administration. You can access most of the
other GUI tools from here. The principal components of this tool are:
Menu bar — This menu accesses the Control Center functions and online help menus.
Toolbar — From the toolbar located at the top of the window, you can launch any other
DB2 centers that are integrated into the Control Center. The Control Center toolbar is
shown here in more detail:
(Toolbar icons, left to right: Control Center, Satellite Admin Center, Task Center, Journal, Development Center, Tools Settings, Information Center)
A similar toolbar appears in each Administration Client tool. You can also access these
tools by selecting them from the Tools menu.
2-12
All of the tools that are available on the toolbar are also available in the Tools menu of the menu
bar.
2-13
By right-clicking on a database object, menu options appear that allow you to manipulate the
object. In the slide above, the menu options for the sample database allow you to perform such
actions as connecting, restarting, and dropping the database. The menu options for the employee
table allow you to perform such actions as renaming, dropping, and loading the table.
2-14
The Configuration Assistant (CA) allows you to easily configure an application for connections
to local and remote databases. The client can be configured using one of three methods:
Imported profiles
DB2 discovery
Manual configuration
In addition, the CA can be used to:
Display existing connections.
Update and delete existing connections.
Perform CLI/ODBC administration tasks.
Test connections to cataloged databases.
Bind applications to databases.
You can invoke the CA by selecting Start > Programs > IBM DB2 > Set-up Tools >
Configuration Assistant.
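The same connections can also be cataloged manually from the DB2 command line processor. The following is a sketch only; the node name, host name, port, and database names are hypothetical:
Example:
CATALOG TCPIP NODE mynode REMOTE dbserver.example.com SERVER 50000
CATALOG DATABASE sample AS remsamp AT NODE mynode
TERMINATE
The TERMINATE command refreshes the directory cache so that the new catalog entries take effect.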
[Slide screenshot of the Command Center with callouts: the area where commands are typed,
and the area where DB2 messages appear.]
2-15
The Command Center consists of windows in which you can enter SQL statements, scripts, and
DB2 commands and view the results. Use the Command Center to:
Execute SQL statements interactively.
Execute SQL using the SQL Assist wizard.
Create and save command scripts.
View query access plans.
Delete, update, export, and view the query result set.
You can invoke the Command Center by selecting the tool from the toolbar or Tools menu in
another GUI tool, or by selecting Start > Programs > IBM DB2 > Command Line Tools >
Command Center.
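For example, assuming a connection to the sample database, a short interactive session entered in the Command window might look like:
Example:
CONNECT TO sample
SELECT * FROM staff WHERE dept = 20
CONNECT RESET
Each statement is executed in turn, and query results appear in the Query Results pane.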
2-16
The query results from an SQL statement can be viewed by selecting the Query Results pane,
and they can be manipulated, saved, or exported by selecting options from the Query Results
menu.
2-17
The Task Center contains a list of all the scripts that have been created. Use the Task Center to:
Create or modify an SQL script or command file.
Import a previously created script.
Execute scripts immediately.
Schedule scripts to run at a later time.
To invoke the Task Center, first invoke either the Control Center or the Command Center. Then
click on the Task Center icon, or choose Task Center from the Tools menu on the menu bar.
You can also invoke the Task Center directly from the desktop by selecting Start > Programs >
IBM DB2 > General Administration Tools > Task Center.
2-18
To create a new script, click on the Task menu, and select New from the list of options.
2-19
The New Task window is shown above. Enter the appropriate values into the fields to define
your script. Go to the Command Script tab to enter the text of your script. Additional
information can be provided in the other tabs.
2-20
To work with a particular task, select the task, click on the Selected menu in the menu bar, and
choose from the options available.
2-21
The Health Center monitors the system and warns of potential problems. It can be configured to
automatically open and display any monitored objects that have exceeded their threshold setting,
which means they are in a state of alarm or warning. Use the Health Center to:
Specify the action to be taken when a threshold is exceeded.
Specify the message to be displayed.
Specify if an audible warning is to be used.
To invoke the Health Center, first invoke either the Command Center or the Control Center.
Then click on the Health Center icon, or choose the Health Center option from the Tools menu
on the menu bar. You can also access the Health Center from the desktop by selecting Start >
Programs > IBM DB2 > Monitoring Tools > Health Center.
2-22
The Journal displays the status of the jobs that have been created from scripts and logs the
results of their execution. Use the Journal to:
View job histories.
Monitor running and pending jobs.
Review job results.
Display recovery history.
View DB2 message logs.
To invoke the Journal, first invoke either the Command Center or the Control Center. Then click
on the Journal icon, or choose the Journal option from the Tools menu on the menu bar. You
can also access the Journal from the desktop by selecting Start > Programs > IBM DB2 >
General Administration Tools > Journal.
2-23
The License Center provides a central point to manage the licensing requirements of the DB2
products. Use the License Center to:
Add a new license.
Upgrade from a trial license to a permanent license.
View the details of your licenses, including version information, expiration date, and
number of entitled users.
To invoke the License Center, first invoke either the Command Center or the Control Center.
Then click on the License Center icon, or choose the License Center option from the Tools
menu on the menu bar.
2-24
The Development Center provides an easy means of creating and managing stored procedures
and user-defined functions (UDFs). Use the Development Center to:
Create development projects.
Create stored procedures, functions, and structured types on local and remote servers.
Modify existing routines and types.
Run procedures and functions for testing and debugging purposes.
You can invoke the Development Center by selecting Start > Programs > IBM DB2 >
Development Tools > Development Center.
The first time you start the Development Center, you must create a new project. To create a new
project, click on Create Project in the Development Center Launchpad window. You are then
asked to provide a project name.
2-25
The main DB2 Development Center window provides two options for viewing projects and
applications. The Project View, shown above, displays the object pane hierarchy based on defined
projects. The Server View displays the objects using an object hierarchy similar to the view
shown by the Control Center.
The following types of applications and objects can be created in the Development Center:
Stored procedures
User-defined functions
Structured data types
Status information is displayed at the bottom of the Development Center window.
2-26
To create a new stored procedure, right-click on Stored Procedure and select New > SQL
Stored Procedure. To create a new function, right-click on User-Defined Functions and select
New > SQL User-Defined Function. An example of the editing window for a stored procedure
is shown above. The editing window for a function is similar.
Procedures and functions that use other programming languages can be created by using a create
wizard. Right-click on Stored Procedure or User-Defined Function, select New, and then
select the appropriate wizard from the menu. Structured types can only be created through the
Development Center by using a wizard.
To modify an existing routine, right-click on the routine name and select Edit.
Query:
SELECT staff_region.id, staff_region.region,
staff_region.city
FROM staff, staff_region
WHERE staff.id = staff_region.id
2-27
Visual Explain is a tool provided through the Command Center that allows you to analyze SQL
queries.
To demonstrate Visual Explain we will use two database tables, one index, and a query. These
objects are shown above.
[Slide screenshot of the Command Center toolbar with a callout for the Create Access Plan icon.]
2-28
Visual Explain provides a graphical representation of the access plan developed by the
optimizer. Use Visual Explain to:
View the statistics used at optimization time. You can compare this set of statistics to
the statistics in the current system catalog to determine if rebinding the package will
improve performance.
Determine if an index was used to access a table. If an index was not used, the visual
explain function helps to determine which columns might benefit from being indexed.
View the effects of database tuning changes. By comparing the before and after
versions of the access plan for a query, you can determine the effect that a tuning
change had on the database.
Obtain information about each operation in the access plan. If a query is in need of
improvement, you can examine each operation performed by the query and isolate
possible trouble spots.
To invoke Visual Explain, first invoke the Command Center. Then using the Interactive tab and
the Command window, enter the CONNECT TO database statement. Once you have
connected to the database, the Create Access Plan icon becomes active. Enter your query in the
Command window and click on the Create Access Plan icon. A Visual Explain access plan
appears.
[Slide screenshot of a Visual Explain access plan graph with callouts: table scan, join, index
scan, index, and tables.]
2-29
In our scenario, the optimizer chooses to use the inx_staff_id index to scan the staff table. Since
there is no suitable index for the staff_region table, the optimizer performs a full table scan.
After both tables have been scanned, the optimizer performs a nested loop join and returns the
result set. The estimated time is cumulative, reading from the bottom to the top, and is returned
in units of timerons. A large jump in cumulative timerons between operational blocks represents
a large amount of processing effort, and these operations may be candidates for performance
improvement.
Note Timerons represent an estimate of the CPU cycles and disk I/O operations required
to process a query. A simplified way to think of them is that roughly 100 CPU
cycles equal one timeron and one disk read equals one timeron, since a disk read
costs far more processing time than a single CPU cycle.
2-30
2-31
Data Placement
3-2
[Slide diagram: extents striped across the containers of a table space (for example, extents 2
and 4 placed in container 2).]
3-3
A table space is a logical storage structure where the data for a database is stored. It consists of
one or more physical storage containers, and is associated with one memory structure called a
bufferpool.
Table spaces are categorized by the method used to access the data:
System-managed space (SMS)—This type of table space is managed by the operating
system and utilizes the O/S disk processes and data buffers. Therefore, the data access
time can be slower, but this type of table space is relatively easy to manage.
Database-managed space (DMS)—This type of space is managed directly by the DB2
database manager and bypasses the O/S system data buffers. Therefore, the access time
can be faster, but this type of table space is potentially more difficult to manage.
Table spaces are divided up in terms of pages and extents. A page is the smallest quantity of data
that can be retrieved from disk in one I/O operation. An extent is a set of pages grouped
contiguously to minimize I/O operations and improve performance. Both page size and extent
size are defined when a table space is created and cannot be changed.
[Slide diagram: an SMS table space using a directory container, and a DMS table space using
device and file containers.]
3-4
A container is a physical storage device that is assigned to one table space. Depending on the
type of table space, a container can be a directory, a file, or a device. SMS table spaces only use
directories as containers. DMS table spaces use either files or raw devices as containers and both
can be used in the same table space. The container definition is stored in the system catalog
tables along with the other attributes of the table space.
Containers must reside on disks that are local. Therefore, resources such as LAN-redirected
drives or NFS-mounted file systems cannot be used as containers for table spaces.
3-5
One extent consists of a number of pages grouped together and defined during table space
creation. If no extent size is specified, the default is the value for the DB CFG parameter
DFT_EXTENT_SZ.
DB2 writes extents to the containers in round-robin fashion.
3-6
Bufferpools are memory structures that cache data and index pages in memory. This reduces the
need for the DBM to access disk storage and thereby increases performance. Bufferpools use a
least-recently-used algorithm that ensures that the most recently accessed data is retained in
memory.
On UNIX, the creation of a DB2 database creates a default bufferpool, ibmdefaultbp, consisting
of 1000 4-KB pages. On other platforms the buffer pool size is 250 4-KB pages.
A buffer pool can be associated with specific table spaces. For example, you might create a
separate buffer pool to store index pages or a separate bufferpool to handle data for high activity
tables. The page size for the table space must match the page size for the buffer pool.
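To illustrate, the following statements create an 8-KB bufferpool and an SMS table space associated with it. This is only a sketch; the names, size, and directory path are hypothetical:
Example:
CREATE BUFFERPOOL mybuff1 SIZE 2000 PAGESIZE 8K
CREATE TABLESPACE myregspace PAGESIZE 8K MANAGED BY SYSTEM USING ('/database/myregspace') BUFFERPOOL mybuff1
If no bufferpool with a matching page size exists, the CREATE TABLESPACE statement fails.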
3-7
In an SMS table space, the file manager controls the location of the data in the storage space and
the DBM only controls the table space name and storage path, which are defined at creation time
and cannot be altered. New containers cannot be added dynamically unless you are doing a
redirected restore, which is a process you will learn about in the module on backup and
recovery.
Space is allocated only when needed, and only one page at a time, until the number of pages
allocated equals the amount for one extent. At this point, the database manager switches to the
next container in the table space and begins allocating one page at a time in that container. This
process of filling up extents and switching to the next container continues in a round-robin
fashion (also called striping) and balances the data requirement across all the containers.
Note Since DB2 considers an SMS table space to be full when it cannot add any more
space to one of the containers, it is important to make all of the containers of an
SMS table space the same size. If the containers are different sizes, DB2 marks the
table space as full when the smallest container is full.
Since the data is automatically balanced across all the containers and additional containers
cannot be added dynamically, there is little administration required for SMS table spaces.
3-8
For a DMS table space, the database manager has control of the placement of the data within the
containers and can ensure that the pages are physically contiguous. The sizes of the containers
are defined at creation time, and additional containers can be added later.
All of the space is allocated at creation time and data is initially stored in the first extent for the
first container. When this extent becomes full, the database manager switches to the next
container and begins filling up the first extent in that container. This process continues in a
round-robin fashion and balances the data requirement across all the containers.
Note DB2 does not consider a DMS table space to be full until all the containers are full,
therefore, the containers can be different sizes. When the smallest container is full,
DB2 eliminates it from the rotation and continues to fill up the remaining
containers. However, containers should be the same size for best performance.
Since containers can be dynamically allocated at any time, the administration requirements are
higher with DMS table spaces. However, the performance can be better particularly when raw
devices are used.
[Slide diagram: SMS table spaces with directory containers, showing a regular table space
(holding catalog, table, index, and large data), a system-temporary table space, and a
user-temporary table space.]
3-9
SMS table spaces can be created as regular, system-temporary, or user-temporary spaces. There
are three different classes of data associated with tables:
Table data: This is the data contained in the data rows of the table.
Index data: This includes the unique values and row identifiers for any columns on the
table that are indexed.
Large data: This includes the long varchar, long vargraphic, and LOB data types.
[Slide diagram: DMS table spaces with file and device containers, showing system-temporary
and user-temporary table spaces.]
3-11
DMS table spaces can be created as regular or large, as well as system-temporary and user-
temporary. This allows the database administrator to spread a table over multiple table spaces for
better performance. To do this, the regular table data is located in a regular table space, the index
data is located in a separate, regular table space, and the large data is located in a large table
space. The large table space type is optimized to hold large data strings.
3-12
The chart above compares the features and limitations of SMS and DMS table spaces.
[Slide diagram: the default bufferpool and table spaces, alongside additional user-defined
bufferpools and table spaces.]
3-13
When a database is created, one bufferpool and three table spaces are created. The bufferpool is
named ibmdefaultbp, and is associated with the three table spaces. During creation, the system
administrator can specify names for these three table spaces or use the defaults:
syscatspace — This table space contains all the data for the system catalog tables.
userspace1 — This table space contains all the data for any permanent tables created by
users.
tempspace1 — This table space contains any temporary tables needed by the system to
execute queries.
In the illustration above, three additional bufferpools and table spaces have been created:
myregspace—This table space contains all the index and table data for the permanent
tables. The large data has been separated out and is not stored with the table and index
data, but is stored in its own table space. The myregspace table space is associated with
the mybuff1 bufferpool.
mytempspace—This table space contains all the temporary tables that are explicitly
created by the users. It is associated with the mybuff2 bufferpool.
mylongspace—This table space contains all the long data for the tables in the
myregspace table space. The mylongspace table space is associated with the mybuff3
bufferpool.
Syntax:
CREATE DATABASE db_name
CATALOG|USER|TEMPORARY TABLESPACE
MANAGED BY SYSTEM USING ('container_string')
| MANAGED BY DATABASE USING (FILE|DEVICE 'container_string' size)
3-14
Table spaces can be created when the database is created. When they are created in this way,
values other than the default values for names and management types can be used. In the
example above, the following variables are available:
db_name—allows you to specify the name of the database.
CATALOG, USER, or TEMPORARY specifies what type of table space to create.
SYSTEM or DATABASE defines the management type for the table space. The default is
SYSTEM.
SYSTEM specifies an SMS table space. The containers must be directories, and
you cannot specify a size. The container definition string cannot exceed 250
characters.
DATABASE specifies a DMS table space. The containers must be files or raw
devices, and you can use a mixture of both. You must specify location and size.
The container definition string cannot exceed 254 bytes in size.
3-15
When the db2cert database is created, the following table spaces are also created:
A catalog space for the system catalog tables, which is managed by the database manager.
A temporary space for the system-temporary tables, which is managed by the operating
system.
A user space for the user-created permanent tables, which is managed by the database
manager.
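A CREATE DATABASE command matching this description might look like the following; the container paths and sizes shown are hypothetical:
Example:
CREATE DATABASE db2cert
CATALOG TABLESPACE MANAGED BY DATABASE USING (FILE '/db2cert/catalog.dat' 2000)
TEMPORARY TABLESPACE MANAGED BY SYSTEM USING ('/db2cert/temp')
USER TABLESPACE MANAGED BY DATABASE USING (FILE '/db2cert/user.dat' 5000)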
3-16
Additional table spaces can also be created at any time by executing the SQL statement
CREATE TABLESPACE. The following options are available:
REGULAR | LARGE | SYSTEM TEMPORARY | USER TEMPORARY specifies the
type of data that will be stored in the table space. The default is REGULAR, which can
store any type of data except temporary table data.
PAGESIZE defines the size of the pages used for the table space. The default is 4K.
Note There are two forms of valid PAGESIZE integer values. Without the K suffix, the
valid values are 4096, 8192, 16384, or 32768. With the K suffix, the valid integer
values are 4, 8, 16, or 32. If the page-size integer is not one of these values, an
error is returned.
3-17
3-18
There are numerous additional options that can be used when creating a table space. Some of the
more common ones are:
EXTENTSIZE specifies the number of pages that are written to a container before
skipping to the next container. The default is the value of DFT_EXTENT_SZ in the
database configuration file.
PREFETCHSIZE specifies the number of pages read from the table space when data
prefetching is performed. The default is the value of DFT_PREFETCH_SZ.
The values for EXTENTSIZE and PREFETCHSIZE and the sizes for file or device containers
can be entered in one of four different ways:
integer — indicating number of PAGESIZE pages
integer K — indicating kilobytes
integer M — indicating megabytes
integer G — indicating gigabytes
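Putting these options together, a DMS table space with two file containers on separate drives might be created as follows. The paths and sizes are hypothetical, and an 8-KB bufferpool named mybuff1 is assumed to already exist:
Example:
CREATE TABLESPACE mydata
PAGESIZE 8K
MANAGED BY DATABASE
USING (FILE '/disk1/mydata1.dat' 10000, FILE '/disk2/mydata2.dat' 10000)
EXTENTSIZE 32
PREFETCHSIZE 64
BUFFERPOOL mybuff1
Here the PREFETCHSIZE is twice the EXTENTSIZE, so both containers can be prefetched in parallel.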
3-19
You must have either SYSADM or SYSCTRL authority to create a table space.
3-20
To list the table spaces in a database, use the LIST TABLESPACES command. The output
provides you with:
Table space ID number
Table space name
Type (system-managed space or database-managed space)
Data type or contents (any data, large data only, or temporary data)
State, which is a hexadecimal value indicating the current table space state
(for example: 0x0 for Normal or 0x20 for Backup Pending)
Tip You must be connected to a database to use the LIST TABLESPACES command.
3-21
If you execute the LIST TABLESPACES SHOW DETAIL command, you get all of the
information for the LIST TABLESPACES command plus:
Total number of pages
Number of usable pages
Number of used pages
Number of free pages
High water mark (in pages)
Page size (in bytes)
Extent size (in pages)
Prefetch size (in pages)
Number of containers
3-22
To list the containers associated with a table space, use the LIST TABLESPACE
CONTAINERS FOR table_space_id [SHOW DETAIL] command. The output without the
optional SHOW DETAIL clause returns:
Container ID
Container name
Container type (file, disk, or path)
The output with the SHOW DETAIL clause returns the following additional information:
Total number of pages
Number of usable pages
Accessible (yes or no)
The table_space_id is an integer with a unique value for each table space in the database. To get
a list of all the table space IDs contained in the database, execute the LIST TABLESPACES
command.
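For example, assuming a default database named sample in which userspace1 has table space ID 2, the sequence of commands would be:
Example:
CONNECT TO sample
LIST TABLESPACES
LIST TABLESPACE CONTAINERS FOR 2 SHOW DETAIL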
3-23
3-24
3-25
Tip You must be connected to a database to use the ALTER TABLESPACE statement.
3-26
To alter a table space, use the SQL statement, ALTER TABLESPACE tblspace_name with the
following options:
ADD, EXTEND, or RESIZE — Use only one of these options per statement. ADD
specifies that a new container is to be added to the table space. EXTEND indicates the
amount of additional space to allocate to the container. RESIZE indicates a new container
size. Resizing an existing container to a smaller size is only supported with v8 and later.
container_clause specifies the container definition
Syntax:
ADD (FILE|DEVICE 'path_and_name' size)
Example:
ADD (FILE '/database/sample/newfile.dat' 100K)
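The EXTEND and RESIZE options use the same container clause. For example (the path and sizes are hypothetical):
Example:
ALTER TABLESPACE dms_ts1 EXTEND (FILE '/database/sample/newfile.dat' 50K)
ALTER TABLESPACE dms_ts1 RESIZE (FILE '/database/sample/newfile.dat' 200K)
EXTEND adds 50K to the current size of the container, whereas RESIZE sets the total size of the container to 200K.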
3-28
3-29
3-30
You must have SYSADM or SYSCTRL authority to execute the ALTER TABLESPACE
command.
Syntax:
DROP TABLESPACE|TABLESPACES table_space_name
Example:
DROP TABLESPACE dms_ts1, dms_ts6, sms_ts2
3-31
To drop a table space, use the DROP TABLESPACE table_space_name SQL statement.
Replace table_space_name with the name of the table space to be dropped. You can also use a
comma-separated list of table spaces.
Dropping a table space drops all objects defined in the table space. All existing database objects
with dependencies on the table space, such as packages and referential constraints, are dropped
or invalidated (as appropriate), and dependent views and triggers are made inoperative.
Table spaces are not dropped in the following cases:
A table in the table space spans more than one table space, and the other table spaces
associated with the table are not being dropped. In this case drop the table first.
The table space is a system table space such as syscatspace.
The table space is a system-temporary table space and it is the only system-temporary
table space that exists in the database.
The table space is a user-temporary table space and there is a declared temporary table in
it.
You must have SYSADM or SYSCTRL authority to drop a table space.
3-32
[Slide diagram: DMS container sizing — a minimum of 5 extents is required.]
3-33
3-34
In a DMS table space, one page in every container is reserved for overhead, and the remaining
pages are used one extent at a time. Only full extents are used in the container, so add one extra
page to the container size to allow for the overhead page.
With an SMS table space, you do not specify the size of container. Since the container is a
directory, the size of the container is defined when the directory structure is created at the O/S
level.
For optimum performance with either SMS or DMS table spaces, the containers should be of
equal size and on different physical drives. The greater the number of containers, the greater the
potential for parallel I/O operations.
3-35
When using RAID devices for containers, observe the following guidelines for increased
performance:
Define one DMS container per RAID array.
Make the extent size a multiple of the RAID stripe size so that only one I/O operation is
required per extent.
Make the container size a multiple of the extent size so that disk space is not wasted.
Make the prefetch size a multiple of the extent size so that disk I/O is minimized.
Use the DB2 registry variable DB2_STRIPED_CONTAINERS to align extents to the
RAID stripe boundaries. The single overhead page for the container is placed in its own
extent, which means that the rest of the first extent is empty space. However, it allows the
rest of the extents to line up with the RAID stripes, thus improving I/O performance.
When this variable is used, the size for the container must be one extent less than the size
of the RAID device.
Use the DB2 registry variable DB2_PARALLEL_IO to enable parallel disk I/O.
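Registry variables are set with the db2set command and take effect when the instance is restarted. As a sketch:
Example:
db2set DB2_STRIPED_CONTAINERS=ON
db2set DB2_PARALLEL_IO=*
db2stop
db2start
DB2_STRIPED_CONTAINERS affects only containers created after the variable is set, and the asterisk enables parallel I/O for all table spaces.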
3-36
Since pages are only allocated as needed in SMS table spaces, small tables will have less wasted
space if SMS table spaces are used.
If you need to allocate multiple pages at a time, enable multipage file allocation. This feature is
implemented by running the db2empfa utility and is indicated by the MULTIPAGE_ALLOC
database configuration parameter. When the value is set to yes, all SMS table spaces are
affected—there is no selection possible for individual SMS table spaces—and the value cannot
be reset to no.
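To enable multipage file allocation for a database named sample, for example, you would run:
Example:
db2empfa sample
db2 GET DB CFG FOR sample
and verify in the output that the multipage file allocation parameter (MULTIPAGE_ALLOC) is now set to YES.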
3-37
One of the major benefits of DMS table spaces is that data can be separated across multiple table
spaces. Use separate table spaces for table, index, and large data. To realize maximum
performance, place the table spaces on separate physical disks, and use multiple containers for
each table space.
When it comes to choosing between using files or devices as the containers, be aware that
devices provide a 10 to 15 percent performance enhancement over files. However, since the
pages from file containers are already cached in the file system cache, you can reduce the size of
the buffer pool for the table space and still get good performance. In addition, files are more
useful when you want to avoid the extra administrative effort associated with setting up and
maintaining devices. Finally, a file may be preferable when a container size is small, since a
device can only support one container, and placing a small container in a large device would be
a waste of space.
3-38
There are several factors to consider when designing system catalog space. If you want to
maximize storage capacity, use an SMS table space since pages are allocated only as they are
needed, and most system catalog tables are small. If you use a DMS table space, create one with
a small extent size (2–4 pages).
If you want to take advantage of the file system cache for LOB data types, use either an SMS
table space, or a DMS table space with file-type containers.
If the database is expected to grow and the size cannot be predicted, use a DMS table space
since this type of table space has the option of adding containers.
3-39
3-40
When planning for the storage of user table data, consider the following factors:
Amount of data — If the design involves tables with a small amount of data, consider
using SMS table spaces. For larger, more frequently accessed tables, DMS table spaces
are the more prudent choice.
Type of data — Infrequently used data without critical response time requirements may be
placed on slower, less expensive devices.
Minimizing disk reads — It may be beneficial for user table spaces to have their own
bufferpool.
Recoverability — Group related tables into a single table space. They may be related via
referential integrity, triggers, or structured data types. Since backup and restore utilities
work at the table space level, all the tables in one table space stay consistent and
recoverable.
3-41
3-42
Creating an Instance
4-2
Instance
4-3
An instance comprises the database manager, the databases that are assigned to it, and a
configuration file called the DBM CFG. All of the configuration parameters for the instance are
contained within this file. The DBM CFG file is actually a file in the instance home directory
named db2systm. However, you can only edit this file using DB2 commands or the GUI tools
and not normal text editors, so it is best to refer to it by the logical name DBM CFG.
Tip In addition to the parameters found in the DBM CFG file, there are registry
variables that modify the behavior of the instance. They are similar to environment
variables. The registry variables are discussed later in this module.
There can be many databases for one instance and many instances on one machine. A single
database, however, can only belong to one instance.
4-4
Each of these users will be discussed in following slides. To create users and groups on a
system, you should consult your operating system documentation. However, the command
samples below should give you an idea as to the steps required.
To create a user and group as the owner of the instance, where the group is instgrp and the user
is instusr, type:
mkgroup instgrp
mkuser pgrp=instgrp instusr passwd instpwd
To create a fenced user and fenced group where the group is fencgrp and the user is fencusr:
mkgroup fencgrp
mkuser pgrp=fencgrp fencusr passwd fencpwd
To create users and groups, you need root access on UNIX-based systems or local Administrator
access on Windows and OS/2 operating systems.
4-5
Before a database manager instance can be created on UNIX platforms, a user must exist to
function as the systems administrator (SYSADM) for the instance. Some thought should be
given to the name chosen for this user, because the name of the database manager instance is the
same as the name for this user. This user also becomes the owner of the instance. When the
instance is created, this user's primary group name is used to set the value of the database
manager configuration parameter SYSADM_GROUP. Any additional users that wish to have
SYSADM authority on the instance must also belong to this group. SYSADM authority has total
authority over all functions for the instance in a similar way that root has total authority on a
UNIX system, or Administrator has total authority on a Windows system.
4-6
Before a database manager instance can be created on a UNIX platform, a user must exist that
can run any user-defined functions (UDFs) and stored procedures in a fenced mode. This user is
necessary since UDFs can be created using the C programming language, which can use pointers
to reference memory addresses outside of its defined memory space. To prevent a poorly written
UDF from corrupting the DB2 UDB memory, UDFs are commonly run in a fenced section of
memory to prohibit references to memory addresses outside of the fence.
4-7
If this installation of DB2 UDB is a new installation, then a database administration server
(DAS) is automatically created along with the first database manager instance. During the
installation process, you are asked to provide a name for the DAS. The user name that you
provide becomes the name of the DAS and the installing user has SYSADM authority on the
DAS. In addition, the registry variable DB2ADMINSERVER is set to the name of the DAS. If
you do not plan on using the GUI administration tools, the DAS is not needed and can be
dropped after the database manager instance has been created.
You can drop the DAS by using the dasidrop command (UNIX) or the db2admin drop
command (Windows).
If, at a later date, you decide to use the GUI administration tools and you need to have a DAS,
you can create one using the dasicrt command (UNIX), or the db2admin create command
(Windows).
4-8
The DAS is a special DB2 process for managing local and remote DB2 servers. There can be
only one DAS per machine and it listens to port 523. The DAS:
Is a special purpose DB2 server
Has no user-accessible databases
The DAS is used to satisfy requests from the DB2 administration tools such as the Control
Center and the Configuration Assistant. Some examples of these requests are:
Obtain user, group, and operating system configuration information
Start/stop DB2 instances
Set up communications for DB2 server instances
Return information about the DB2 servers to remote clients
Collect information results from DB2 Discovery
The DBM CFG file for the DAS is similar to the DBM CFG files used by other instances, except
that it only contains a subset of the parameters found in a normal DBM CFG file. Unlike other
DBM CFG files, it is actually a file named das2systm that resides in the home directory of the
instance, and you cannot modify it with normal text editors.
4-9
The DB2 utility used to create the database manager instance is db2icrt. In the example above, a
database manager instance is created with the name instusr and assigned a fenced user named
fencusr. The user instusr is the owner of the instance and is assigned SYSADM authority over
the instance.
In addition, all files associated with the instance, plus any default SMS table spaces, are created
in the $HOME directory for user instusr.
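The command described above might look like the following sketch; the -u option designates the fenced user, and the user names instusr and fencusr come from the classroom example in the text:

```shell
# Create an instance owned by instusr, with fencusr as the fenced user.
# Typically run as root on the UNIX server.
db2icrt -u fencusr instusr
```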
4-10
The db2icrt command installs and configures the database manager instance on the UNIX
server. Normally only the user root has authority to run this command, but in our classroom
environment, the student logins have been given authority to run this command.
The environment variable DB2INSTANCE is set to the name of the database manager instance
and PATH is set to include the path to the DB2 UDB binary files. A new directory, sqllib, is
created in the $HOME directory of the user specified as the SYSADM.
If it is a new installation on a Windows system, a DAS is created. The DAS is not created on
Linux or UNIX systems.
The communications protocols that are supported on the server are examined and entries are
made in the operating system services file to allow communications with the database manager
instance.
Finally, the files necessary to set environment variables are created. The first of these two files is
db2profile (or db2bashrc or db2cshrc, depending on your shell), which sets the default
environment variables. This file is often overwritten by new versions of DB2 UDB or by
FixPaks, and you should not make any changes to it. The second file is called userprofile and is
provided for your use to set environment variables unique to your installation. It is not
overwritten by new versions of DB2 UDB or by FixPaks.
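To pick up these settings in a login session, the profile file is typically sourced from the instance owner's sqllib directory. A sketch, assuming the instusr instance from the earlier example:

```shell
# Bourne, Korn, and bash shells source db2profile;
# C shell users source db2cshrc instead.
. /home/instusr/sqllib/db2profile
```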
4-11
Installing DB2 on a Windows platform is much simpler than on a UNIX platform. During
installation, a default DAS is created named db2das00, and a default instance is created named
db2. During the installation process, the installation program prompts you for the name of the
user that you want to be the system administrator for both of these instances. If the user does not
exist, the program asks if you want one created.
The installation program builds the C:\Program Files\SQLLIB\DB2DAS00 and the
C:\Program Files\SQLLIB\DB2 directories and puts the files associated with each instance in
the appropriate directory (such as the db2systm file). The installation program also builds the
directory C:\DB2\NODE0000 and places any table spaces for any databases associated with the
DB2 instance in this directory.
4-12
Use the following command if you would like to drop a database manager instance on either
UNIX or Windows:
Syntax:
db2idrop instance_name
Example:
db2idrop instusr
Use the following command if you would like to drop the DAS on a UNIX platform:
dasdrop
Use the following command if you would like to drop the DAS on a Windows platform:
db2admin drop
In order to execute these commands you need root access on UNIX based systems or local
Administrator access on the Windows operating system.
4-13
4-14
The DAS normally starts automatically when the operating system boots. However, the DAS
can be set to start manually. In this case, you must use the db2admin start command to start the
DAS. Since there is only one DAS per machine, it is not necessary to specify the name of the
DAS.
You can start a normal instance in two different ways depending on which tool you use:
Command Line Processor — Enter db2start at the command prompt. The CLP starts the
database manager instance specified in the environment variable DB2INSTANCE.
GUI Control Center — Invoke the Control Center and expand the objects in the left pane
until the instances are visible. Then right-click on the instance you want, and a menu
appears. Click on the Start menu option.
4-15
You can use the Command Line Processor to access the instance
configuration file:
Use the GET DBM CFG command to view current values
Use the UPDATE DBM CFG command to change values
4-16
If you wish to view the current DBM CFG parameter values, type the command:
db2 GET DBM CFG
This returns a list of all of the configuration parameters and their current values. For illustration
purposes, here are a couple of configuration parameters:
MAXAGENTS indicates the maximum number of database manager agents (db2agent)
available at any given time to accept application requests.
NUMDB limits the maximum number of concurrently active databases.
If you wish to change the current values of these parameters, enter the following command:
Syntax:
UPDATE DBM CFG USING parameter value [parameter value...]
Example
db2 UPDATE DBM CFG USING MAXAGENTS 10 NUMDB 3
Note that you can change several parameters at one time by listing them in sequential order.
Once values have been updated, they do not take effect until the database manager is restarted.
To see a list of current and pending configuration values, run the command:
db2 GET DBM CFG SHOW DETAIL
4-17
To configure DBM CFG parameters in the Control Center, expand the objects in the left pane of
the Control Center until the instances are visible. Right-click on the instance you want and select
Configure Parameters to display the DBM Configuration window.
4-18
The parameters are grouped in the DBM Configuration window into related categories. Scroll
through the list of parameters to find the desired category, then locate the parameter you want to
change in that section. Click on the parameter, and then click on the value in the next column to
change it. The Hint section at the bottom contains a detailed description of the parameter plus
the value ranges that are valid for the parameter.
To modify a parameter value, highlight the parameter and enter a value in the Value field at the
bottom left of the screen. Some parameters require you to select from a drop-down list that
appears in the Value field.
4-19
The DB2 Profile Registry holds variable values that function similarly to environment variables
and control the DB2 environment. However, there are very few environment variables
recognized by DB2. Registry variables have two distinct advantages: they take effect
immediately without having to restart the instance, and they are in a centrally located registry
and are easily managed. The registry variables are available in both UNIX and Windows
environments.
The variables in the registry vary by platform, but here are a few examples that are common to
all platforms:
DB2CODEPAGE specifies the code page used for data presented to DB2. If it is not set,
DB2 uses the code page of the operating system.
DB2DBFT specifies the default database for implicit connections.
DB2COMM specifies which DB2 communication listeners are started when the database
manager is started. If this is not set, no DB2 communications managers are started at the
server.
DB2 Profile Registry variables are stored in profile files on UNIX platforms and in the Registry
on Windows platforms.
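Registry variables are read and set with the db2set command. A sketch using the variables described above:

```shell
db2set -all            # list registry variables at every level
db2set DB2COMM=tcpip   # start the TCP/IP listener at db2start
db2set DB2DBFT=sample  # default database for implicit connections
```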
4-20
4-21
4-22
4-23
Most system administration operations require that you have a certain level of authority or
privilege in order to perform them. Some of these operations are shown above.
4-24
There are two ways to configure connectivity between a client machine and a server machine:
manually, or with the Configuration Assistant, a GUI-based tool.
Both of these methods are discussed on the following slides.
4-25
The client must be able to identify the server on the network. To do this, the client must have
information about the server, such as the communications protocol and the server name, in its
system catalog. In order to recognize a database, the client must have information about the
database, such as the database name and alias, in its system catalog. Finally, there are some
additional steps required that are specific to the communications protocol being used.
4-26
In the scenario, you will be setting up client/server connectivity on a Windows client that will be
connecting to a UNIX server, which has a DB2 instance that contains a database called
SAMPLE.
The server has already been set up with the following:
The value of the registry variable DB2COMM has been set to tcpip.
There is a valid entry in the UNIX services file that identifies a TCP/IP protocol with a
port number; in our case, it is port number 3700.
The DBM CFG parameter SVCENAME has been set to the same name that was used for
the TCP/IP port number 3700 in the services file.
The Windows client has already been set up with a name and IP address in the hosts file that
will resolve the server on the network. In our scenario, the host name is db2server and the IP
address is 9.186.128.141.
Example:
CATALOG TCPIP NODE db2serv REMOTE 9.186.128.141
SERVER 3700
4-27
The syntax required to catalog the server on the client machine is shown above. Note that the IP
address could have been replaced with the server name and the port number could have been
replaced with the service name. In addition, the port number on the client and the server must be
the same.
Example:
CATALOG DATABASE sample AS srv_samp
AT NODE db2serv
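Once both the node and the database are cataloged on the client, you can verify the connection from the CLP. A sketch; the user ID is a placeholder:

```shell
# DB2 prompts for the password, then returns the database
# connection information if the catalog entries are correct.
db2 CONNECT TO srv_samp USER db2admin
```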
4-28
4-29
You must have either SYSADM or SYSCTRL authority to catalog a node or database.
4-30
The Configuration Assistant is a GUI tool that takes advantage of the discovery function of DB2
to automate the configuration of remote databases. The discovery function operates by searching
the network for all the DAS instances, normal instances, and databases that have their discovery
configuration parameters set to allow them to be discovered. These instances and databases
reply to the CA’s discovery request with their connectivity information. The instances and
databases that do not have their discovery configuration parameters set to allow for discovery do
not reply and remain hidden. Therefore, it is possible to have some machines, instances, and
databases that are discoverable and some that are hidden. The discovery function has two
operating modes:
Search discovery — The network machines are searched for all instances and databases
with configuration parameters set to allow them to be discovered by the discovery
function.
Known discovery — A specific hostname is provided to the discovery function and the
network is searched for that machine. Any instances and databases on that machine that
are discoverable reply to the CA request.
4-31
You must set the database manager and database configuration parameters to enable the proper
functioning of the discovery feature. There are configuration parameters at the DAS level, the
database manager instance level and the database level. Therefore, it is possible to have whole
machines, just instances, or just databases that do not respond to a discovery request.
There are two discovery parameters at the DAS level:
DISCOVER_COMM — This discovery parameter defines protocols that clients use to
issue search discovery requests. The valid values are TCPIP and NETBIOS, or a
combination of both. There is no default.
DISCOVER — This discovery parameter determines the type of discovery mode that is
started when the DAS starts. The valid values are SEARCH, KNOWN, and DISABLE,
and the default is SEARCH.
SEARCH — When the DAS starts, connection protocols for all of the connections
specified in the DBM CFG parameter, DISCOVER_COMM and the registry
variable, DB2COMM are started. SEARCH provides a superset of the
functionality provided by KNOWN discovery; when set to SEARCH, the DAS server
handles both search and known discovery requests from clients.
4-33
To invoke the CA, click Start > Program Files > IBM DB2 > Set-up Tools >
Configuration Assistant. You can also start the CA by executing the db2ca command from a
command line.
4-34
Add a remote database by selecting Selected > Add Database Using Wizard from the menu
bar.
4-35
The Add Database wizard shows a list of pages on the left side of the window. This list of pages
changes according to selections you make on the first and on subsequent pages.
The first page of the Add Database wizard is the Source page where you can choose a method of
connection to the database. You can choose to:
Create and use a file that contains connection profile information.
Search the network to find local or remote databases.
Manually configure a connection.
To add a remote database, select the Search the network option and press Next.
4-36
On the Network page, you choose whether you want to add a new database that is already
known by your local instance or whether you want to search the network for other databases.
Expand one of these options to display the database you wish to add and select the database.
Click Next to continue to the Alias page.
Alias Page
An alias can be specified on this page to provide a local name for a remote database. Provide a
database alias and click Next.
4-37
When you click Finish, you will receive a confirmation window. You can then verify the
connection by entering your user ID and password and clicking on the Test Connection button
in the confirmation window.
4-38
4-39
4-40
5-2
5-3
The parameters for each individual database are stored in the database configuration file, or DB
CFG. This file is named sqldbcon and is located in the /NODE0000/SQLnnnnn directory,
where nnnnn is the number assigned to the database when it was created. Even though the file
physically exists as the file sqldbcon, it can only be viewed and modified by using DB2
commands through the CLP, or by using the DB2 UDB Control Center. It cannot be edited using
a normal text editor. Therefore, it is best to refer to this file by its logical name of DB CFG.
The starting point for the directory /NODE0000/SQLnnnnn is one of the options specified in
the CREATE DATABASE statement, or the value assigned to the DFTDBPATH database
manager (DBM) configuration parameter for the instance.
You can view and update the DB CFG file using the CLP:
Use GET DB CFG command to view DB CFG values.
Syntax
db2 GET DB CFG FOR database_name
Example
db2 GET DB CFG FOR db1
Use the UPDATE DB CFG command to update DB CFG values.
Syntax
db2 UPDATE DB CFG FOR database_name USING parameter value
Example
db2 UPDATE DB CFG FOR db1 USING buffpage 1000
5-4
When you use the GET DB CFG command to view the DB CFG parameter values, you get a list
of all the configuration parameters for a database and their assigned values. The CLP accepts
several variations in the keywords for the GET DB CFG command. The following commands
are equivalent:
db2 GET DB CFG FOR database_name
db2 GET DATABASE CFG FOR database_name
db2 GET DATABASE CONFIG FOR database_name
db2 GET DATABASE CONFIGURATION FOR database_name
To help you understand the purpose of the DB CFG parameters, here are a couple of examples:
BUFFPAGE — This parameter is the default bufferpool size that is used when the
CREATE BUFFERPOOL statement does not specify the size.
DFT_EXTENT_SZ — This parameter is the default table space extent size, used when the
size is not specified at table space creation time.
5-6
You can also manage the DB CFG file by using the DB2 Control Center. Invoke the Control
Center by selecting Start > Program Files > IBM DB2 >General Administration Tools >
Control Center
Expand to the Databases folder in the Control Center, right-click on the desired database, and
select Configure Parameters. This displays the Database Configuration window.
5-7
The database configuration parameters are grouped into categories that are accessible by
scrolling down to the desired section heading, then selecting the desired parameter in that
section.
You can modify the parameter values in the Value column located to the right of the parameter
name. You can view a detailed description of the parameter in the Hint box located at the
bottom of the window.
Click OK after you have finished changing the parameter values, but be aware that the changes
do not take effect until the database is restarted.
5-8
A database is implicitly started when the first application connects and is stopped when the last
application disconnects. When a database is implicitly started by the first connection, all
necessary services are started, the required memory is allocated, and only then is the database
ready to process the request by the application. As soon as all applications have disconnected
from the database, all services are stopped, the required memory is released, and the database is
stopped.
To explicitly start a database, use the ACTIVATE DATABASE command:
Syntax
db2 ACTIVATE DATABASE database_name
Example
db2 ACTIVATE DATABASE sample
When the database is started by using the ACTIVATE DATABASE command, all necessary
services are started, the required memory is allocated, and the database is idle, but ready for the
first connection.
If ACTIVATE DATABASE was used to start the database, then the database must be explicitly
stopped by using the DEACTIVATE DATABASE command. Until this command is issued,
processes are not implicitly stopped, nor is the required memory released, when the last
application disconnects.
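The activate/deactivate pair described above can be sketched as:

```shell
db2 ACTIVATE DATABASE sample    # start services, allocate memory, wait idle
db2 DEACTIVATE DATABASE sample  # stop services and release memory
```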
5-10
If you need to forcefully disconnect all applications on the instance and stop a database, you can
use the FORCE APPLICATION command:
db2 FORCE APPLICATION ALL
It is also possible to force individual applications off of the instance by using a combination of
the LIST APPLICATIONS command and the FORCE APPLICATION command:
LIST APPLICATIONS — This command provides you with descriptive information,
including an application_handle, for all the applications that are connected to the
instance.
FORCE APPLICATION (application_handle) — Use this command to force the
application specified by the application handle off of the instance. For example:
db2 FORCE APPLICATION (1)
The application connection is terminated and any uncommitted transactions are rolled
back.
5-11
Above is a list of the command options that have been discussed and the authority and privilege
required.
5-12
Schemas are database objects used in DB2 to logically group a set of database objects. Most
DB2 objects are named using a two-part naming convention where the first part of the name is
the schema—otherwise known as a qualifier for the database object—and the second part is the
object name. This format is schema.object_name; for example, the fully qualified name of the
customer table might be db2admin.customer.
When you create an object, and you do not specify a schema, the object is associated with an
implicit schema based on the login that you are using to access the database. For example, if you
logged in as bobjones and created a customer table, then the full two-part table name would be
bobjones.customer. When the login is used in an implicit schema, it is referred to as an
authorization ID.
When an object is referenced in an SQL statement, it is also implicitly qualified with the
authorization ID of the issuer if no schema name is specified in the SQL statement. For example,
if you logged on as bobjones and issued the SQL statement:
SELECT * FROM customer
the bobjones.customer table is accessed.
5-13
CURRENT SCHEMA is a special register that contains the default qualifier used for
unqualified objects referenced in dynamic SQL statements. The value of CURRENT SCHEMA
is initially set to the value of the authorization ID and can be reset using the SQL statement, SET
CURRENT SCHEMA.
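For example (a sketch; the INVENTORY schema name is illustrative):

```sql
VALUES CURRENT SCHEMA              -- initially the authorization ID, e.g. BOBJONES
SET CURRENT SCHEMA = 'INVENTORY'   -- unqualified names now resolve to INVENTORY
SELECT * FROM customer             -- accesses inventory.customer
```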
Special registers are a set of storage values that are defined for an application process by the
database manager when the application connects to a database. Each connection is assigned its
own private set. They are used to store values that are accessible by using keywords in an SQL
statement. For example, the special register USER is set to the value of the login name for the
application’s user and can be used in an SQL statement, such as:
SELECT password FROM password_table WHERE user_id = USER
5-14
The CREATE SCHEMA statement creates a new schema in the database. The principal
components are:
schema_name — This name identifies the new schema and it cannot identify a schema
that already exists. The name cannot begin with sys, which is reserved for schemas that
are created by the database manager when the database is created. If schema_name is
used without an authorization_name then schema_name is also used to identify the user
who owns the schema.
auth_name — This name identifies the user who owns the schema. If auth_name is
used without schema_name, then auth_name is also used as the name of the schema.
schema_name AUTHORIZATION auth_name — These two names identify both the
name of the schema and schema owner when you want them to be different values.
sql_statement — This is an optional clause that allows SQL statements to be included
as part of the CREATE SCHEMA statement. Acceptable SQL statements include:
CREATE TABLE, CREATE VIEW, CREATE INDEX, COMMENT ON, and GRANT.
5-15
The first statement in the slide above creates a schema named admin that is owned by the user
admin. The second statement creates a schema that is owned by the user db2admin and is also
named db2admin. The third statement creates a schema that is named admin but is owned by
the user db2admin.
5-16
The example in the slide above creates a schema named inventory. It then creates a table and
index that become part of the inventory schema and grants all permissions on the table to the
db2admin user. Note that a CREATE SCHEMA statement can have multiple SQL statements
embedded into it.
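The slide's statement might read as follows; this is a reconstruction sketch, with the table, index, and column names being illustrative:

```sql
CREATE SCHEMA inventory
  CREATE TABLE parts (partno INTEGER NOT NULL, descr VARCHAR(30))
  CREATE INDEX partind ON parts (partno)
  GRANT ALL ON parts TO USER db2admin
```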
5-17
System catalog tables contain information about the definitions of the database objects (tables,
views, indexes, and packages) and security information about the type of access users have to
these objects. Catalog tables are stored in the syscatspace table space and assigned to the
sysibm schema. These tables are updated during the operation of a database; for example, when
a table, view, or index is created.
The tables belong to the sysibm schema. They cannot be directly created or dropped; however,
they can be updated through a set of views that belong to the sysstat schema.
The following database objects are defined in the system catalog:
A set of user-defined functions (UDFs) is created in the sysfun schema
A set of read-only views for the system catalog tables is created in the syscat schema
A set of updateable catalog views is created in the sysstat schema
5-18
You can query the views associated with the syscat schema. For example, you can retrieve
data about the tables in the database. The next few slides illustrate a sampling of the other
information that is available.
5-19
Above is an example of a query to search for table spaces created by user inst00.
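The slide's query might resemble the following sketch against the SYSCAT.TABLESPACES view; verify the column names against your DB2 version:

```sql
SELECT tbspace, definer, create_time
  FROM syscat.tablespaces
 WHERE definer = 'INST00'
```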
5-20
The example above queries the catalog tables for information about bufferpools.
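A sketch of such a query against the SYSCAT.BUFFERPOOLS view:

```sql
SELECT bpname, npages, pagesize
  FROM syscat.bufferpools
```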
5-21
In the slide above, the SQL statement returns the name and type for all the constraints associated
with the employee table. The values returned for the type column are:
F — foreign key
K — check constraint
P — primary key
U — unique
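A sketch of the constraint query described above, using the SYSCAT.TABCONST view:

```sql
SELECT constname, type
  FROM syscat.tabconst
 WHERE tabname = 'EMPLOYEE'
```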
5-22
The data type large object (LOB) is a special category of data types provided to store large data
values:
Binary large objects (BLOBs)
Single-byte character large objects (CLOBs)
Double-byte character large objects (DBCLOBs)
There are several size limitations for large objects:
Any single LOB value cannot exceed 2 gigabytes.
Any single row in the table cannot contain more than 24 gigabytes of LOB data.
Any single table cannot contain more than 4 terabytes of LOB data.
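A table using these types might be declared as follows (a sketch; the table and column names are illustrative):

```sql
CREATE TABLE candidate
  (id     INTEGER NOT NULL,
   photo  BLOB(1M),      -- binary data, up to 1 MB
   resume CLOB(32K),     -- single-byte character data
   notes  DBCLOB(16K))   -- double-byte character data
```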
5-23
5-24
5-25
Some of the benefits of using global temporary tables are listed above.
5-26
The following code examples illustrate how temporary tables are implemented:
Example 1
CONNECT TO sample
DECLARE GLOBAL TEMPORARY TABLE tempdata
(id INTEGER, name CHAR(10))
ON COMMIT DELETE ROWS NOT LOGGED IN dectemptab
DECLARE GLOBAL TEMPORARY TABLE tempinv
(item CHAR(10), count INTEGER)
ON COMMIT PRESERVE ROWS NOT LOGGED IN dectemptab
INSERT INTO session.tempdata VALUES(1,'John')
INSERT INTO session.tempdata VALUES(2,'Susan')
INSERT INTO session.tempinv VALUES('wheel',2)
SELECT * FROM session.tempdata --returns 2 rows
SELECT * FROM session.tempinv --returns 1 row
COMMIT
SELECT * FROM session.tempdata --returns 0 rows
SELECT * FROM session.tempinv --returns 1 row
COMMIT
CONNECT RESET --tables are dropped
5-28
Views are logical tables that are derived from one or more base
tables or views.
5-29
Views are logical tables in that they do not contain any data. They only exist as a definition in
the system catalog tables for the database. A view can be thought of as a SELECT statement that
returns data from one or more underlying base tables or other views. The view has a name like a
regular table, and as far as the user is concerned, the view responds the same as a regular table.
The syntax for the SQL statement CREATE VIEW contains the following components:
view_name — Names the view. The name cannot match an existing table or view.
column_names—Names the columns in the view. If a list of column names is specified,
it must consist of as many names as there are columns in the result table of fullselect.
Do not specify the data type for columns, as it is the same as the base table or view.
fullselect — Defines the view. At any time, the view consists of the rows that would
result if the fullselect were executed.
CHECK OPTION — Specifies the constraint that every row inserted or updated
through the view must conform to the definition of the view. The constraint is
propagated to dependent views.
CASCADED — The WITH CASCADED CHECK OPTION constraint on a view
means that the view inherits the search conditions as constraints from any updateable
view on which the view is dependent. CASCADED is the default for WITH CHECK
OPTION.
5-31
5-33
In the slide above, the view emp_view2 is created with the WITH CASCADED CHECK
OPTION, and performs a select on the view emp_view1. Therefore, an INSERT or UPDATE
statement for emp_view2 would have to meet both the WHERE clause condition in emp_view2
(work_dept='B00') and the WHERE clause condition in emp_view1 (work_dept='A00').
If the emp_view2 view was created instead with the WITH LOCAL CHECK OPTION, then any
INSERT or UPDATE statement would only check for the WHERE clause condition in
emp_view2 (work_dept= 'B00').
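The two views discussed above might be defined as follows; this is a reconstruction sketch, and the column list follows the employee table referenced in the text:

```sql
CREATE VIEW emp_view1 AS
  SELECT empno, lastname, work_dept FROM employee
   WHERE work_dept = 'A00'
  WITH CASCADED CHECK OPTION

CREATE VIEW emp_view2 AS
  SELECT empno, lastname, work_dept FROM emp_view1
   WHERE work_dept = 'B00'
  WITH CASCADED CHECK OPTION
```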
5-34
5-35
In the Create View window, specify the View schema and View name, and select one of the
Check options in the middle of the window.
You can either write the SQL statement manually in the SQL statement field, or you can use
SQL Assist by clicking on the SQL Assist button.
5-36
SQL Assist is a tool for creating SQL statements. In the Outline pane, the clauses of an SQL
statement are presented in a hierarchical form. You create the SQL statement by selecting each
clause and choosing options for that clause that appear in the Details pane. For example, choose
the FROM clause to select the tables you want to use in your query and choose the WHERE
clause to create the query filters. The SQL code field displays the SQL statement you have
created based on selections made.
5-37
The employee table has been selected. Continue selecting clauses and choosing options for each
clause and watch as the statement is created in the SQL code section.
5-38
A DB2 federated system consists of a DB2 server (called a federated server), a DB2 database,
and a set of diverse data sources to which DB2 sends queries.
In a federated system, each data source consists of an instance of a relational database
management system (RDBMS), plus the database(s) that the instance supports.
A DB2 federated system provides location transparency for database objects.
A DB2 federated server provides compensation for data sources that do not support all of the
DB2 SQL dialect or certain optimization capabilities.
5-39
The federated database contains catalog entries identifying data sources and access methods.
These catalog entries contain information about federated database objects: what they are called,
information they contain, and conditions under which they can be used. Applications connect to
the federated database just like any other DB2 database.
The following federated system objects are considered essential:
Wrappers — Identify the module (DLL or library) used to access a particular type of
data source.
Servers — Define the data source. Server data includes the wrapper name, server name,
server type, server version, authorization information, and server options.
Nicknames — Identify specific data source objects (such as tables or views).
Applications reference nicknames in queries just like they reference tables and views.
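Defining these three objects might look like the following sketch; the wrapper, server, and nickname names are illustrative, and the exact options vary by data source and DB2 version:

```sql
CREATE WRAPPER drda                               -- module for DB2-family sources
CREATE SERVER udbsrv TYPE db2/udb VERSION '8.1'
       WRAPPER drda
       AUTHORIZATION "remoteid" PASSWORD "remotepw"
       OPTIONS (DBNAME 'REMOTEDB')
CREATE NICKNAME rsales FOR udbsrv.db2admin.sales  -- local name for a remote table
```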
5-40
5-41
Creating Indexes
6-2
6-3
An index is a database object that consists of an ordered list of values with pointers to
corresponding values in a column on a table.
Any permanent table (user table or system table) can have indexes defined on it.
Multiple indexes can be defined on a single table.
Computed columns can have indexes created on them (in Version 7 and later).
Indexes cannot be defined on a view.
Indexes are used for two primary reasons:
Ensure uniqueness of data values.
Improve SQL query performance.
6-4
Version 8.1 of DB2 UDB introduced a new format for indexes called the type-2 index. To the
database administrator, there is no apparent difference between a type-1 index, the type of
indexes used before DB2 Version 8.1, and a type-2 index. The differences are primarily
architectural, but the new format generally results in better overall index
performance.
All indexes created in DB2 UDB Version 8 are automatically created as type-2 indexes, unless
created on a table that already has existing type-1 indexes. Since a table can have only indexes
of one type, it is necessary to convert the type-1 indexes to type-2 indexes before you can create
additional type-2 indexes on a table. You can convert a type-1 index to a type-2 index using the
REORG command. The syntax for this command is shown here:
REORG INDEXES ALL FOR TABLE table_name
Once the indexes for a table have been rebuilt as type-2 indexes, the table can begin to take
advantage of new performance features.
6-6
6-7
UNIQUE INDEX
A unique index prevents the table from containing two or more rows with the same index key
value. The uniqueness is enforced at the completion of SQL statements that update rows or
insert new rows.
The uniqueness is also checked during the execution of the CREATE INDEX statement. If the
table already contains rows with duplicate key values, the index is not created.
When UNIQUE is used, null values are treated as any other values. For example, if the key is a
single column that may contain null values, that column may contain no more than one null
value.
index_name
This specifies the name of the index or index specification. The index name, including its
implicit or explicit schema qualifier, must not identify an index or index specification
already described in the catalog.
The schema qualifier must not be SYSIBM, SYSCAT, SYSFUN, or SYSSTAT.
6-8
ON table_name
This specifies the name of a table on which an index is to be created. The table must be a base
table (not a view) or a summary table described in the catalog. Indexes can be created on
permanent user tables and declared temporary tables, but they cannot be created on catalog
tables.
column_name
This identifies a single column, or a comma-separated list of columns, that forms the index key.
Each column name must be unqualified. Up to 16 columns can be specified for a persistent table
and 15 columns for a typed table. The sum of the stored lengths of the specified columns must
not be greater than 1024 bytes for a persistent table and 1020 bytes for a typed table. The length
of an index key cannot be more than 255 bytes.
ASC or DESC
ASC specifies that index entries are to be kept in ascending order of the column values; this is
the default setting used when neither ASC nor DESC is specified. DESC specifies that index
entries are to be kept in descending order.
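As a sketch, both orders can be mixed in one index (table and column names are illustrative):

```sql
-- Entries kept by ascending department, then descending salary
CREATE INDEX dept_sal_ix ON employee (workdept ASC, salary DESC)
```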
INCLUDE
INCLUDE can only be specified with UNIQUE indexes—the option allows you to specify
additional columns to be stored in the index record with the set of index key columns. The
columns included with this clause are not used to enforce uniqueness and are not used for
sorting the index, but they do require additional storage space in the index.
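A brief sketch of the INCLUDE clause, again using illustrative names:

```sql
-- EMPNO alone enforces uniqueness; LASTNAME is carried in the
-- index record so it can be returned by index-only access
CREATE UNIQUE INDEX empno_inc_ix ON employee (empno) INCLUDE (lastname)
```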
CLUSTER
The CLUSTER option specifies that the index is used for clustering the table. Only one
clustering index is allowed for a table, since the table data is physically arranged in the order of
that index.
The cluster factor of a clustering index is maintained or improved dynamically as data is
inserted into the associated table; an attempt is made to insert new rows so that they are
physically close to rows that have key values that are logically close in the index.
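For example (illustrative names):

```sql
-- Table data is kept clustered in work-department order
CREATE INDEX dept_clix ON employee (workdept) CLUSTER
```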
PCTFREE integer
This specifies the percentage of each index page that should be left as free space when building
the index. You should plan for free space on every index page so that when the index key is
updated to a length greater than the previous length, the entries do not spill onto a new page.
The value of integer can range from 0 to 99. However, if a value greater than 10 is specified,
leaf pages are created with the specified amount of free space, but non-leaf pages are created
with only 10 percent free space. The default setting is 10 percent.
MINPCTUSED integer
This indicates whether indexes are reorganized online automatically, and sets the threshold for
the minimum percentage of space used on an index leaf page.
If, after a key is deleted from an index leaf page, the percentage of space used on the page is at
or below the integer percentage, an attempt is made to merge the remaining keys on this page
with those of a neighboring page.
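The two options can be combined in one statement; a sketch with illustrative names:

```sql
-- Build leaf pages 20 percent empty; merge a leaf page with a
-- neighbor once it falls to 10 percent used or less
CREATE INDEX sal_ix ON employee (salary) PCTFREE 20 MINPCTUSED 10
```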
Using an index
Indexes are created from the Control Center by expanding the Objects pane to list the database
objects, right-clicking on Indexes, and choosing Create. In the Create Index window you
specify the index schema and name, the source table schema and name, the column(s) to include
in the index, the type of index to create, and other options. The Create Index window is shown
on the next page.
The Design Advisor is a management tool that reduces the effort of designing and defining
suitable indexes for your data.
Use the Design Advisor to:
Find the best indexes for a problem query.
Find the best indexes for a set of queries (a workload), subject to resource limits which are
optionally applied.
Test an index on a workload without having to create the index.
You can invoke the Design Advisor from either the Control Center or from the DB2 CLP:
From the Control Center, right click on a database and select Design Advisor to invoke
the Design Advisor wizard. The Design Advisor recommendations are part of the wizard
notebook.
For the DB2 CLP, use the command db2advis with appropriate options from the operating
system prompt.
The introduction page for the Design Advisor is shown below.
The workload page is next. A workload is a set of SQL statements that the database manager
must process over a period of time. The SQL statements can include SELECT, INSERT,
UPDATE, and DELETE. Some statements that are frequently used to access system catalog
tables are provided by default.
Add a workload name and click on one of the buttons on the right to import, add, change, or
remove statements from the workload.
In the Collect Statistics page, you have the chance to make sure that statistics for selected tables
are up to date. Select the tables you want to include in the statistics update and press >, or press
the >> button to select all available tables.
On the Disk Usage page, specify the table space to use for the recommended objects. You can
also set a limit to the amount of space allocated for indexes.
In the Calculate window, indicate when you want the Design Advisor to perform calculations
based on the information provided so far. If you select Now, click on Next to have the Design
Advisor start performing calculations immediately. When the Design Advisor finishes its
calculations, it presents you with its recommendations.
Based on the information you have provided to the Design Advisor, a list of recommended
indexes is shown on the Recommendations page. Time estimates are provided to give you an
idea of the time savings you can expect if you add the recommended indexes.
Finally, the Design Advisor provides a list of existing indexes that are of no use based on the
workload information provided. You can choose to drop these indexes, or keep them if you
think they might be needed in other situations.
On the Schedule page, you can specify when and how to execute the script to create
recommended objects you selected.
The Summary page provides a review of objects you chose to create and drop based on the
recommendations of the Design Advisor. When you click on Finish, the objects are created or
dropped according to the options you chose on the Schedule page.
You can invoke the Design Advisor using the DB2 CLP by executing the db2advis command
with appropriate options from the operating system prompt. The syntax for this command is
shown above, and the command options are described here:
-d database_name specifies the name of the database to which a connection is established.
-w workload_name specifies the name of the workload for which indexes are advised.
-s “sql_statement” specifies the text of a single SQL statement for which indexes are advised.
The statement must be enclosed by double quotation marks.
-i filename specifies the name of an input file containing one or more SQL statements.
Statements must be terminated by a semicolon (;).
-a userid[/passwd] specifies the name and password used to connect to the database. The
slash (/) must be included if a password is specified.
-l disklimit specifies the maximum number of megabytes available for all indexes in the
existing schema. The default is 64 GB.
-t max_advise_time specifies the maximum allowable time, in minutes, to complete the
operation. The default value is 10. Unlimited time is specified by a value of zero.
-h displays help information. When this option is included, all other options are ignored;
only help information is displayed.
In the example above, the utility connects to the sample database, and recommends indexes for
the employee table.
Connection is made to the sample database with the appropriate user ID and password.
The size of all indexes in the existing schema cannot exceed 53 MB.
The maximum allowable time for finding a solution is 20 minutes.
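The command line for such a run might look like the following sketch; the SQL statement text and the userid/password pair are illustrative:

```
db2advis -d sample -a userid/password -s "SELECT * FROM employee WHERE salary > 40000" -l 53 -t 20
```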
Using Constraints
Keys are a special set of columns defined on a table. Their purpose is to do any one of the
following:
Identify a row
Reference a uniquely identified row from another table
Ensure uniqueness of column values
Keys can be classified by the columns from which they are composed, or by the database
constraint they support.
Composition:
An atomic key is a single column key.
A composite key is composed of two or more columns.
Constraints:
A unique key is used to implement unique constraints.
A primary key is used to implement entity integrity constraints.
A foreign key is used to implement referential integrity constraints.
A primary key is a special type of unique key. Apart from guaranteeing uniqueness on column
values, it also serves as the target that foreign keys in other tables reference.
Important characteristics of a primary key include:
There can only be one primary key per table.
The primary key column must be defined as NOT NULL.
DB2 creates a system-generated unique index on the primary key column(s) if one does
not already exist.
You can define a primary key in two different places in the CREATE TABLE statement:
Defining a primary key as part of a column definition: the user cannot control the name of
the primary key constraint.
Defining a primary key after the table definition: the user can name the primary key.
If you define the primary key as part of a column definition, you cannot name the constraint:
CREATE TABLE student (
id INTEGER NOT NULL PRIMARY KEY,
name VARCHAR(30),
subject VARCHAR(20),
position INTEGER NOT NULL
)
If you define the primary key after the table definition, you can name the primary key constraint:
CREATE TABLE student (
id INTEGER NOT NULL,
name VARCHAR(30),
subject VARCHAR(20),
position INTEGER NOT NULL,
CONSTRAINT pk_id PRIMARY KEY(id)
)
Alternatively, you can use the Control Center to create a table, and during table creation specify
the primary key:
Open DB2 Control Center: Go to Start > Program Files > IBM DB2 > General
Administration Tools > Control Center.
In the Control Center, expand to the Tables folder in the sample database.
Right click on Tables folder and select Create > Table to start the Create Table wizard.
On the Table page (shown on the next page):
Enter the schema name and table name.
Click on Next to continue.
To view the SQL statement that will be used to create the table, click on Show
SQL (see below).
Alternatively, you can alter an existing table to add a primary key constraint through the Control
Center:
Open the Control Center by selecting Start > Program Files > IBM DB2 > General
Administration Tools > Control Center. Expand to the Tables folder in the sample
database, right click on the student_cc table, and select Alter. This displays the Alter
Table window:
A foreign key is used to implement referential integrity constraints. Referential constraints
reference only a primary key or unique key.
The values of a foreign key are constrained to have only values defined in the primary key or
unique key that is referenced; alternatively, the foreign key can be set to NULL, if allowed. The
table containing the referenced column is the parent, and the table containing the referencing
column is the child or dependent.
You can specify the following actions for the child table record upon update of parent table
column values:
NO ACTION indicates that an error occurs for the update operation on the parent table,
and no rows are updated.
RESTRICT indicates that an error occurs for the update operation on the parent table,
and no rows are updated. (The two differ in timing: RESTRICT is enforced before any
other constraints are checked, while NO ACTION is enforced at the end of the statement.)
You can specify the following actions for the child table record upon delete of parent table
column values:
NO ACTION indicates that an error occurs for the delete operation on the parent table,
and no rows are deleted.
If you define a foreign key as part of a column definition, you cannot name the constraint:
Parent table definition
CREATE TABLE parent (
id INTEGER NOT NULL PRIMARY KEY,
depname VARCHAR(20)
)
Child table definition: The user does not control the name of the foreign key constraint.
CREATE TABLE child (
id INTEGER,
name VARCHAR(30),
dept INTEGER REFERENCES parent(id)
)
Child table definition: Specifying the action taken when a record in the parent is updated or deleted.
CREATE TABLE child (
id INTEGER,
name VARCHAR(30),
dept INTEGER REFERENCES parent(id)
ON DELETE CASCADE ON UPDATE RESTRICT
)
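If you instead define the foreign key after the column definitions, you can name the constraint; a sketch based on the same parent and child tables:

```sql
CREATE TABLE child (
    id INTEGER,
    name VARCHAR(30),
    dept INTEGER,
    CONSTRAINT fk_dept FOREIGN KEY (dept) REFERENCES parent(id)
        ON DELETE CASCADE ON UPDATE RESTRICT
)
```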
Alternatively, you can use the Control Center to create a table and specify any foreign key. Use
the same process to add a foreign key to a new table as you did to create a primary key. When
you reach the Keys page in the Create Table wizard, do the following:
Press the Add Foreign button to display the Add Foreign Key window, as shown
above.
Highlight a column or a set of columns from the Available columns section. Click on
the selection button (>) to select the column(s) for the foreign key.
Optionally provide action for delete/update operations on the Parent table and
Constraint name.
Click OK.
Repeat this process to add more foreign key constraints, and then press OK.
A unique key is used to implement unique constraints. A unique constraint does not allow two
different rows to have the same values on the key columns.
A table can have more than one unique key defined.
A unique index is always created for unique key constraints; if a constraint name is
defined, it is used to name the index; otherwise, a system-generated name is used for
the index.
Having unique constraints on more than one set of columns of a table is different from
defining a single composite unique key that includes all of those columns. For example, if
we define a composite key on the columns id and name, a name can still be duplicated
with a different id.
You can define a unique key in two different places in the CREATE TABLE statement:
Define the unique key as part of a column definition: the user cannot control the name of
the unique key constraint.
Define the unique key after the table definition: the user can name the unique key constraint.
If you define a unique key as part of a column definition, you cannot name the constraint:
CREATE TABLE unik (
id INTEGER NOT NULL UNIQUE,
name VARCHAR(30),
title CHAR(3)
)
If you define the unique key after the body of the table, you can name the unique key constraint.
CREATE TABLE unik (
id INTEGER NOT NULL,
name VARCHAR(30),
title CHAR(3),
CONSTRAINT unq_id UNIQUE(id)
)
The following statement is invalid—the column id is not defined with the NOT NULL option:
CREATE TABLE unik (
id INTEGER UNIQUE,
name VARCHAR(30),
title CHAR(3)
)
You can use the ALTER TABLE statement in SQL to add a check constraint to an existing table.
ALTER TABLE unik
ADD CONSTRAINT chk_title
CHECK(title LIKE 'M%')
Note that a unique key constraint cannot be altered, but you can drop an existing unique key
constraint and create another unique key constraint with a new definition.
ALTER TABLE unik
DROP CONSTRAINT unq_id
ALTER TABLE unik
ADD CONSTRAINT unq_idname
UNIQUE(id, name)
This statement will fail in the following circumstances:
Data values for combination of columns id and name are not unique.
Either column id or column name has not been defined as NOT NULL.
A check constraint is a rule that specifies the values that are allowed in one or more columns of
every row of a table.
A check constraint enforces data integrity at the table level.
Once a table check constraint has been defined for a table, every INSERT and
UPDATE statement involves checking the constraint.
Check constraints are used to implement business-specific rules, saving the
application developer the work of data validation.
The definitions for all check constraints are stored in the SYSIBM.SYSCHECKS catalog table.
If you define a check constraint as part of a column definition, you cannot name the constraint:
CREATE TABLE chek (
id INTEGER CHECK(id > 5),
name VARCHAR(30),
age INTEGER)
If you define the check constraint after the body of the table, you can name the constraint:
CREATE TABLE chek (
id INTEGER,
name VARCHAR(30),
age INTEGER,
CONSTRAINT chk_idage CHECK(id < age))
The following is an invalid statement—a check constraint defined as part of a column definition can reference only that column:
CREATE TABLE chek (
id INTEGER CHECK(id < age),
name VARCHAR(30),
age INTEGER)
Alternatively, you can use the Control Center to add check constraint(s) at the same time that
you create a table.
When you create the table, go to the Constraints page in the Create Table wizard to add a check
constraint. Click on Add to bring up the Add Check Constraint window. An example is shown
here:
When a Check condition and Constraint name have been added, click OK to return to the
Constraints page. Repeat the above steps to add additional check constraints.
Whenever data is extracted or inserted into the database, particular care must be taken to check
the format of the data. DB2 supports various data formats for extraction and insertion.
The formats include:
Delimited ASCII format (DEL)
Integrated exchange format (IXF)
Worksheet format (WSF)
Non-delimited ASCII (ASC)
A set of utilities is provided with DB2 to populate tables or to extract data from tables. These
utilities enable easy movement of large amounts of data into or out of DB2 databases. The speed
of these operations is very important. When working with large databases and tables, extracting
or inserting new data may take a long time.
These utilities are:
EXPORT
IMPORT
LOAD
The EXPORT utility is used to extract data from a database table into a file.
Data can be extracted into several different file formats, which can be used either by the
IMPORT or LOAD utilities to populate tables. These files can also be used by other software
products such as spreadsheets, word processors, and other RDBMS packages to populate tables
or generate reports.
You must be connected to the database from which data is to be exported.
traversal-order-list:
( sub_table_name [ ,sub_table_name ...] )
Syntax for the EXPORT command is shown above. The options shown are described in the
following pages.
TO file_name
This option specifies the name of the file to which data is exported. If the complete path to the
file is not specified, the export utility uses the current directory and the default drive as the
destination.
If the specified file name already exists, the export utility overwrites the contents of the file; it
does not append the information.
OF file_type
This specifies the format of the data in the output file (DEL, WSF, or IXF).
LOBS TO lob_path
This specifies one or more paths to directories in which the LOB files are to be stored. When file
space is exhausted on the first path, the second path is used, and so on.
LOBFILE lob_file
This option specifies one or more base file names for the LOB files. When name space is
exhausted for the first name, the second name is used, and so on.
When creating LOB files during an export operation, file names are constructed by appending
the current base name from this list to the current path (from lob-path), and then appending a
3-digit sequence number. For example, if the current LOB path is the directory /u/foo/lob/path,
and the current LOB file name is bar, the LOB files created are /u/foo/lob/path/bar.001,
/u/foo/lob/path/bar.002, and so on.
METHOD N
This option specifies one or more column names to be used in the output file. If this parameter is
not specified, the column names in the table are used. This parameter is valid only for WSF and
IXF files, but is not valid when exporting hierarchical data.
MESSAGES message_file
This option specifies the destination for warning and error messages that occur during an export
operation. If the file already exists, the export utility appends the information. If message_file is
omitted, the messages are written to standard output.
select_statement
This specifies the select statement that returns the exported data. If the select statement causes
an error, a message is written to the message file (or to standard output). If the error code is
SQL0012W, SQL0347W, SQL0360W, SQL0437W, or SQL1824W, the export operation
continues; otherwise, it stops.
HIERARCHY traversal_order_list
Export a subhierarchy using the specified traversal order. All subtables must be listed in
PREORDER fashion. The first sub_table_name is used as the target name for the SELECT
statement.
To perform an EXPORT, you must have either SYSADM or DBADM authority on the
database manager, or CONTROL or SELECT privilege on each participating table or
view.
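Putting the options together, a minimal export of a hypothetical staff table to an IXF file might look like:

```
db2 EXPORT TO staff.ixf OF IXF MESSAGES export.msg SELECT * FROM staff
```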
The syntax for the IMPORT command is shown above. The options are described in the
following pages.
FROM file_name
This option specifies the file containing the data being imported. If the path is omitted, the
current working directory is used.
MODIFIED BY filetype_mod
This specifies additional options, such as LOBSINFILE. If the LOBSINFILE modifier is not
specified, the LOBS FROM option is ignored.
compound=x — Nonatomic compound SQL is used to insert the data, and x (a number
from 1 to 100) statements are attempted each time.
generatedignore / identityignore — These modifiers inform the import utility that data
for all generated/identity columns is present in the data file but should be ignored.
generatedmissing / identitymissing — If specified, the utility assumes that the input
data file contains no data for the generated/identity columns.
usedefaults — If a source column for a target table column has been specified, but it
contains no data for one or more row instances, default values are loaded.
Method Options
METHOD L — this specifies the start and end column numbers from which to import
data. A column number is a byte offset from the beginning of a row of data, numbered
starting from 1. (This option can only be used with ASC files, and is the only valid
option for that file type.)
METHOD N — this specifies the names of the columns to be imported. (This option
can only be used with IXF files).
METHOD P — this specifies the indexes (numbered from 1) of the input data fields to
be imported. (This option can only be used with IXF or DEL files, and is the only valid
option for the DEL file type.)
COMMITCOUNT n
This option performs a COMMIT after every n records are imported.
RESTARTCOUNT N
Specifies that an import operation is to be started at record N + 1. The first N records are
skipped.
MESSAGES message-file
Specifies the destination for warning and error messages that occur during an import operation.
If the file already exists, the import utility appends the information. If the complete path to the
file is not specified, the utility uses the current directory and the default drive as the destination.
If message-file is omitted, messages are written to standard output.
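A minimal sketch combining these options, assuming a staff.ixf file produced by an earlier export:

```
db2 IMPORT FROM staff.ixf OF IXF COMMITCOUNT 1000 MESSAGES import.msg INSERT INTO staff
```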
All phases of the LOAD process are part of a single operation, which completes only after all
three phases finish successfully. The three phases are:
Load—data is written into the table.
Build—indexes are created.
Delete—rows that caused a unique constraint violation are removed from the table.
During the LOAD phase, data is stored in a table and index keys are collected.
Save points are established at intervals specified by the SAVECOUNT parameter of the LOAD
command.
Messages report the number of input rows successfully loaded during the operation.
If a failure occurs in this phase, use the RESTART option for LOAD to restart from the last
successful consistency point.
Alternatively, if the failure occurs near the beginning of the load, restart the load from the
beginning of the input file.
During the BUILD phase, indexes are created based on the index keys collected in the load
phase.
The index keys are sorted during the load phase.
If a failure occurs during this phase, LOAD restarts from the BUILD phase.
During the DELETE phase, all rows that have violated a unique constraint are deleted.
If a failure occurs, LOAD restarts from the DELETE phase.
Once the database indexes are rebuilt, information about the rows containing the invalid keys is
contained in an exception table, if the exception table was created before the load began.
Messages on these rejected rows are stored in the message file.
Finally, any duplicate keys found are deleted.
The exception table must be identified in the syntax of the LOAD command.
The LOAD utility moves data into a target table that must exist within the database prior to the
start of the load process.
The target table may be a new or existing table into which data is appended or replaced.
Indexes on the table may or may not already exist. However, the LOAD process only builds
indexes that are already defined on the table.
In addition to the target table, it is recommended that an exception table be created to hold any
rows that violate unique constraints.
If an exception table is neither created nor specified with the LOAD utility, any rows that violate
unique constraints are discarded without any chance of recovering or altering them.
The syntax for the LOAD command is shown above. Options are described on the following
pages.
CLIENT
This specifies that the data to be loaded resides on a remotely connected client. This option is
ignored if the load operation is not being invoked from a remote client.
File Type
This option specifies the format of the data in the input file:
ASC — non-delimited ASCII format
DEL — delimited ASCII format
IXF — integrated exchange format (PC version) exported from the same or from
another DB2 table.
METHOD Options
METHOD L — this option specifies the start and end column numbers from which to
load data. A column number is a byte offset from the beginning of a row of data. It is
numbered starting from 1.
This method can only be used with ASC files, and is the only option available for that
file type.
METHOD N — this specifies the names of the columns in the data file to be loaded.
The case of these column names must match the case of the corresponding names in the
system catalogs. Each table column that is not nullable should have a corresponding
entry in the METHOD N list.
For example, given data fields F1, F2, F3, F4, F5, and F6, and table columns C1 INT, C2
INT NOT NULL, C3 INT NOT NULL, and C4 INT, method N (F2, F1, F4, F3) is a valid
request, while method N (F2, F1) is not valid because the not nullable column C3 would
have no corresponding field. (This method can only be used with IXF files.)
METHOD P — this specifies the indexes (numbered from 1) of the input data fields to
be loaded. Each table column that is not nullable should have a corresponding entry in
the METHOD P list.
Counter Options
SAVECOUNT specifies that the load utility is to establish consistency points after
every n rows. This value is converted to a page count, and rounded up to intervals of the
extent size.
ROWCOUNT specifies the number n of physical records in the file to be loaded. It
allows a user to load only the first n rows in a file.
WARNINGCOUNT stops the load operation after n warnings.
Mode Options
INSERT adds the loaded data to the table without changing the existing table data.
REPLACE deletes all existing data from the table, and inserts the loaded data. The table
definition and index definitions are not changed. If this option is used when moving
data between hierarchies, only the data for an entire hierarchy, not individual subtables,
can be replaced. This option is not supported for tables with DATALINK columns.
RESTART restarts a previously interrupted load operation. The load operation
automatically continues from the last consistency point in the load, build, or delete
phase.
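A hedged sketch of a load in INSERT mode, assuming a staff table and an exception table staffexc that already exist:

```
db2 LOAD FROM staff.del OF DEL SAVECOUNT 10000 MESSAGES load.msg INSERT INTO staff FOR EXCEPTION staffexc
```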
STATISTICS
{YES | NO} — specifies whether or not statistics are collected for the table and for any
existing indexes. This option is supported only if the load operation is in REPLACE
mode.
WITH DISTRIBUTION — specifies that distribution statistics are collected
AND INDEXES ALL — specifies that both table and index statistics are collected
FOR INDEXES ALL — specifies that only index statistics are collected
DETAILED — specifies that extended index statistics are collected
CPU_PARALLELISM n
This specifies the number of processes or threads that the load utility spawns for parsing,
converting, and formatting records when building table objects. This parameter is designed to
exploit intrapartition parallelism. It is particularly useful when loading presorted data, because
record order in the source data is preserved. If the value of this parameter is zero or has not been
specified, the load utility uses an intelligent default value (usually based on the number of CPUs
available) at runtime.
DISK_PARALLELISM n
This specifies the number of processes or threads that the load utility spawns for writing data to
the table space containers. If a value is not specified, the utility selects an intelligent default
based on the number of table space containers and the characteristics of the table.
COPY NO
This specifies that the table spaces in which the table resides are placed in backup pending state
if forward recovery is enabled (that is, log retain or userexit is on).
COPY YES
This specifies that a copy of the loaded data is saved. This option is invalid if forward recovery
is disabled (both logretain and userexit are off).
USE TSM — specifies that the copy is stored using Tivoli Storage Manager (TSM).
OPEN num_sess SESSIONS — The number of I/O sessions used with TSM or the
vendor product. The default value is 1.
TO device | directory — Specifies the device or directory where the copy image is
created.
LOAD lib_name — the name of the shared library (DLL on OS/2 or the Windows
operating system) containing the vendor backup and restore I/O functions used.
INDEXING MODE
AUTOSELECT — the load utility automatically decides between REBUILD or
INCREMENTAL mode.
REBUILD — all indexes are rebuilt.
INCREMENTAL — indexes are extended with new data. It only requires enough sort
space to append index keys for the inserted records. This method is only supported in
cases where the index object is valid and accessible at the start of a load operation.
DEFERRED — the load utility does not attempt index creation. Indexes are rebuilt
upon first non-load related access.
Other performance-related load options include FASTPARSE, ANYORDER, DATA BUFFERS,
CPU_PARALLELISM, DISK_PARALLELISM, and INDEXFREESPACE.
The LOAD QUERY command is used to interrogate a LOAD operation and generate a report on
its progress. You specify the table being loaded, and you can choose to display only summary
information or only updated information.
The syntax for the LOAD QUERY command is shown above. The command options are
described here:
message_file — specifies the destination for warning and error messages that occur
during the load operation
NOSUMMARY — no load summary information is reported
SUMMARY ONLY — only load-summary information (rows read, rows skipped, rows
loaded, rows rejected, rows deleted, rows committed, and number of warnings) is
reported
SHOWDELTA — specifies that only new information (pertaining to load events that
have occurred since the last invocation of the LOAD QUERY command) is reported
8-47
Database Integrity: The load utility uses the mechanism of “pending states” because regular logging is not performed.
Load Pending: Operation failed or interrupted during Load / Build phase
Delete Pending: Operation failed or interrupted during Delete phase
Backup Pending: The database configuration parameter logretain is set to RECOVERY or
userexit is enabled, and neither the COPY YES nor the NONRECOVERABLE load option is
specified.
Check Pending: Violation of referential, check, datalinks, or generated column
constraints
Query for table space state:
db2 "LIST TABLE SPACES SHOW DETAIL"
Verify the output value of state
0x8 = Load Pending
0x10 = Delete Pending
0x20 = Backup Pending
Backup Pending
Take a backup of the table space / database
Check Pending
Use SET INTEGRITY Command
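The state value reported by LIST TABLESPACES is a bitmask, so the pending states listed above can be tested as flags. A minimal Python sketch (the flag values come from the list above; the helper name is my own):

```python
# Illustrative sketch: decoding the table space state bitmask reported by
# "LIST TABLESPACES SHOW DETAIL". Only the flag values named in the text
# are included; a real state word may combine several flags.
STATE_FLAGS = {
    0x8: "Load Pending",
    0x10: "Delete Pending",
    0x20: "Backup Pending",
}

def decode_state(state: int) -> list[str]:
    """Return the names of all pending-state flags set in `state`."""
    names = [name for bit, name in STATE_FLAGS.items() if state & bit]
    return names or ["Normal"]

print(decode_state(0x0))   # ['Normal']
print(decode_state(0x20))  # ['Backup Pending']
```

A combined value such as 0x28 decodes to both Load Pending and Backup Pending.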
8-48
8-49
Several enhancements have been made to the load utility in Version 8. New functionality has
been added to simplify the process of loading data into both single partition and multi-partition
database environments. Here are some of the new load features introduced:
Load operations now take place at the table level. This means that the load utility no
longer requires exclusive access to the entire table space, and concurrent access to other
table objects in the same table space is possible during a load operation. Further, table
spaces that are involved in the load operation are not quiesced. When the COPY NO
option is specified for a recoverable database, the table space will be placed in the
backup pending table space state when the load operation begins.
The load utility now has the ability to query pre-existing data in a table while new data
is being loaded. You can do this by specifying the READ ACCESS option of the LOAD
command.
The LOCK WITH FORCE option allows you to force applications to release the locks
they have on a table, allowing the load operation to proceed and to acquire the locks it
needs.
Data in partitioned database environments can be loaded using the same commands
(LOAD, db2load) and APIs (db2load, db2loadquery) used in single partition
database environments. The AutoLoader utility (db2atld) and the AutoLoader control
file are no longer needed.
8-51
8-52
The above table compares the IMPORT utility with the LOAD utility.
8-53
The db2move utility moves data between different DB2 databases that may reside on different
servers. It is useful when a large number of tables need to be copied from one database to
another.
The utility can run in one of three modes:
Export — the EXPORT utility is used to export data from the table or tables specified
into data files of type IXF.
It also produces a file named db2move.lst that records all the names of the tables
exported and the file names produced when exported.
It also produces various message files that record any errors or warning messages
generated during the execution of the utility.
Import — the IMPORT utility is used to import data files of type IXF into a given
database.
It attempts to read the db2move.lst file to find the link between the file names of
the data files and the table names into which the data must be imported.
Load — the input files specified in the db2move.lst file are loaded into the tables using
the LOAD utility.
Extracting the DDL and statistics into a script
8-54
Performance evaluation of db2move is undertaken using utilities like Visual Explain, which
uses the database statistics to report on how an SQL statement is executed and gives an
indication of its likely performance.
8-55
The syntax for the db2move command is shown above. Here is a description of the options:
database_name — the name of the database
action — EXPORT, IMPORT, or LOAD
-tc — followed by one or more creator IDs separated by a comma
-tn — if specified, it should be followed by one or more (up to ten) table names
separated by commas
-io — specifies the import option to use. Valid options are INSERT,
INSERT_UPDATE, REPLACE, CREATE, and REPLACE_CREATE. The default is
REPLACE_CREATE.
-lo — specifies the load option to use. Valid options are INSERT and REPLACE.
The default is INSERT.
-l — specifies the absolute path names for the directories used when importing,
exporting, or loading LOB values into or from separate files. If specified with the
EXPORT action the directories are cleared before the LOBs are exported to files in the
directory or directories.
8-57
8-58
8-60
8-61
9-2
9-3
The physical distribution of the data stored in tables has a significant effect on the performance
of applications using those tables. The way the data is physically stored in a table is affected by
the update, insert, and delete operations on the table.
Examples:
A delete operation may leave empty data pages that are not reused later.
An update to a variable-length column may result in the new value not fitting in the
same data page; this can cause the row to be moved to a different page and produce
internal gaps or unused space in the table.
When the cost-based optimizer determines how a query should be executed, incorrect or
outdated statistics may result in a cost-ineffective plan, leading to slower response times and
degraded performance.
The solutions that we will explore in this module are:
REORGCHK
REORG
RUNSTATS
REORGCHK:
Analyzes the system catalog tables and gathers information
about the physical organization of tables and indexes
Determines how much space is currently being used and how
much is free
Uses six formulas to help decide if tables and indexes require
physical reorganization; three formulas for tables and three for
indexes
To use the REORGCHK command, you must have SYSADM or
DBADM authority, or CONTROL privilege on the table.
9-4
REORGCHK analyzes the system catalog tables and gathers information about the physical
organization of tables and indexes. It determines the physical organization of tables and
corresponding indexes, including how much space is currently used and how much is free.
REORGCHK uses six formulas to help decide if tables and indexes require physical
reorganization (general recommendations that show the relationship between the allocated space
and the space used for the data in tables). Three formulas are applied to tables and the other
three are applied to indexes.
9-5
The syntax of the REORGCHK utility provides for choices at two positions:
REORGCHK {UPDATE | CURRENT} STATISTICS
ON TABLE {USER | SYSTEM | ALL | table_name}
Use the CURRENT STATISTICS option of REORGCHK to use the statistics in the system
catalog tables at that time. For example, to analyze the current statistics of the employee table:
db2 REORGCHK CURRENT STATISTICS ON TABLE inst00.employee
To review the statistics of all the tables in a database, including system catalog and user tables:
db2 REORGCHK CURRENT STATISTICS ON TABLE ALL
Verify the organization of the system catalog tables using the SYSTEM option. Alternatively,
select all the tables under the current user schema name by specifying the USER keyword:
db2 REORGCHK CURRENT STATISTICS ON TABLE SYSTEM
If the CURRENT STATISTICS parameter is not specified, REORGCHK calls RUNSTATS.
You can also update statistics based on a defined schema. Here is the alternate syntax:
REORGCHK [{UPDATE | CURRENT} STATISTICS]
ON SCHEMA schema_name
9-6
9-8
During the same run, the following formulas are used for index statistics:
F4: CLUSTERRATIO or normalized CLUSTERFACTOR > 80
F5: 100*(KEYS*(ISIZE+10)+(CARD-KEYS)*4) / (NLEAF*INDEXPAGESIZE) > 50
F6: (100-PCTFREE) * ((INDEXPAGESIZE-96) / (ISIZE+12)) ** (NLEVELS-2) *
(INDEXPAGESIZE-96) / (KEYS * (ISIZE+8) + (CARD-KEYS) * 4) < 100
Interpretation:
F4 indicates the CLUSTERRATIO or normalized CLUSTERFACTOR. This ratio
shows the percentage of data rows stored in same physical sequence as the index.
F5 calculates space reserved for index entries. Less than 50% of the space allocated for
the index should be empty.
F6 measures the usage of the index pages. The number of index pages should be more
than 90% of the total entries that NLEVELS can handle.
If, for example, the CLUSTERRATIO of an index is below the recommended level, an asterisk
appears in the REORG column, as shown in the example above.
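As a rough illustration, the F4 and F5 checks can be computed directly from the catalog statistics named in the formulas. This Python sketch is not IBM code, and the sample statistics are invented:

```python
# Illustrative sketch of the REORGCHK index checks F4 and F5 as stated above.
def f4_ok(clusterratio: float) -> bool:
    # F4: CLUSTERRATIO (or normalized CLUSTERFACTOR) should be > 80.
    return clusterratio > 80

def f5_ok(keys: int, isize: int, card: int, nleaf: int, indexpagesize: int) -> bool:
    # F5: more than 50% of the space allocated to the index should hold entries.
    used_pct = 100 * (keys * (isize + 10) + (card - keys) * 4) / (nleaf * indexpagesize)
    return used_pct > 50

# A well-clustered index whose leaf pages are well used passes both checks:
print(f4_ok(95))
print(f5_ok(keys=1000, isize=20, card=1200, nleaf=10, indexpagesize=4096))
```

If either check fails, REORGCHK flags the corresponding formula with an asterisk in its output.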
9-10
For interpreting the output gathered from indexes, some information about the structure of
indexes is needed. Indexes in DB2 are created using a B+ tree structure. These data structures
provide an efficient search method to locate the entry values of an index. The logical structure of
a DB2 index is shown above.
9-11
The need for reorganization is indicated by an asterisk (*) in the REORG column of the
REORGCHK output.
The REORG command deletes all the unused space and writes the table and index data to
contiguous pages. An index is used to place the data rows in the same physical sequence as the
index. These actions are used to increase the CLUSTERRATIO of the selected index. This helps
DB2 find the data in contiguous space and in the desired order, reducing the seek time needed to
read the data. If DB2 finds an index with a very high cluster ratio, it might use it to avoid a sort,
thus improving the performance of applications that require sort operations.
Authority
To use REORG, you must have SYSADM, SYSCTRL, SYSMAINT, or DBADM authority, or
CONTROL privilege on the table.
Partial syntax:
REORG
{TABLE table_name [INDEX index_name]
[ALLOW {READ | NO} ACCESS] [USE table_space_name] |
INDEXES ALL FOR TABLE table_name
[ALLOW {READ | NO | WRITE} ACCESS]}
Examples:
Reorganize the inst00.employee table and all of its indexes:
db2 REORG TABLE inst00.employee
Place the rows of the table in the order of the workdept index:
db2 REORG TABLE inst00.employee INDEX workdept
Reorganize all the indexes created on the employee table:
db2 REORG INDEXES ALL FOR TABLE inst00.employee
ALLOW READ ACCESS
9-12
When using REORG, it is mandatory to use the fully qualified name of the table. The following
command options are available:
Reorganize a table
TABLE table_name — Specifies the name of the table to reorganize.
INDEX index_name — Specifies the index to use when reorganizing the table.
USE table_space_name — Specifies the name of a system temporary table space
where the database manager can temporarily store the table being reconstructed. If
a table space name is not entered, the database manager stores a working copy of
the table in the table space(s) in which the table being reorganized resides.
Reorganize all indexes in a table
INDEXES ALL FOR TABLE table_name — All indexes for the specified table
are to be reorganized.
Allow access to the table
ALLOW NO ACCESS — Specifies that no other users can access the table while
the indexes are being reorganized. This is the default for the REORG INDEXES
command.
Examples
The first example above reorganizes the inst00.employee table and all of its indexes, but does
not put the data in any specific order.
Assume that the table inst00.employee has an index called workdept and that most of the
queries using the table are grouped by department number. In the second example above, the
REORG command is used to physically place the rows of the table ordered by workdept.
The final example shows the command to use when you want all of the indexes created on the
employee table to be reorganized. Read access is allowed for other users during reorganization.
The INDEX option tells the REORG utility to use the specified
index to reorganize the table
After reorganizing a table using the index option, DB2 does not
force the subsequent inserts or updates to match the physical
organization of the table
Only one clustering index for a table
9-14
The INDEX option tells the REORG utility to use the specified index to reorganize the table.
After the REORG command has completed, the physical organization of the table should match
the order of the selected index. In this way, the key columns are found sequentially in the table.
After reorganizing a table using the index option, DB2 does not force the subsequent inserts or
updates to match the physical organization of the table.
A clustering index defined on the table might assist DB2 in keeping future data in a clustered
index order by trying to insert new rows physically close to the rows for which the key values of
the index are in the same range.
Only one clustering index is allowed for a table.
9-15
syscat.tables contains information about columns, tables, indexes, number of rows in a table,
the use of space by a table or index, and the number of different values of a column. This
information is not kept current and has to be generated by executing the RUNSTATS command.
The statistics collected by the RUNSTATS command can be used to display the physical
organization of the data and provide information that the DB2 optimizer needs to select the best
access path for executing SQL statements.
Syntax:
RUNSTATS ON TABLE table_name
[WITH DISTRIBUTION]
[{AND | FOR}] [DETAILED] [{INDEXES ALL | INDEX index_name}]
[SHRLEVEL {CHANGE|REFERENCE}]
[ALLOW {READ | WRITE} ACCESS]
To collect statistics for a table and all of its indexes at the same time:
db2 RUNSTATS ON TABLE inst00.employee AND INDEXES ALL
To collect statistics for table indexes only:
db2 RUNSTATS ON TABLE inst00.employee FOR INDEXES ALL
9-16
It is recommended that you execute RUNSTATS on a frequent basis on tables that have a large
number of updates, inserts, or deletes. Also, use the RUNSTATS utility after a REORG of a
table.
RUNSTATS does not produce any output; its results can be viewed only by querying the
system catalog tables.
The following example uses the sysibm.syscolumns table:
db2 RUNSTATS ON TABLE sysibm.syscolumns
To collect statistics for a table and all of its indexes at the same time:
db2 RUNSTATS ON TABLE inst00.employee AND INDEXES ALL
To collect statistics for table indexes only:
db2 RUNSTATS ON TABLE inst00.employee FOR INDEXES ALL
You can permit other users to access the table by including either the ALLOW READ
ACCESS or ALLOW WRITE ACCESS clause in the command.
9-17
9-18
The DB2 REBIND command and db2rbind utility provide the following functionality:
They provide a quick way to recreate a package, enabling the user to take advantage of
a change in the system without a need for the original bind file.
They provide a method to recreate inoperative packages.
They control the rebinding of invalid packages.
You should use a qualified package name, otherwise these programs assume the current
authorization ID. They do not automatically commit unless auto-commit is enabled.
The db2rbind utility rebinds all packages in the database.
Syntax:
db2rbind database -l logfile [all] [-u userid -p password]
[-r {CONSERVATIVE | ANY}]
9-19
The syntax for db2rbind is shown above. The options for this command include:
-l — Specifies the (optional) path and the (mandatory) file name used for recording
errors that result from the package revalidation procedure.
all — Specifies that rebinding of all valid and invalid packages is to be done. If this
option is not specified, all packages in the database are examined, but only those
packages that are marked as invalid are rebound, so that they are not rebound implicitly
during application execution.
-u userid -p password — User ID and password.
9-20
9-21
10-2
10-3
Locking allows multiple applications to share the same data on an instance. It protects data by
allowing only one application to update the data at a time, but still allows applications to share
data. Locking also prevents applications from accessing data that has been modified, but not
committed by other applications, except where uncommitted read isolation is used.
10-4
The table below lists the different lock modes available and the objects that use these modes.
Mode — Applicable objects — Description

IN (Intent None) — table spaces and tables — The lock owner can read any data in the table,
including uncommitted data, but cannot update it. No row locks are acquired by the lock
owner. Other concurrent applications can read or update the table.

IS (Intent Share) — table spaces and tables — The lock owner can read data in the locked
table, but not update this data. When an application holds the IS table lock, the application
acquires an S or NS lock on each row read. In either case, other applications can read or
update the table.

NS (Next Key Share) — rows — The lock owner and all concurrent applications can read, but
not update, the locked row. This lock is acquired on rows of a table, instead of an S lock,
where the isolation level is either RS or CS on data that is read.

S (Share) — tables and rows — The lock owner and all concurrent applications can read, but
not update, the locked data. Individual rows of a table can be S locked. If a table is S locked,
no row locks are necessary.
Lock mode legend: N = None, NS = Next Key Share, S = Share, NX = Next Key Exclusive,
X = Exclusive, U = Update.

State requested (rows) against state held (columns); Yes means the lock can be granted:

          None IN  IS  NS  S   IX  SIX U   NX  X   Z   NW  W
None      Yes  Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes Yes
IN        Yes  Yes Yes Yes Yes Yes Yes Yes Yes Yes No  Yes Yes
IS        Yes  Yes Yes Yes Yes Yes Yes Yes No  No  No  No  No
NS        Yes  Yes Yes Yes Yes No  No  Yes Yes No  No  Yes No
S         Yes  Yes Yes Yes Yes No  No  Yes No  No  No  No  No
IX        Yes  Yes Yes No  No  Yes No  No  No  No  No  No  No
SIX       Yes  Yes Yes No  No  No  No  No  No  No  No  No  No
10-6
The above table summarizes the compatibility of the different lock modes. The horizontal
heading shows the lock mode of the application that is holding the locked resources and the
vertical heading shows the mode of the lock requested by another application. For example, if
the application holding the resource is holding an update lock (U), and another application
requests an exclusive lock on that same resource, the request for the lock is denied.
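The compatibility lookup the lock manager performs can be sketched from the rows of the matrix above (only the requested-lock rows shown on this page are included; the names are my own):

```python
# Sketch of a lock-compatibility check built from the matrix above.
# Columns (state held) in order: None, IN, IS, NS, S, IX, SIX, U, NX, X, Z, NW, W.
MODES = ["None", "IN", "IS", "NS", "S", "IX", "SIX", "U", "NX", "X", "Z", "NW", "W"]
COMPAT = {  # requested mode -> Y/N against each held mode, in MODES order
    "None": "YYYYYYYYYYYYY",
    "IN":   "YYYYYYYYYYNYY",
    "IS":   "YYYYYYYYNNNNN",
    "NS":   "YYYYYNNYYNNYN",
    "S":    "YYYYYNNYNNNNN",
    "IX":   "YYYNNYNNNNNNN",
    "SIX":  "YYYNNNNNNNNNN",
}

def compatible(requested: str, held: str) -> bool:
    """True if a lock in mode `requested` can be granted while `held` is held."""
    return COMPAT[requested][MODES.index(held)] == "Y"

print(compatible("IS", "IX"))  # True: intent locks coexist
print(compatible("S", "IX"))   # False: share conflicts with intent exclusive
```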
UPDATE inst##.staff
SET salary = 10000 +1000
WHERE salary > 10000
10-7
Lock conversion is required when an application has already locked a data object and requires a
more restrictive lock. A process can hold only one lock on a data object at any time.
The operation of changing the mode of the lock already held is called a conversion.
Figure: lock escalation converts many row locks into a single table lock.
10-8
If an application changes many rows in one table, it is better to have one lock on the entire table.
Each lock, regardless of whether it is a lock on a database, table, or row, consumes the same
amount of memory, so a single lock on the table requires less memory than locks on multiple
rows in the table. However, table locks result in decreased concurrency, since other applications
are prevented from accessing the table for the duration of the lock.
Database configuration parameters that affect lock escalation include LOCKLIST, which sets a
limit to the amount of space allocated to the lock list, and MAXLOCKS, which is a percent
value representing the maximum amount of lock list space that can be used by a single
application.
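The escalation decision can be sketched as a simple threshold test. This toy Python function is illustrative only; the real escalation accounting inside DB2 is more involved:

```python
# Toy sketch of lock escalation: when one application's share of the lock
# list exceeds MAXLOCKS percent, its row locks are traded for a table lock.
LOCK_SIZE = 1  # arbitrary unit; the text notes every lock costs the same memory

def should_escalate(app_row_locks: int, locklist_capacity: int, maxlocks_pct: int) -> bool:
    """True if this application's locks exceed its allowed share of the list."""
    used_pct = 100 * app_row_locks * LOCK_SIZE / locklist_capacity
    return used_pct > maxlocks_pct

print(should_escalate(600, 1000, 50))  # True: 60% of the list, over MAXLOCKS=50
print(should_escalate(100, 1000, 50))  # False
```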
10-10
The lock mode that is used by an application is determined by the isolation level. Also, locks
placed on table elements can be on individual rows, or on pages of rows in the table. The default
used by DB2 is row locking.
Databases, table spaces, and tables can be explicitly locked. Here are some examples of
commands that can be used to lock these database objects:
Database lock
CONNECT TO database IN EXCLUSIVE MODE
Table lock
LOCK TABLE table_name IN EXCLUSIVE MODE
Database, tables, and rows can be implicitly locked. For example:
Databases are locked during a full database restore
Tables are locked during lock escalation
Rows are locked through normal data modification
10-11
To guarantee the integrity of the data, a set of data modification rules is required to control
concurrent use of the data. Without these rules, serious problems could occur.
10-12
The isolation level is set within an application to control the type of locks and the degree of
concurrency allowed by the application. DB2 provides four different levels of isolation:
uncommitted read, cursor stability, read stability, and repeatable read.
Uncommitted Read
Uncommitted read, also known as dirty read, is the lowest level of isolation. It is the least
restrictive, but provides the greatest level of concurrency. However, it is possible for a query
executed under uncommitted read to return data that has never been committed to the database.
For example, if an application has inserted a row but has not committed it, this row
can be selected by an application using uncommitted read. Phantom reads are also possible
under uncommitted read isolation.
Cursor Stability
Cursor stability is the default isolation mode; it is used when no isolation is set in an application.
In this isolation mode, only the row on which the cursor is currently positioned is locked. This
lock is held until a new row is fetched or the unit of work is terminated. If a row is updated, the
lock is held for the duration of the transaction.
Read Stability
Under read stability isolation, locks are only placed on the rows an application retrieves within a
unit of work. Applications cannot read uncommitted data and no other application can change
the rows locked by the read stability application. It is possible to retrieve phantom rows if the
application retrieves the same row more than once within the same unit of work.
Repeatable Read
Repeatable read is the highest level of isolation and has the lowest level of concurrency. Locks
are held on all rows processed (scanned) for the duration of a transaction. Because so many
locks are required for repeatable read, the optimizer might choose to lock the entire table instead
of locking individual rows.
The same query issued by the application more than once in a unit of work gives the same result
each time (no phantom reads). No other application can update, delete, or insert a row that
affects the result table until the unit of work completes.
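The four levels and the read anomalies each one permits, as described above, can be summarized in a small lookup (a Python sketch; the names are my own):

```python
# Compact summary of the four isolation levels described above.
# True means the anomaly can occur at that level.
ANOMALIES = {
    "UR": {"dirty_read": True,  "phantom": True},   # uncommitted read
    "CS": {"dirty_read": False, "phantom": True},   # cursor stability (default)
    "RS": {"dirty_read": False, "phantom": True},   # read stability
    "RR": {"dirty_read": False, "phantom": False},  # repeatable read
}

def allows(level: str, anomaly: str) -> bool:
    return ANOMALIES[level][anomaly]

print(allows("RR", "phantom"))  # False: same query, same result in a unit of work
```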
Figure: deadlock between two processes. Process 1 holds an X (exclusive) lock on table A and
wants an X lock on table B, while Process 2 holds an X lock on table B and wants an X lock
on table A.
10-14
A deadlock occurs when two or more applications connected to the same database wait
indefinitely for a resource. The wait is never resolved because each application holds a
resource that the other needs to continue.
Process 1 locks table A in X (exclusive) mode and Process 2 locks table B in X mode; if Process
1 then tries to lock table B in X mode and Process 2 tries to lock table A in X mode, the
processes will be in a deadlock.
The ultimate cause of all deadlocks is poor programming. With proper design, they are
impossible.
Deadlock Detector
Deadlocks in the lock system are handled in the database manager by an asynchronous system
background process called the deadlock detector.
The deadlock check interval defines the frequency at which the database manager checks for
deadlocks among all the applications connected to a database.
dlchktime — time interval for checking for deadlocks
Default [range]: 10,000 (10 seconds) [1,000 - 600,000]
Unit of measure: milliseconds
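Conceptually, the deadlock detector looks for a cycle in the graph of which application waits for which. A toy Python sketch (each process is assumed to wait for at most one other):

```python
# Sketch of what a deadlock detector does: find a cycle in the "waits-for"
# graph of applications, as in the two-process example above.
def has_deadlock(waits_for: dict[str, str]) -> bool:
    """waits_for maps each process to the one holding the lock it needs."""
    for start in waits_for:
        seen = set()
        node = start
        while node in waits_for:   # follow the chain of waiters
            if node in seen:
                return True        # came back around: a cycle, hence a deadlock
            seen.add(node)
            node = waits_for[node]
    return False

print(has_deadlock({"P1": "P2", "P2": "P1"}))  # True
print(has_deadlock({"P1": "P2"}))              # False
```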
10-15
You should now complete the lab exercises for Module 10.
10-16
11-2
Crash/restart recovery
Version/image recovery
Rollforward recovery
11-3
Recovery occurs in a DB2 instance as a result of the need to restore the instance to a state of
consistency when some event has caused portions of the instance to be out of sync. You can
initiate any of the following types of recovery:
Crash/restart recovery — Uses the RESTART DATABASE command or sets the
AUTORESTART configuration parameter to protect a database from being left in an
inconsistent or unusable state.
Version/image recovery — Uses the BACKUP command in conjunction with the
RESTORE command to put the database in a state that was previously saved. This is
used for nonrecoverable databases or databases for which there are no archived logs.
Rollforward recovery — Uses the BACKUP command in conjunction with the
RESTORE and ROLLFORWARD commands to recover a database or table space to a
specified point in time.
11-4
Log files are used to keep records of all changes made to database objects. The maximum
total log space is 32 gigabytes in Version 7.1 and 256 gigabytes in Version 8.1.
Changes made to the databases are first written to log buffers in memory, then are flushed from
memory to the log files on disk. The transactions written to the logs define a unit of work that
can be rolled back if the entire work unit cannot complete successfully.
Log files are necessary to perform recovery operations. There are two phases to the recovery
process:
Reapplication of all transactions, regardless of whether or not they have been
committed.
Rollback of those changes that were NOT committed.
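The two phases can be sketched with a toy redo/undo pass over a simplified log. This Python illustration is mine, not DB2's actual recovery algorithm:

```python
# Toy sketch of the two recovery phases described above.
# A log record is either ("COMMIT", txid) or (txid, key, new_value).
def recover(log):
    history, committed = {}, set()
    for rec in log:                       # phase 1: reapply ALL logged changes
        if rec[0] == "COMMIT":
            committed.add(rec[1])
        else:
            tx, key, value = rec
            history.setdefault(key, []).append((tx, value))
    result = {}                           # phase 2: discard uncommitted changes
    for key, changes in history.items():
        for tx, value in changes:
            if tx in committed:
                result[key] = value       # keep the last committed value
    return result

log = [("T1", "x", 1), ("T2", "y", 2), ("COMMIT", "T1")]
print(recover(log))  # {'x': 1}: T2 never committed, so y is rolled back
```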
Figure: three processes issue transactions A through F over time; commits and one rollback
are marked, and the changes are recorded synchronously in active log file x and then log
file y before reaching the database disk files.
11-5
In this example, three user processes are accessing the same database. The life of every
transaction is also depicted (A-F). The lower middle section of the diagram shows how the
database changes are synchronously recorded in the log files (x, y).
When a COMMIT is issued, the log buffer containing the transaction is written to disk.
Transaction E is never written to disk because it ends with a ROLLBACK statement. When log
file x runs out of room to store the first database change of Transaction D, the logging process
switches to log file y. Log file x remains active until all Transaction C changes are written to the
database disk files. The hexagon represents the period of time during which log file x remains
active after logging is switched to log file y.
11-6
When a log file has become full, transactions are automatically written to the next log file in
sequence. When the last log file is filled, transactions are written to the first log, and so on. This
is known as circular logging.
Circular logging is the default DB2 logging method. Primary log files are used to record all
changes and are reused when changes are committed. Secondary log files are allocated when the
limit of primary log files is reached. This method of circular logging makes crash recovery and
version recovery possible.
With circular logging, rollforward recovery is not possible.
An error is returned when the limit of secondary logs is reached or there is insufficient
disk space.
The number of secondary logs is configured with the LOGSECOND database configuration
parameter. If you set LOGSECOND to -1, the database is configured for an infinite number of
secondary logs. The default setting for LOGSECOND is 2; setting it to 0 disables secondary
logs.
Figure: transaction records are written to the primary and secondary logs in LOGPATH and
mirrored to the logs in MIRRORLOGPATH.
11-8
Dual logging provides a way to maintain mirror copies of both primary and secondary logs. If
the primary or secondary logs become corrupt, or if the device where the logs are stored
becomes unavailable, the database can still be accessed.
Dual logging is enabled by setting the MIRRORLOGPATH database configuration parameter to
a path where the mirror logs are to be located.
Figure: log files 12 through 16 under manual or user-exit archiving.
ACTIVE: contains information for non-committed or non-externalized transactions.
ONLINE ARCHIVE: contains information for committed and externalized transactions;
stored in the ACTIVE log subdirectory.
OFFLINE ARCHIVE: archive moved from the ACTIVE log subdirectory; may also be on
other media.
11-9
Archival logging is the process of moving the contents of log files to an external storage
medium. Archival logging is enabled by setting the LOGRETAIN parameter in DB CFG to
RECOVERY. When LOGRETAIN is enabled, log files are not deleted, but are stored either
offline or online. A userexit routine can be used to move archived log files to other storage
media. This makes online backup and roll forward possible.
Retained logs are handled in the following way:
With log retention, all logs are kept in the log path unless userexits are enabled or they
are moved manually.
Logs are closed and archived when they are no longer required for RESTART recovery.
Userexits are used to archive the log files to another path/drive/storage media (tape
device).
Userexits are programs called by the DB2 system controller for every log file as soon as it is
full. During roll forward, a userexit may be called to get the log file if it is not in the current log
path. Userexits must always be named db2uext2 and are only available for full database restore
and not a table-space-level restore. Sample userexits included with DB2 can be modified for any
installation. They include: db2uext2.cadsm, db2uext2.ctape, db2uext2.cdisk, and
db2uext2.cxbsa.
Figure: the BACKUP command requires SYSADM, SYSCTRL, or SYSMAINT authority; the
backup buffer size (backbufsz, or a command option) and the number of buffers can be
specified; the target can be local media or ADSM (TSM).
11-10
Backup image naming:
Intel: DBALIAS.0\DB2INST.0\19960314\131259.001
(Alias.Type\Instance.Node\YYYYMMDD\HHMMSS.Sequence)
Unix: DBALIAS.0.DB2INST.0.19960314131259.001
(Alias.Type.Instance.Node.YYYYMMDDHHMMSS.Sequence)
11-11
The file name (or folders for Intel platforms) used for images on disk or diskette contains:
The database alias
The type of backup (0=FULL, 3=TABLESPACE, 4=Copy from LOAD)
The instance name
The database node (always 0 for non-partitioned databases)
The timestamp of the backup
A sequence number
The exact naming convention varies slightly by platform. Tape images are not named, but
contain the same information in the backup header for verification purposes. The backup history
provides key information in an easy-to-use format.
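For illustration, the Unix-style image name can be split into its parts (a Python sketch; the field names follow the figure above, and the helper name is my own):

```python
# Illustrative parser for the Unix-style backup image name shown above:
# Alias.Type.Instance.Node.YYYYMMDDHHMMSS.Sequence
def parse_backup_name(name: str) -> dict:
    alias, btype, instance, node, timestamp, seq = name.split(".")
    return {
        "alias": alias,
        "type": {"0": "FULL", "3": "TABLESPACE", "4": "LOAD COPY"}.get(btype, btype),
        "instance": instance,
        "node": node,             # always "0" for non-partitioned databases
        "date": timestamp[:8],
        "time": timestamp[8:],
        "sequence": seq,
    }

info = parse_backup_name("DBALIAS.0.DB2INST.0.19960314131259.001")
print(info["type"], info["date"])  # FULL 19960314
```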
Restoring a BACKUP image to an existing database versus a new database:
Database EXISTS: delete table, index, and long field files; retain authentication; retain the
database configuration file; replace table space entries; retain history; check database seed*.
NEW database: create a new database; restore authentication; restore the database
configuration file; restore directories; set the default log path; restore comments.
11-12
Restoring a backup image requires rebuilding the database or table space that has been backed
up with the BACKUP command. The restore can be issued from the Command Line Processor
or the Control Center.
During roll forward processing, DB2 looks for the required log file:
If found, it reapplies transactions
If not found, a userexit can be called to retrieve the log file, or it
may be moved manually
If rolling forward for a table space restore, specify the
OVERFLOWLOGPATH parameter, or manually move the files
back to the active log path
Once the log is in the current log path, transactions are reapplied
11-13
This is a description of what happens when a database is restored and the log files are re-applied
during the roll forward phase.
Look for the required log file in the current log path.
If found, reapply transactions from the log file.
If not found, manually move the required archived log files to the current path.
If not found and USEREXIT is configured, the userexit is called to retrieve the log file
from the archive path. The userexit is only called to retrieve the log file if rolling
forward for a full database restore.
If rolling forward for a table space restore, specify the OVERFLOWLOGPATH
parameter or manually move the files back to the active log path.
Once the log is in the current log path, the transactions are reapplied.
11-14
A redirected restore allows you to redefine or redirect table space containers during the restore
process. The definitions of table space containers are saved during a backup, but if these
containers are not available during the restore, you can specify new containers. In a redirected
restore, the Restore command must be executed twice.
11-15
Figure: timeline of table space recovery. After a table space BACKUP image is taken, logging
continues until a crash; RESTORE recovers the table space(s) from the image, and
ROLLFORWARD reapplies the table space changes recorded in the n active and n archived
log files up to a point of consistency. Active/uncommitted units of work are rolled back.
11-16
During table space recovery, LOGRETAIN and/or a userexit must be enabled. You can then
restore the table space and use the ROLLFORWARD command to bring it to the
desired point in time.
Table space recovery requires a recoverable database (retention logging or a userexit must be
enabled) as the table space must be rolled forward to a minimum point in time (PIT).
Minimum PIT ensures the table space agrees with what is in the system catalogs. It is initially
the time when a backup occurred, but can be increased by changes which cause system catalog
updates:
Alter table
Create index
Table space definition change
The remainder of the database and table spaces are accessible during restore of a particular table
space.
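A table space restore followed by a roll forward to a point in time at or past the minimum PIT might be issued as follows; the database name, table space name, and timestamp are illustrative:

```shell
# Restore only the affected table space from a database backup image,
# leaving the rest of the database accessible.
db2 "RESTORE DATABASE sample TABLESPACE (TBS1) ONLINE"

# Roll the table space forward to a point in time (at or past the
# minimum PIT), then take it out of rollforward-pending state.
db2 "ROLLFORWARD DATABASE sample TO 2003-12-09-12.00.00.000000 AND STOP TABLESPACE (TBS1) ONLINE"
```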
Example 1: TBS1 is Backup_Pending before the restore and Backup_Pending + Offline after.
Example 2: TBS1 is Normal before the restore and Offline after.
11-17
[Slide: damaged table space behavior with circular logging, by version — Version 5: no connection to the database is allowed (NO CONNECT); Versions 6/7: the damaged table space is taken OFFLINE and the connection succeeds (OK CONNECT).]
11-18
When table spaces are offline, a connection is allowed to be made to the database even if the
table space is damaged and circular logging is used. If only a temporary table space is damaged,
you can create a new one after connecting to the database. The bad temporary table space can
then be dropped.
[Slide table: backup, restore, and rollforward behavior by logging type]
Logging type:                      Circular | Archival | Archival | Archival | Archival
Access allowed during backup:      N/A | N/A | Full | None | Full
Database state after restore:      Consistent | Rollforward Pending | Rollforward Pending | TS in Rollforward Pending | TS in Rollforward Pending
Rollforward required after backup: N/A | Any point in time | Any point in time past backup | Min PIT | Min PIT
11-19
11-20
A recovery history file is created for each database and is updated whenever any of the above
events occur.
You can use this information to recover all or part of the database to a given point in time. The
size of the file is controlled by the REC_HIS_RETENTN configuration parameter. This
parameter specifies a retention period (in days) for the entries in the file (db2rhist). You can
execute OPEN, CLOSE, GET NEXT, UPDATE, and PRUNE commands against this file.
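The history file can also be inspected and pruned from the CLP; for example (the database name and cutoff timestamp are illustrative):

```shell
# List all backup entries recorded in the recovery history file.
db2 "LIST HISTORY BACKUP ALL FOR DATABASE sample"

# Remove history entries older than November 1, 2003.
db2 "PRUNE HISTORY 20031101"
```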
The dropped table recovery feature is provided in DB2 as a way to restore tables that are
accidentally dropped. Above is a list of steps required to restore a dropped table.
11-22
You should now complete the lab exercises for Module 11.
11-23
Performance Monitoring
12-2
12-3
12-4
DB2 UDB includes several database server parameters that can be tuned to improve overall
server performance:
MAXAGENTS
This parameter indicates the maximum number of database manager agents
(db2agent) available at any given time to accept application requests. These
agents are required both for applications running locally and those running
remotely.
MAXAGENTS should be set to a value at least equal to the sum of the
MAXAPPLS database configuration parameter values for the databases in the
instance. The MAXAPPLS database parameter limits the number of applications
that can connect to a database.
By increasing MAXAGENTS, more agents are available to handle database server
requests, but more memory resources are required.
MAXAGENTS can be set to any value from 1 to 64,000.
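Since MAXAGENTS is a database manager (instance-level) parameter, it can be inspected and raised as follows; the value 200 is illustrative:

```shell
# Show the current database manager configuration, including MAXAGENTS.
db2 "GET DBM CFG"

# Raise the agent ceiling; the change takes effect after an instance restart.
db2 "UPDATE DBM CFG USING MAXAGENTS 200"
db2stop
db2start
```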
BUFFPAGE CHNGPGS_THRESH
CATALOGCACHE_SZ NUM_IOCLEANERS
LOGBUFSZ NUM_IOSERVERS
PCKCACHESZ LOCKLIST
SORTHEAP MAXLOCKS
STMTHEAP MINCOMMIT
DBHEAP LOGFILSIZ
MAXAPPLS LOGPRIMARY & LOGSECOND
12-6
Above is a list of database configuration parameters that can be tuned for better performance.
They are described in more detail below.
BUFFPAGE
Amount of memory allocated to keep required data in cache
Alter existing buffer pools to a size of -1 (NPAGES = -1) so that the value of the
BUFFPAGE database configuration parameter is used.
It is recommended that you set the BUFFPAGE parameter in the DB CFG.
Start sizing the buffer pool at 75% of the total system memory.
CATALOGCACHE_SZ
This parameter indicates the maximum amount of space the catalog cache can use
from the database heap. It stores table descriptor information used during
compilation of an SQL statement.
The default value is 32, and the range is from 1 to the size of the database heap.
More cache space is required if a unit of work contains several dynamic SQL
statements or if a package is bound to the database containing a lot of static SQL
statements.
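These database-level parameters are changed with UPDATE DB CFG. A hedged sketch (the database name and the values shown are illustrative, not recommendations):

```shell
# Let the buffer pool size come from the BUFFPAGE parameter (size -1).
db2 "CONNECT TO sample"
db2 "ALTER BUFFERPOOL IBMDEFAULTBP SIZE -1"

# Set BUFFPAGE (in 4K pages) and enlarge the catalog cache.
db2 "UPDATE DB CFG FOR sample USING BUFFPAGE 250000"
db2 "UPDATE DB CFG FOR sample USING CATALOGCACHE_SZ 64"
```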
...
12-10
AUTOCONFIGURE is a new DB2 command that recommends and optionally applies new
values for buffer pool sizes, database configuration, and database manager configuration.
The syntax for this command is shown here:
AUTOCONFIGURE [USING input_keyword param_value]
[APPLY {DB ONLY | DB AND DBM | NONE}]
Where:
input_keyword is the name of a resource that can be set to provide additional
information to the autoconfiguration utility. Refer to the IBM DB2 Command
Reference for a list of valid parameters.
param_value is a value to assign to the input_keyword.
DB ONLY indicates that only configuration changes for the currently selected database
will be applied. This is the default setting.
DB AND DBM indicates that changes to both DBM CFG and DB CFG will be applied.
NONE displays the recommended changes, but does not apply them.
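For example, the following asks for recommendations sized to 60% of system memory for a mixed workload and applies both DB and DBM changes. The keyword names come from the Command Reference; the specific values are illustrative:

```shell
db2 "CONNECT TO sample"
db2 "AUTOCONFIGURE USING mem_percent 60 workload_type mixed APPLY DB AND DBM"
```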
12-12
When multiple processors are available on a computer, DB2 UDB takes advantage of them by
performing some query operations in parallel. Parallel processing is possible only for queries
that do not involve update operations.
Monitoring tools:
Snapshot Monitor
Event Monitor
Health Monitor
12-13
Monitoring activities that are related to database access and SQL processing is required for
optimizing the performance of the queries. It involves:
Understanding how a given query is optimized in a specific environment. For example,
a query that is used in an application that does not perform well.
Understanding how applications use the database manager resources at a specific point
of time. For example, database concurrency is reduced if a specific application is
started.
Understanding what database manager events occur when running applications. For
example, observing a degradation in overall performance when certain applications are
running.
DB2 provides the following tools for monitoring performance:
Snapshot Monitor
Event Monitor
Health Monitor
These switches are set at the instance level, or at the session level
12-14
Similar to a snapshot from a camera, the Snapshot Monitor is used to gather information about
database activity at any point in time. The collection of information for the Snapshot Monitor is
enabled by setting a series of configuration parameters that act as switches for the monitor.
These switches control the amount of information, as well as whether information is collected
for the entire instance, or just for a single application.
12-15
12-16
Above are some examples of commands that are used to manage the Snapshot Monitor.
When the Snapshot Monitor is enabled at the instance level, information is captured for
applications accessing all databases within the instance, and the change does not take effect until
the instance is restarted. When the Snapshot Monitor is enabled at the application level, only
information for that application is captured, and the change takes effect immediately.
Snapshot data accumulates as long as the instance is running. Use the RESET command to clear
out the monitor data.
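A typical snapshot session can be sketched as follows; the database name is illustrative:

```shell
# Turn on monitor switches for this session.
db2 "UPDATE MONITOR SWITCHES USING LOCK ON SORT ON"

# Verify which switches are active.
db2 "GET MONITOR SWITCHES"

# Take snapshots, then clear the accumulated counters.
db2 "GET SNAPSHOT FOR DATABASE ON sample"
db2 "GET SNAPSHOT FOR APPLICATIONS ON sample"
db2 "RESET MONITOR FOR DATABASE sample"
```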
12-17
Application handle = 3
Application ID = *LOCAL.INST01.020626120056
Sequence number = 0001
Application name = db2bp.exe
Authorization ID = INST00
Application status = UOW Waiting
Status change time = 06-26-2002 17:30:56.893183
Application code page = 1252
Locks held = 1
Total wait time (ms) = 1
12-20
12-21
An Event Monitor records the database activity whenever a specific event or transition occurs.
This is different from the snapshot monitors, which record the state of database activity when the
snapshot is taken.
Here are a couple of examples of when event monitoring is more suitable to use than snapshot
monitoring.
Deadlock— When a deadlock occurs, DB2 resolves the deadlock by issuing a
ROLLBACK for one of the transactions. Information regarding the deadlock event
cannot be easily captured using Snapshot Monitor since the deadlock has probably been
resolved before a snapshot can be taken.
Statement—The Snapshot Monitor for the application records cumulative data for all
the SQL statements, so if you want just the data for an individual SQL statement, use
the Event Monitor for statements.
12-22
You can create individual event monitors to monitor specific types of events or transitions. Once
created, these monitors must be activated.
When creating event monitors, you must specify a directory in which to store the files for the
captured data, and you must specify the number and size of the files. These files are sequentially
numbered and have an .evt extension.
12-23
Example:
SELECT evmonname, EVENT_MON_STATE(evmonname) state
FROM syscat.eventmonitors
Output:
EVMONNAME STATE
------------ -------
EVMON_STAT 1
LOCK_MON 0
12-24
SYSADM or DBADM
12-26
You can define an unlimited number of event monitors, but only 32 event monitors can be active
at a time. You can perform the following tasks to manage event monitors:
Create an event monitor.
Start the monitor.
Flush the event monitor to write the recorded data from memory to files.
Read the output of the event monitor.
Drop the event monitor when it is no longer needed.
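The lifecycle above can be sketched from the CLP; the monitor name and output path are illustrative:

```shell
# Create, then activate, a statement event monitor.
db2 "CREATE EVENT MONITOR stmt_mon FOR STATEMENTS WRITE TO FILE '/database/inst101/sample'"
db2 "SET EVENT MONITOR stmt_mon STATE 1"

# Force buffered event records out to the .evt files.
db2 "FLUSH EVENT MONITOR stmt_mon"

# Deactivate and drop the monitor when it is no longer needed.
db2 "SET EVENT MONITOR stmt_mon STATE 0"
db2 "DROP EVENT MONITOR stmt_mon"
```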
12-27
Above is the complete syntax used to create an event monitor. Once the name has been
specified, indicate which type(s) of events you want to monitor. You can use a comma to specify
more than one event object.
12-28
12-29
You can choose to write event monitor information to a file, or, in Version 8.1 of DB2, you can
choose to have the event monitor send streams of this information to a table.
For the WRITE TO FILE option, specify the path of the directory where the event monitor
should write the event data files. The event monitor writes out the stream of data as a series of
8-character numbered files with the extension .evt (for example, 00000000.evt, 00000001.evt,
and 00000002.evt).
If you specify the WRITE TO TABLE option, tables are created in the current database
according to which events you have chosen to monitor. Event monitor tables can be identified by
the _event_monitor_name string in the table name.
12-30
For the MAXFILES option, specify an integer value to set a limit on the number of event
monitor files that can exist for a particular event monitor at any time.
By default, there is no limit to the number of files.
12-31
This option specifies a limit on the size of each event monitor file (in units of 4K
pages). Specify the keyword NONE to remove any restriction on the file size; in that
case, make sure the value for MAXFILES is 1.
The default for MAXFILESIZE for UNIX is 1000 4K pages, and the default for Windows is 200
4K pages.
12-32
The BUFFERSIZE option specifies the size of the event monitor buffers (in units of 4K pages).
The default is two buffers, each four 4K pages in size.
12-33
APPEND indicates that, if event data files already exist when the event monitor is turned on,
then the event monitor appends the new event data to the existing stream of data files.
REPLACE indicates that, if event data files already exist when the event monitor is turned on,
then the event monitor erases all of the event files and starts writing data to file 00000000.evt.
The default is APPEND.
12-34
MANUAL START indicates that the event monitor does not start automatically each time the
database is started. AUTOSTART is used to start the event monitor automatically each time the
database is started.
The default is MANUAL START.
Example 1:
CREATE EVENT MONITOR smithstaff
FOR DATABASE, STATEMENTS
WHERE APPL_NAME = 'staff' AND AUTH_ID = 'jsmith'
WRITE TO FILE '/database/inst101/sample' MAXFILES 25
MAXFILESIZE 1024 APPEND
Example 2:
CREATE EVENT MONITOR stmt_evts FOR STATEMENTS
WRITE TO FILE '/database/inst101/sample' MAXFILES 1
MAXFILESIZE NONE AUTOSTART
Example 3 (DB2 UDB Version 8):
CREATE EVENT MONITOR stmt_evts FOR STATEMENTS
WRITE TO TABLE MAXFILES 1
MAXFILESIZE NONE AUTOSTART
12-35
12-36
Above are some examples of commands to start event monitoring and flush the event
monitoring buffer to disk.
12-37
Two utilities are provided that allow you to read the output from an event monitor. The
db2evmon utility displays results in a text-based format. The db2eva utility displays the event
monitor results using a graphical format. Examples of commands that use these utilities are
shown above.
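For example (the database and monitor names are illustrative):

```shell
# Text format: read the .evt files for the named event monitor.
db2evmon -db sample -evm stmt_mon

# Graphical format: open the same data in the event analyzer.
db2eva -db sample -evm stmt_mon
```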
12-38
12-39
To display the statements that were executing during the monitored event, right-click on the
monitored time period and choose Open as > Statements.
12-40
Above is a sample view of statements that were executing when the monitored event occurred.
Note in the slide panel above, you can readily determine the type of SQL statement being
monitored, as well as the database operation occurring as a result of that statement’s execution.
12-41
The Health Monitor is a server-side tool that constantly monitors the health of the instance, even
without user interaction. If the Health Monitor finds that a defined threshold has been exceeded
(for example, the available log space is not sufficient), or if it detects an abnormal state for an
object (for example, an instance is down), the Health Monitor will raise an alert.
When an alert is raised two things can occur:
Alert notifications can be sent by e-mail or to a pager address, allowing you to contact
whoever is responsible for a system.
Preconfigured actions can be taken. For example, a script or a task (implemented from
the new Task Center) can be run.
Alerts can be monitored and configured through the Health Center. To start the Health Center,
select the Health Center icon from the Command Center tool bar, select Tools > Health Center
from the Command Center menu, or choose Start > Program Files > IBM DB2 > Monitoring
Tools > Health Center from the Windows desktop. You can also start the Health Center by
executing the db2hc command at the command line.
In the Health Center window, you can choose the type of alerts (alarm, warning, attention, or
normal) to display by selecting one of the four buttons above the object panel on the right side of
the window. The icons highlighted in the button indicate which alert type will be presented in
the alert panel on the right side of the window. These icons are shown below.
[Icon legend: Attention, Normal]
The Health Monitor gathers information about the health of the system using new interfaces that
do not impose a performance penalty. It does not turn on any snapshot monitor switches to
collect information. The Health Monitor is enabled by default when an instance is created; you
can deactivate it by setting the HEALTH_MON database manager configuration parameter to Off.
12-43
A health indicator is a system characteristic that the Health Monitor checks. The Health Monitor
comes with a set of predefined thresholds for these health indicators. The Health Monitor checks
the state of your system against these health-indicator thresholds when determining whether to
issue an alert.
Using the Health Center, commands, or APIs, you can customize the threshold settings of the
health indicators, and define who should be notified and what script or task should be run if an
alert is issued. This allows you to configure the Health Monitor to retune itself when
performance problems occur, or even “heal” itself when it encounters a critical problem.
To modify Health Monitor indicator settings, expand the object window to display databases,
right-click on the database name, and choose Configure > Database Object Health Indicator
Settings. This displays the Configure Database Object Health Indicator window, as shown
above.
12-44
You should now complete the lab exercises for Module 12.
12-45
Query Optimization
13-2
13-3
[Figure: SQL compiler flow — Parse Query → Check Semantics → Rewrite Query → Pushdown Analysis → Optimize Access Plan → Executable Plan → Execute Plan, all operating on the Query Graph Model; explain output flows to the explain tables, which feed the Visual Explain tool, db2exfmt, and db2expln.]
Compiler Steps
When an SQL command is submitted, the following actions occur:
Parse query — The SQL compiler analyzes the SQL query to validate the syntax.
Check semantics — The SQL compiler makes sure the database objects referenced in
the statement are correct. For example, the compiler checks to make sure that the data
types of the columns match the actual table definition. In addition, behavioral semantics
are added, such as referential integrity, constraints, triggers, and so forth.
Rewrite query — The compiler transforms the query so that it can be optimized more
easily.
Pushdown analysis — This step is only relevant for federated database queries. The
compiler determines if an operation can be remotely evaluated (pushed-down) at a data
source.
Optimize access plan — The SQL optimizer (a portion of the SQL compiler) generates
many alternative execution plans and chooses the plan with the least estimated
execution cost.
[Figure: EXPLAIN of a SELECT statement — explain output is written to the explain tables, and recommendations are written to the advise tables.]
13-6
Explain is a facility to capture detailed information about the access plan chosen by the SQL
compiler to resolve an SQL statement. It supports both static and dynamic SQL, and it supports
both text and graphical displays. All elements of SQL processing are captured, including table
access, index access, joins, unions, scans, and so forth. The explain output information is stored
in a set of persistent explain tables and recommendations are written to a set of advise tables.
There are seven explain tables and two advise tables that are used to provide access plan
information. The explain tables include:
EXPLAIN_ARGUMENT — unique characteristics for each individual operator
EXPLAIN_INSTANCE — main control table for explain
EXPLAIN_OBJECT — objects required by access plan (tables, indexes, and so forth)
EXPLAIN_OPERATOR — operators needed by access plan (table/index scans)
EXPLAIN_PREDICATE — matches predicates to operators
EXPLAIN_STATEMENT — text of the statement (original and rewritten)
EXPLAIN_STREAM — data flows within the query
The advise tables contain recommendation information and include:
ADVISE_INDEX — represents the recommended indexes
ADVISE_WORKLOAD — represents the statements that make up the workload
13-8
13-9
The syntax for the EXPLAIN statement is shown above. Here is a description of the syntax
elements:
FOR | WITH SNAPSHOT — The FOR clause indicates that only an explain snapshot
is taken and placed into the SNAPSHOT column of the EXPLAIN_STATEMENT
table. The WITH clause indicates that, in addition to the regular explain information, an
explain snapshot is taken. The explain snapshot information is intended for use with
Visual Explain.
SET queryno=integer — This option associates an integer, using the QUERYNO
column in the explain_statement table, with an explainable SQL statement. The
integer supplied must be a positive value. For all dynamic SQL statements the default is
1, and for any static EXPLAIN statement, the default value is the statement number
assigned by the precompiler.
SET querytag=string — This option associates a string, using the querytag column in
the explain_statement table, with an explainable SQL statement. The string can be up
to 20 bytes.
FOR sql_statement — This clause specifies the SQL statement to be explained. This
statement can be any valid DELETE, INSERT, SELECT, SELECT INTO, UPDATE,
VALUES, or VALUES INTO statement.
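Putting these clauses together, an EXPLAIN statement might be issued from the CLP like this; the query and QUERYNO value are illustrative:

```shell
# Capture explain information and a snapshot for a SELECT statement,
# tagging it with QUERYNO 13 in the EXPLAIN_STATEMENT table.
db2 "EXPLAIN PLAN WITH SNAPSHOT SET QUERYNO = 13 FOR SELECT * FROM staff"
```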
13-11
The CURRENT EXPLAIN MODE and CURRENT EXPLAIN SNAPSHOT special registers
hold a VARCHAR(254) value which controls the behavior of the explain facility with respect to
eligible dynamic SQL statements. The CURRENT EXPLAIN MODE facility generates and
inserts explain information into the explain tables. The CURRENT EXPLAIN SNAPSHOT
generates explain and snapshot information.
Above is the syntax for the SET CURRENT EXPLAIN command. Here is an explanation of the
command options:
NO — This option disables the explain facility, and no explain information is captured.
This is the default value.
YES — This option enables the explain facility and causes explain information to be
inserted into the explain tables for eligible dynamic SQL statements.
EXPLAIN — This option enables the explain facility and causes explain information to
be captured for any eligible dynamic SQL statement that is prepared. However,
dynamic statements are not executed.
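A minimal session using the special register might look like this; the query is illustrative:

```shell
# Capture explain information for eligible dynamic SQL in this session.
db2 "SET CURRENT EXPLAIN MODE YES"
db2 "SELECT * FROM staff"
db2 "SET CURRENT EXPLAIN MODE NO"
```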
Bind File
13-13
EXPLAIN — This option specifies the behavior of the explain information capture.
Snapshot information is not captured.
NO — Explain information is not captured.
YES — Explain tables are populated with information about the chosen access
plan at prep or bind time for static statements.
ALL — Same as the above option. Additionally, explain information is gathered
for eligible dynamic SQL statements at run time, even if the CURRENT
EXPLAIN MODE register is set to NO.
EXPLSNAP — This option specifies the behavior of the explain information capture
including the snapshot information.
NO — An explain snapshot is not captured.
YES — An explain snapshot for each eligible static SQL statement is placed in the
explain tables.
ALL — Same as the above option. Additionally, explain information is gathered
for eligible dynamic SQL statements at run time, even if the CURRENT
EXPLAIN SNAPSHOT register is set to NO.
Here is an example of a BIND command:
db2 BIND '/usr/lpp/db2_07_01/samples/c/static.sqc' BINDFILE
EXPLAIN YES EXPLSNAP YES
db2exfmt
db2expln
dynexpln
13-15
Visual Explain — This tool allows for the analysis of the access plan and optimizer
information from the explain tables through a graphical interface. It is invoked from the
Control Center.
db2exfmt — This command displays the contents of the explain tables in a predefined
format.
db2expln — This command is for static SQL statements. It shows the access plan
information from the system catalog, and contains no optimizer information. This
command is invoked through the command line.
dynexpln — This command is for dynamic SQL statements. It creates a static package
for the statements and then uses the db2expln tool to describe them. It is invoked
through the command line.
13-16
The db2expln command describes the access plan selection for static SQL statements in
packages that are stored in the DB2 system catalogs. Given a database name, package name,
package creator, and section number, the tool interprets and describes the information in these
catalogs. To use this command, you must have SELECT privilege on the system catalog views
and EXECUTE authority for the db2expln package.
The options available with this command are:
–c creator — This is the user ID of the creator. You can specify the creator name using
the pattern matching characters, percent sign (%) and underscore (_) used in a LIKE
predicate.
-d database — This is the name of the database that contains the package to be
explained.
-g — This option directs db2expln to show the optimizer plan graphs. Each section is
examined, and the original optimizer plan graph, as presented by the Visual Explain
tool, is constructed.
-h — This option directs db2expln to display the help information about the input
parameters.
-i — This option directs db2expln to display the operator IDs in the explained plan.
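Combining these options, a db2expln invocation might look like the following; the database, creator, and package names are illustrative:

```shell
# Explain section 0 (all sections) of package STATIC owned by INST01,
# including plan graphs (-g) and operator IDs (-i), output to terminal.
db2expln -d sample -c inst01 -p static -s 0 -g -i -t
```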
13-18
Visual Explain is a GUI (graphical user interface) utility that gives the database administrator or
application developer the ability to examine the access plan constructed by the optimizer.
Visual Explain:
Can only be used with access plans that are explained using the snapshot option
Can be used to analyze previously generated explain snapshots or to gather explain data
and explain dynamic SQL statements
Creates the explain tables if they do not exist
Is invoked from the Command Center or Control Center, as shown above
Is displayed in terms of graphical objects called nodes
An operator indicates an action that is performed on a group of data.
An operand shows the database objects where an operator action takes place. In
other words, an operand is an object that the operators act upon.
Operator Nodes:
1) Filter
2) Sort
3) Join
4) Table Scan
5) Index Scan
Operand Nodes (right-click for detail):
1) Tables
2) Index
13-19
The slide above contains an example of the graphical output displayed by the Visual Explain
tool. To view more detail about any of the operator nodes, right-click on the node and select
Show Details to view this detailed information.
13-20
An example of an Operator Details window for a table scan after a sort on the employee table is
shown above.
The information obtained from the access plan graph can help to:
Design application programs
Design databases
See how tables are joined
Determine how to improve performance
View the statistics used at the time of optimization
Determine if indexes were used
View the effects of tuning
Obtain information about each query plan operation
13-21
The access plan graph is a very useful analysis tool that can be used to:
Design application programs to make the best use of available indexes
Design databases that make the best use of available disk resources
Explain how two tables are joined, including the join method, the order in which the
tables are joined, whether sorting is required and, if so, the type of sorting
Determine ways of improving the performance of SQL statements, for example, such as
creating a new index
View the statistics that were used at the time of optimization, then compare these
statistics to the current catalog statistics to determine whether rebinding the package
might improve performance. It also helps determine whether collecting statistics might
improve performance.
Determine whether or not an index was used to access a table. If an index was not used,
the visual explain function helps determine which columns could be included in an
index to help improve query performance.
View the effects of tuning by comparing the before and after versions of the access plan
graph for a query.
Obtain information about each operation in the access plan, including the total
estimated cost and the number of rows retrieved.
Example:
db2 UPDATE DATABASE CFG FOR sample
USING dft_queryopt 3
13-22
The optimizer has a throttle to control how much optimization is done during the generation of
an access plan. This throttle is managed by setting the DB CFG parameter, DFT_QUERYOPT.
This parameter can be set to any integer from 0 to 9 and the default value is 5. The syntax for
setting this variable using the UPDATE DATABASE CFG command and an example are shown
above.
In general, 5 is a good DFT_QUERYOPT setting for OLTP applications and/or static SQL. A
value of 3 or less is appropriate for OLAP, data warehousing, and/or dynamic SQL. The updated
parameter value takes effect only after restarting the database.
13-23
Record blocking is a caching technique used to send a group of records across the network to the
client at one time. Records are blocked by DB2 according to the cursor type and the
BLOCKING parameter setting in the BIND command. The BLOCKING parameter values are
shown above.
For local applications, the ASLHEAPSZ database manager configuration parameter is used to
allocate the cache for row blocking.
For remote applications, the RQRIOBLK database configuration parameter on the client
workstation is used to allocate the cache for row blocking.
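The BLOCKING parameter is set at bind time; a hedged sketch (the bind file name is illustrative, and BLOCKING accepts ALL, NO, or UNAMBIG):

```shell
# Bind a package with row blocking enabled for all cursors.
db2 "CONNECT TO sample"
db2 "BIND static.bnd BLOCKING ALL"
```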
13-24
You should now complete the lab exercise for Module 13.
13-25
Problem Determination
14-2
14-3
In order to solve the problem, you must understand the nature of the problem, and determine the
cause of the conditions.
Important points to consider:
Is the problem reproducible? If so, how is it reproducible (by the clock, a certain
application)?
Was it a one time occurrence? What were the operating conditions at the time?
14-4
14-6
To troubleshoot the problem correctly, you need to collect the information required to analyze a
DB2 problem and determine the solution.
Use the db2diag.log file to view captured diagnostic information.
If reproducible, setting the DIAGLEVEL to 4 and recapturing the information is
recommended.
Any dump files mentioned in db2diag.log (pid/tid.node).
Any traceback/trap files in the DIAGPATH (tpid/tid.node or *.trp). You must check for
them manually—know the format!
With WE or EE, send all files in the DB2DUMP directory.
To reduce the number of viewed files, clean up this directory on a regular basis.
Also copy and truncate the db2diag.log file.
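The copy-and-truncate step can be scripted; a minimal sketch, using a stand-in file in the current directory (in practice this is the db2diag.log in the DIAGPATH directory):

```shell
# Stand-in log file created for illustration.
log=db2diag.log
printf 'sample diagnostic entry\n' > "$log"

# Keep a dated archive copy, then truncate the live file in place so
# DB2 keeps writing to the same file.
cp "$log" "$log.$(date +%Y%m%d)"
: > "$log"
```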
14-7
Your checklist for the required data for proper error diagnosis includes:
The SQL code
Reason codes
System error codes
A GOOD problem description
A description of the actions preceding the error
Database code level
DBM/DB configuration data
db2diag.log
Time of the error
Any dump file listed in the db2diag.log
Any trap file in the DIAGPATH
Operating system software level and hardware model
DB2 trace
SYSLOG
DB2 event monitors / database snapshots
Reproducible scenario and data
For UNIX:
db2profile
.profile
.rhosts
services file
14-8
14-9
The FFDC collects the data required to analyze the problem based on:
What the error is
Where the error is being encountered
Which partition is encountering the error
14-10
To get the most useful diagnostic information, use DIAGLEVEL 4 whenever possible.
Always use DIAGLEVEL 4 during:
Initial install/configuration time
Times of configuration changes
Times when experiencing errors
14-11
With DIAGLEVEL 4, DB2 logs more information to db2diag.log than at lower levels. This
causes DB2 to run a little slower, but only during the following times:
During an error condition.
During db2start processing.
During an initial connect to a database.
Therefore, you must balance the extra data provided against the decreased response time to
determine the best setting for your environment.
Warning!
Be careful when using DIAGLEVEL 4. Do not set it at that level during normal
daily activities; use it only as a troubleshooting aid. Remember, db2diag.log grows
in size as it is used, so it is a good idea to copy it to a safe location and truncate the
original periodically.
14-12
14-13
In the example above, there is no error code because it is simply an informational message.
From the application ID, observe that this is a local connection from another session on the same
machine where the instance is running.
14-14
From the application ID, observe this is a remote connection from another machine.
We can also see the TCP/IP address of the client is:
9.21.16.109 from ---> 09.15.10.6D
This allows the easy linking of client side calls to server side actions.
Note The TCP/IP address is captured in hexadecimal format. You will need to convert it
to decimal to recognize it.
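The conversion can be done with the shell's printf, which accepts 0x-prefixed hex values; a sketch using the address shown above:

```shell
hex_ip="09.15.10.6D"
# Split the dotted-hex address into octets and print each in decimal.
set -- $(echo "$hex_ip" | tr '.' ' ')
dec_ip=$(printf '%d.%d.%d.%d' "0x$1" "0x$2" "0x$3" "0x$4")
echo "$dec_ip"    # 9.21.16.109
```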
14-15
In most cases, the db2diag.log interprets error codes it receives into text. In the above example,
a bad container path error occurred as a result of specifying a path that does not exist when
trying to add a container.
You can find additional error information in several places. Start with the IBM DB2 UDB
Message Reference. Do a search for your error message. For example, if you look up bad
container path, you will see the error message:
SQL0298N Bad container path.
Explanation: The container path violates one of the following requirements:
...
Examine the list of violations that follow to help identify the cause of the error.
14-16
The return codes returned in the db2diag.log file are sometimes internal DB2 return codes. They
are in hexadecimal and can be in one of two forms:
FFFF1111 — Must be in this form to be used
1111FFFF — If it is in this form, you need to convert by byte reversing the value
14-17
Integers are stored byte reversed on Intel platforms, so DB2 can have return codes in the form
xxxxFFFF. You need to convert them so that their form is FFFFxxxx. Below is a method of
reversing the bytes.
To convert, write the four bytes of the code in reverse order:
If the original is: 1234ABCD
Byte reversal produces: CDAB3412
That is, the bytes 12 34 AB CD are rewritten back to front as CD AB 34 12, which puts the code into the usable FFFFxxxx form.
Note In rare cases the code is neither an SQL code nor can it be found in the DB2
documentation. In this case, the code is only used internally. Contact DB2 support to
confirm.
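The byte reversal described above can be sketched as a small helper (the function name is ours, not part of DB2):

```python
def byte_reverse(code: str) -> str:
    """Reverse the byte order of an 8-hex-digit internal return code,
    e.g. '1234ABCD' -> 'CDAB3412'."""
    # Split into two-character (one-byte) chunks, then reverse their order.
    pairs = [code[i:i + 2] for i in range(0, len(code), 2)]
    return "".join(reversed(pairs))

print(byte_reverse("1234ABCD"))  # -> CDAB3412
```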
In this case, the return code is F616. You can look up this error in the Troubleshooting Guide as
previously discussed.
The F616 error message in the Troubleshooting Guide is:
F616 -902 22 File sharing error
Check the SQL Code:
db2 ? SQL0902
Error output:
SQL0902C A system error (reason code ="<reason-code>") occurred.
Subsequent SQL statements cannot be processed.
Explanation: A system error occurred.
User Response: Record the message number (SQLCODE) and reason code
in the message.
sqlcode: -902
sqlstate: 58005
The SQL code is -902, but this one has a reason code of 22.
DB2 could not remove the directory due to a sharing violation. This requires manual cleanup of the directory.
The directory indicated above had a sharing violation when DB2 attempted to remove the
directory. DB2 cleaned up what it could and dropped the database. However, the DBA must
manually remove the above directory.
The function name gives a good description of what it is doing. You can get more information from DB2 with the command:
db2 "LIST TABLESPACE CONTAINERS FOR 0 SHOW DETAIL"
The tag below indicates the file is in use and assigned to table space 2.
The container is already in use
DB2 does not currently fail if the file is not DB2 related
The container file must be, or have been, a DB2 container
The previous container error can happen for a number of reasons, including:
In UNIX, the file systems were mounted incorrectly
The container cannot be found
An old table space was dropped but the container was not deleted
The old container remains with a DB2 tag in it, preventing its reuse.
Especially true with raw device containers
A drive was restored on the system without a database restore
The old file structure (hence containers) is restored, with the old DB2 tag still in it.
You should now complete the lab exercises for Module 14.
Security
Security
There are three levels of security that control access to a DB2 system:
Instance level
Database level
Database object level
All access to the instance is managed by a security facility external to DB2. The security facility
is part of the operating system or a separate product. Database manager security parameters,
administrative authorities and user privileges are used to control access to databases and data
objects.
Authentication
Authentication is used to verify the user's identity. DB2 passes all user IDs and passwords to the
operating system or external security facility for verification.
You must set the authentication parameter at both the DB2 server and client to control where
authentication takes place. At the DB2 server, the authentication type is defined in the database
manager configuration file (DBM CFG). At the DB2 client, the authentication type is specified
when cataloging a database.
Authentication types available at the DB2 server include:
SERVER, SERVER_ENCRYPT
CLIENT
KERBEROS, KRB_SERVER_ENCRYPT
Authentication types available at the DB2 client include:
SERVER, SERVER_ENCRYPT
CLIENT
DCS
KERBEROS
Gateway authentication is no longer permitted.
Authentication Type: Server
Authentication Type: DCS
Encrypted Password
Authentication Type: KERBEROS
Authentication Type: KRB_SERVER_ENCRYPT
Authentication Type: CLIENT
When the authentication type is set to CLIENT, authentication occurs at the client, and the password is NOT sent to the server for validation unless server validation is forced (for example, through TRUST_CLNTAUTH). CLIENT authentication also enables single-point logon.
Be careful in insecure environments. Windows 9x, Windows 3.1, and Macintosh, for example, do not have a reliable security facility; such clients can connect to the server, even under an administrator's user ID, without any authentication unless TRUST_ALLCLNTS is set to NO on the server.
TRUST_ALLCLNTS
TRUST_CLNTAUTH
Authentication parameters are used to specify where authentication occurs when a user ID and
password are supplied with a CONNECT statement or ATTACH command.
TRUST_CLNTAUTH is active only when AUTHENTICATION is set to CLIENT. (If AUTHENTICATION is set to SERVER, the user ID and password must be sent to the DB2 server on connect.) It takes effect only when a user ID and password are provided on the connection:
If TRUST_CLNTAUTH is set to CLIENT, authentication is done at the client; the user ID and password are not required for CONNECT and ATTACH statements.
If TRUST_CLNTAUTH is set to SERVER, authentication is done at the server when a user ID and password are provided with a CONNECT or ATTACH statement.
TRUST_CLNTAUTH specifies where a trusted client is authenticated. Untrusted clients are always validated at the DB2 server if TRUST_ALLCLNTS is set to NO (regardless of the setting of TRUST_CLNTAUTH).
This parameter is useful if you need to control where authentication takes place based on whether CONNECT sends the user ID and password. Set TRUST_CLNTAUTH to SERVER to reduce the RPCs to the domain controller.
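As a rough summary of the rules above, the following sketch models where a connection is validated for a given combination of settings. It is a simplification of our own (the function name is illustrative, and variants such as encrypted authentication types are ignored):

```python
def auth_location(authentication: str, trust_allclnts: str,
                  trust_clntauth: str, client_is_trusted: bool,
                  creds_supplied: bool) -> str:
    """Return 'client' or 'server': where this connection is validated."""
    if authentication == "SERVER":
        return "server"     # user ID and password always go to the server
    # AUTHENTICATION = CLIENT from here on
    if not client_is_trusted and trust_allclnts == "NO":
        return "server"     # untrusted clients are always validated at the server
    if creds_supplied and trust_clntauth == "SERVER":
        return "server"     # explicit credentials force server-side validation
    return "client"

print(auth_location("CLIENT", "YES", "CLIENT", True, True))   # -> client
print(auth_location("CLIENT", "NO", "CLIENT", False, True))   # -> server
```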
Authorities
Authorities are a high-level set of user rights that allow users to perform administrative tasks such as backing up or creating databases. They are normally required for maintaining databases and instances.
There are five authorities in DB2:
SYSADM — system administration authority
Holds the most authorities and privileges for the DB2 instance
Specify the user group to be used as the SYSADM_GROUP in the DBM CFG
SYSCTRL — system control authority
Provides the ability to perform almost any administration task
Members of SYSCTRL cannot access database objects (unless explicitly granted
the privileges) and cannot modify the DBM CFG
Specify the user group to be used as the SYSCTRL_GROUP in the DBM CFG.
SYSMAINT — system maintenance authority
Allows execution of maintenance activities
Does not allow access to user data and cannot modify the DBM CFG
Specify the user group to be used as the SYSMAINT_GROUP in the DBM CFG
LOAD — load table authority
New authority introduced in DB2 UDB v7.1
Authority defined at the database level
Enables the user to run the LOAD utility without the need for SYSADM or
DBADM authority
DBADM — database administration authority
Authority defined at the database level
Users can perform any administrative task and data access on the database
Authorities in the DBM Configuration
DB2 authority uses groups defined in the operating system security facility.
Authority is not established by the GRANT statement. Instead, it must be set in the database
manager configuration. The configuration parameters that define this authority are shown above.
For example, to specify the SYSCTRL authority:
db2 "UPDATE DBM CFG USING SYSCTRL_GROUP db2cntrl"
db2stop
db2start
Here is an example of a command to list the current setting of the authority for the instance:
db2 "GET DBM CFG"
Database Authority Summary
Function                                SYSADM  SYSCTRL  SYSMAINT  DBADM
Update DBM CFG                          yes
Grant/revoke DBADM                      yes
Specify SYSCTRL group                   yes
Specify SYSMAINT group                  yes
Force users                             yes     yes
Create/drop database                    yes     yes
Restore to new database                 yes     yes
Update DB CFG                           yes     yes      yes
Backup database/table space             yes     yes      yes
Restore/roll forward a database         yes     yes      yes
Start/stop a database instance          yes     yes      yes
Run trace                               yes     yes      yes
Take snapshots                          yes     yes      yes
Query table space state                 yes     yes      yes       yes
Update log history file                 yes     yes      yes       yes
QUIESCE table space                     yes     yes      yes       yes
Load tables                             yes                        yes
Create/activate/drop event monitors     yes                        yes
The table above is a summary of functions that are allowed by various authorities.
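The same summary can be captured as data so a script can answer "which authorities may run this function?". The table constant and helper below are hypothetical (the names are ours) and cover only a few rows as an illustration:

```python
# Illustrative subset of the authority summary table above.
AUTHORITY_TABLE = {
    "update DBM CFG":          {"SYSADM"},
    "force users":             {"SYSADM", "SYSCTRL"},
    "update DB CFG":           {"SYSADM", "SYSCTRL", "SYSMAINT"},
    "query table space state": {"SYSADM", "SYSCTRL", "SYSMAINT", "DBADM"},
    "load tables":             {"SYSADM", "DBADM"},
}

def may_perform(authority: str, task: str) -> bool:
    """True if the given authority level is allowed to run the task."""
    return authority in AUTHORITY_TABLE.get(task, set())

print(may_perform("SYSCTRL", "force users"))   # -> True
print(may_perform("SYSMAINT", "load tables"))  # -> False
```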
Privileges
A privilege is the right to create or access a database object. In DB2, there are three types of
privileges:
Ownership (or CONTROL)
Individual
Implicit
A user holding CONTROL privilege has full access to the object.
Individual privileges allow the user to perform specific functions on a database object (for
example, SELECT, DELETE, INSERT, and UPDATE).
Implicit privilege is automatically granted when a user is explicitly granted certain higher level
privileges.
Levels of Privileges
Database
Schema
Table and View
Package
Index
Table space
Alias
Distinct Type (UDT)
User Defined Function (UDF)
The DB2 privileges can be set and used at the levels shown above.
Database Level Privileges
CONNECT
BINDADD
CREATETAB
CREATE_NOT_FENCED
IMPLICIT_SCHEMA
CREATE_EXTERNAL_ROUTINE (Version 8)
Schema Level Privileges
CREATEIN
ALTERIN
DROPIN
Table and View Privileges
CONTROL
ALTER
DELETE
INDEX
INSERT
REFERENCES
SELECT
UPDATE
Package and Routine Privileges
Package privileges:
CONTROL
BIND
EXECUTE
Index and Table Space Privileges
The only index-level privilege is CONTROL. The creator of an index or an index specification
automatically receives CONTROL privilege on the index.
CONTROL privilege on an index is really the ability to drop the index. To grant CONTROL
privilege on an index, a user must have SYSADM or DBADM authority.
The only table-space-level privilege is USE OF, which provides users with the ability to create
tables only in table spaces to which they have been granted access.
Implicit Privileges
Privileges Required for Application Development
The table above shows a list of tasks that are required when developing an application and the
privileges required to perform these tasks.
System Catalog Views
Most of the information on authorizations is maintained in five system catalog views. These
catalogs are listed above.
Hierarchy of Authorizations and Privileges
[Chart: SYSADM sits at the top of the hierarchy. Beneath it are the database-level authorities (CONNECT, BINDADD, CREATETAB, CREATE NOT FENCED, IMPLICIT SCHEMA), CONTROL privilege on packages (with BIND and EXECUTE), CONTROL privilege on tables and views (with ALTER, DELETE, INDEX, INSERT, REFERENCES, SELECT, and UPDATE), CONTROL privilege on indexes, and the schema owners' privileges (ALTERIN, CREATEIN, DROPIN).]
The chart shown above depicts the various authorizations and privileges and how they relate to
one another.
Audit Facility
The audit facility of DB2 UDB allows you to predefine events at the instance level to generate
records in an audit log file. The following event categories, based on scope, can be audited:
AUDIT — Changes in the state of auditing
CHECKING — Authority checking
OBJMAINT — Creation and deletion of DB2 objects
SECMAINT — Overall security (GRANT, REVOKE, and so forth)
SYSADMIN — Action requiring SYSADM authority
VALIDATE — User validation, retrieving user information
CONTEXT — Operation context (SQL statement, for example)
The db2audit Command: How It Works
Summary
Lab Exercises
You should now complete the lab exercises for Module 15.
Module 16
Summary
Course Objectives
Basic Technical References
Advanced Technical References
Next Courses
Evaluation Sheet
Thank You!
Appendixes
Appendix LE
In this appendix, you will learn how to connect to the Lab Exercises
environment in the IBM DB2 classrooms:
Client Setup (Windows) — page LE-3
DB2 Server Setup (Windows) — page LE-4
DB2 Server Setup (UNIX / Linux) — page LE-5
On the Windows platform, you have three ways to work with DB2:
Graphical User Interface
Command Line Processor
Command Window (using the CLP)
Command Window
Use the mouse and navigate to the DB2 Command Window. For example:
Start > Programs > IBM DB2 > Command Line Tools > Command Window
This selection opens a DB2 Command Window. To start the CLP in this window, you must type:
db2
db2 [option-flag] [ db2-command | sql-statement | ? [ phrase | message | sql-state | class-code ] ]
The basic command line syntax for the CLP is shown above.
While the DB2 server is running, you can use the CLP to get command line help as shown
above.
You can also view PDF/HTML technical document files if they were installed with the server.
The IBM DB2 Command Reference document contains further information on using the CLP.
Non-interactive mode:
db2 CONNECT TO eddb
db2 "SELECT * FROM syscat.tables" | more
Interactive mode:
db2
db2=> CONNECT TO eddb
db2=> SELECT * FROM syscat.tables
Use the non-interactive mode if you need to issue OS commands while performing your tasks.
Use the DB2 LIST command to view the Command Line Processor
option settings:
db2 LIST COMMAND OPTIONS
Edit create.tab
COMMIT WORK;
CONNECT RESET;