
Unify DataServer: Configuration Variable and Utility Reference

© 1996, 1997, 2001, 2005 by Unify Corporation, Sacramento, California, USA. All rights reserved. Printed in the United States of America. No part of this document may be reproduced, transmitted, transcribed, stored in a retrieval system, or translated into any language or computer language, in any form or by any means, electronic, mechanical, magnetic, optical, chemical, manual or otherwise without the prior written consent of Unify Corporation. Unify Corporation makes no representations or warranties with respect to the contents of this document and specifically disclaims any implied warranties of merchantability or fitness for any particular purpose. Further, Unify Corporation reserves the right to revise this document and to make changes from time to time in its content without being obligated to notify any person of such revisions or changes. The Software described in this document is furnished under a Software License Agreement. The Software may be used or copied only in accordance with the terms of the license agreement. It is against the law to copy the Software on tape, disk, or any other medium for any purpose other than that described in the license agreement. Unify Corporation values and appreciates any comments you may have concerning our products or this document. Please address comments to:

Product Manager Unify Corporation 2101 Arena Boulevard Ste. 100 Sacramento, CA 95834-1922 (800) 248-6439 (916) 928-6400 FAX (916) 928-6406
UNIFY, ACCELL, VISION, and the Unify Logo are registered trademarks of Unify Corporation. Unify DataServer is a trademark of Unify Corporation. UNIX is a registered trademark of the Open Group in the United States and other countries. The X Window System is a product of the Massachusetts Institute of Technology. Motif, OSF, and OSF/Motif are trademarks of Open Software Foundation, Inc. SYBASE is a registered trademark, and SQL Server, DB Library, and Open Server are trademarks of Sybase, Inc. INFORMIX is a registered trademark of Informix Software, Inc., a subsidiary of IBM. INGRES is a trademark of Computer Associates International, Inc. ORACLE is a registered trademark of Oracle Corporation. Sun is a registered trademark, and SunView, Sun 3, Sun 4, X11/NeWS, SunOS, PC NFS, and Open Windows are trademarks of Sun Microsystems. All SPARC trademarks are trademarks or registered trademarks of SPARC International, Inc. SPARCstation is licensed exclusively to Sun Microsystems, Inc. Novell is a registered trademark of Novell, Inc. Macintosh is a trademark of Apple Computer, Inc. Microsoft, MS, MS DOS, and Windows are registered trademarks of Microsoft Corporation. All other products or services mentioned herein may be registered trademarks, trademarks, or service marks of their respective manufacturers, companies, or organizations.

Part Number: 7803-03


Contents

About This Manual
    Related Documents
    Using This Manual
    Syntax Conventions
    Icons

Configuration Variable Reference
    Configuration Variable Descriptions
        AMFILE, AMLEVEL, AMTFMT, AMTNULLCH, AUSHMKEY, AUTOSTART, BESHMKEY, BTBUFSIZE, BTREECUTOFF,
        BUCHECKSUM, BUDEV, BURDSZ, BUSHMKEY, CENTURY_CUTOFF, CLDMAXSLP, CLIENTINFO (Unify/Net only),
        CMAGEINT, CMDENSITY, CMMINFRE, CMOPTFRE, CMPLOCK, CMPTFLG, CMSHMKEY, CMSLPINT, CONFIG_READONLY,
        COREDUMP, COREMAX, COREPATH, CORESIG, CREATSH, CURR, CURRSYM, DATEFMT, DATNULLCH, DBCHARSET,
        DBHOST, DBNAME, DBPATH, DBSFILE, DBSHMKEY, DBUSER, DBVFILE, DDBTCHNKS, DDCOLCHNKS, DDHSHCHNKS,
        DDLNKCHNKS, DDSHMKEY, DDTBLCHNKS, DISFILE, DMNNICE, DMNTMP, EDIT, ERRFILE, FLTFMT, FLTFMT31,
        FLTNULLCH, FMMINFRE, FMNAP, FMOPNMODE, FMSHMKEY, FND1STFASTFAC, FREQTYPE, FREQUENCY, HISTINTRVL,
        HSHCHKUNIQ, INITMEM, IXSHMKEY, JOURNAL, JOURNAL2, L0FORMAT, LANG, LANGDIR, LINKCUTOFF, LMBTSPIN,
        LMCASPIN, LMCDSPIN, LMCNAP, LMCSPIN, LMDNAP, LMHTSPIN, LMLKSPIN, LMLOCKDIR, LMNRETRY, LMPROMO,
        LMRNAP, LMROWLOCKHASH, LMSHMKEY, LMSMSPIN, LMTNAP, LMVDSPIN, LMVLSPIN, LOGALL, LOGARCHIVE,
        LOGARCHIVE2, LOGBLK, LOGFILE, LOGFM, LOGOPNMODE, LOGRC, LOGTX, LOGUSER, LONGQUERYTIME, MAXBTSCAN,
        MAXBUJRNS, MAXCACHE, MAXCOLS, MAXOPNBTS, MAXSCAN, MAXSYSTX (Read only), MAXTBLS, MAXUSRTX,
        MXIAQP, MXOPENCURSORS, MXQRYTXT, NAMECACHE, NAMECACHEMX, NAMECACHESZ, NBUBUF, NBUCKET, NBUPROC,
        NFUNNEL, NULLCH, NUMFMT, NUMNULLCH, OPMSGDEV, OPNOTIFY, OWNERSTART, PHYFILE, PHYHASH, PROCESSOR,
        RADIXSEP, RALFILE, REPTMAXMEM, RHLIGLOB-COMPAT, RKYMAXMEM, RMTROWBUFSZ (Unify/Net only),
        RPT13GLOB, SCHEMA, SCHFILE, SEPARATOR, SHELL, SHMADDR, SHMDEBUG, SHMDIR, SHMFULL,
        SHMKEY and XXSHMKEY, SHMKIND, SHMMARGIN, SHMMAX, SHMMERGE, SHMMIN, SHMMODE, SHMNAP, SHMNAPINCR,
        SHMNAPMAX, SHMOFFSET, SHMRSRV, SHMSPIN, SHUTDBSIG, SHUTDOWN, SORTCOST, SPMAXNEST, SPOOLER,
        SPSECURE, SPTRACEFILE, SQLATOMICDML, SQLCHARCNT, SQLCNLCNT, SQLCONCNT, SQLDBGON, SQLDDLSIZ,
        SQLESCNTX, SQLEUPDTX, SQLFLDCNT, SQLFNCNT, SQLIDMAP, SQLISCNTX, SQLIUPDTX, SQLNMSZ, SQLNODECNT,
        SQLORDCNT, SQLPBUFSIZ, SQLPMEM, SQLQUERYCNT, SQLSELCNT, SQLSMEM, SQLSTATS, SQLTABCNT, SQLTFBSIZ,
        SQLTPENABLE, STRNULLCH, SYNCRETRY, SYNCTOUT, TBLDSSZ, TIMEFMT, TIMEM (Determining TIMEM for
        Nested SELECT Statements, for Duplicate Joins, and for No-Duplicate Joins), TIMNULLCH, TMPDIR,
        TMSHMKEY, TRIADSEP, TRIGGERMAXNEST, TUPBUFSIZE, TXLOGFULL, TXTNULLCH, UAMOUNT64, UCCNAME,
        UCURRFMT, ULDACCESS, ULDLIBCOUNT, ULDNAME, UNICAP, UNIFY, UNIFY_REGCMP, UNIFY_REGCMP_SZ,
        UNIFYPORT, UNIFYTMP, UNUMERIC64, UPPNAME, VOLGROUP, VOLMODE, VOLOWNER, WP4DIGITYEARS

Utilities Reference
    Using Unify DataServer Utilities
        Entering Utility Commands
        Referring to Database Objects (Specifying Column Names, Specifying Table and Schema Names,
            Specifying Database Names)
        Using the Name Cache
    Utilities Descriptions
        Format
        Syntax Conventions
        addcgp, bldcmf, btstats, budb, bunotify, bureply, chkbu, chkjrn, ckunicap, cldmn, config,
        Configuration Source File, creatdb, dbcnv, dbdmn, dbld, dbld Input File, dbld Specification
        File, dbname, DIS Source File, disc, drpobj, dumpdd, EPP, fmdmn, htstats, irma, lmshow,
        lnkstats, migrate, mklog, mkvol, pdbld, prtlghd, redb, remkview, schempt, schlst, shmclean,
        shmmap, shutdb, SQL, sqla.ld, startdb, syncdb, tblstats, ucc, ucrypt, udbqls, ukill, uld,
        ulint, unifybug, uperf, upp, volstats

About This Manual


This manual provides information about the use and function of the Unify DataServer DBA utilities and configuration variables. This manual is written for system administrators, database administrators, or application developers who are responsible for developing the database design and keeping the database application running smoothly. These are privileged users who have permission to create and delete databases, tables, and columns.

Related Documents

The following publications contain information related to the contents of this manual. You may need to refer to one or more of these manuals as you develop your Unify DataServer database.

    Unify DataServer: Managing a Database
    Unify DataServer: Writing Interactive SQL/A Queries
    Unify DataServer: SQL/A Reference
    Unify DataServer: Developing a Database

Using This Manual

This manual is one of a set that describes the Unify DataServer relational database management system. If you are new to Unify DataServer, read Unify DataServer: Developing a Database and Unify DataServer: Managing a Database before trying to use the Unify DataServer tools described in this book. Unify DataServer: Developing a Database describes the various components of the Unify DataServer relational database management system (RDBMS).

Syntax Conventions

This manual uses the following syntax conventions to describe the format of SQL/A statements and functions, RPT statements, and operating system commands: boldface Boldface words are command keywords or literal strings that you must type exactly as shown.
13

About this manual

italic words

Italic words indicate words, variables, numbers, or expressions that you must provide. Examples are table names, column names, and constants. Italic words are also used for manual names and terms defined in the glossary. All-uppercase italics are used for configuration variable names.

UPPERCASE

UPPERCASE words are SQL/A keywords. SQL/A keywords are not case-sensitive: you can type either uppercase or lowercase letters.

[]

Nonbold square brackets indicate that the enclosed word or item is optional and may be left out. Boldface brackets, [ ], are syntax elements that must be included, as in count[character]. Vertical bars enclosing a stacked list of options mean that you can choose one of the listed options, but only one of them. Curly braces enclose items that can be repeated. Ellipsis points indicate that you may repeat the immediately preceding item any number of times, as needed. The immediately preceding item may be enclosed in curly braces.

||

{} ...

14

About this manual

Icons

The manual also uses the following icons:

Tip
    A Tip contains helpful information.

Warning
    A Warning cautions against actions that could cause data loss or damage to the database.

Additional Help
    Additional Help tells you where to find more information about described topics.

Performance
    A Performance note gives information that can improve the performance of the database.

A key emblem indicates that you are to type the information on the command line as shown; do not follow the text with a carriage return unless so indicated. A curved arrow symbol instructs you to type a carriage return after the command. A filled triangle indicates the results of the step or information you just entered.


Configuration Variable Reference


Focus

This section of the manual lists the Unify DataServer configuration variables in alphabetical order. The first few pages of this section describe the format that is used for each configuration variable description. For information about how these configuration variables determine your application's environment, see the chapter Configuring Database Environments in Unify DataServer: Managing a Database.


Configuration Variable Descriptions

Each of the following configuration variable descriptions includes a summary table that contains the following information:

Dependencies
    Indicates any constraints in relation to the operating system, other configuration variables, or other Unify DataServer features.

Valid values
    Lists the possible values and their effects, if applicable. Where valid values are integers, the settings can include these abbreviations:

        Abbreviation    Meaning
        k               1024, as in 32k instead of 32768
        b               512, as in 15b instead of 7680

Default value
    Indicates the value supplied by Unify DataServer if the configuration variable is not set. This is an initial value only; the default may have been overridden in the unify.cf or prod.cf files or in your local environment.

Example
    Shows one or more examples of valid values for the configuration variable.

Additional help
    Cites related configuration variables and manuals that contain related information.


Defined values

Unify DataServer recognizes several defined values as configuration variable values. Defined values make configuration variable settings more readable and thus more easily maintained. The following table summarizes the defined values that can be used in configuration variable settings. A defined value of undefined enables you to use the software default configuration variable value when another default is defined in the configuration file.

    Defined Value    Value
    TRUE             1
    FALSE            0
    YES              1
    NO               0
    undefined        Treat the defined configuration variable as if it were undefined.
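
For illustration only, a configuration file might use defined values such as the following (the variables chosen here are arbitrary, and the file is shown in a simple NAME value form; see the Configuration Source File description under the config utility for the exact syntax):

    AUTOSTART  FALSE
    CMDENSITY  YES
    COREDUMP   undefined

Here YES is equivalent to 1 (TRUE), FALSE to 0, and undefined tells Unify DataServer to fall back to the software default even if another default is defined elsewhere in the configuration file.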


AMFILE

File into which access method information is to be written when an SQL/A query is executed. This access method information is related to B-tree, link, or sequential access methods used in a previous selection operation. If the specified file does not exist, it will be created when the first query is executed. If the file exists, information will be appended to the file. To start writing access method information to this file, use the AMLEVEL configuration variable.
AMFILE Configuration Variable
    Dependencies:      Used with the AMLEVEL configuration variable
    Valid values:      Path and file name
    Default value:     (no file)
    Example:           /usr/u2k/database1/scan_info
    Additional help:   AMLEVEL configuration variable description; Analyzing Access Method Performance in Unify DataServer: Managing a Database

AMLEVEL

Option to control reported access method information. The access method information is written to the file that is specified by the AMFILE configuration variable.

AMLEVEL Configuration Variable
    Dependencies:      Used with the AMFILE configuration variable
    Valid values:      -1   Turn on access method reporting
                       0    Turn off access method reporting
    Default value:     0
    Additional help:   AMFILE configuration variable description; Analyzing Access Method Performance in Unify DataServer: Managing a Database
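
As an illustration (not taken from the manual's own examples), access method reporting could be enabled for one session from a Bourne shell before running queries:

    AMFILE=/usr/u2k/database1/scan_info; export AMFILE
    AMLEVEL=-1; export AMLEVEL

Setting AMLEVEL back to 0 stops the reporting; the file named by AMFILE is kept and is appended to the next time reporting is turned on.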


AMTFMT

Default format template to be used to display AMOUNT data.

The currency symbol ($), triad separator (,) and radix separator (.) may be overridden by the CURRSYM, TRIADSEP and RADIXSEP configuration variables respectively. If these values are not set, then the values used in the template are used. For example, if AMTFMT is set to ###,##&.&&$ (with a single $ on the right), and CURRSYM is set to DM, the amount 123456.78 is displayed as 123,456.78 DM. Any setting for CURR overrides AMTFMT. If you wish to use AMTFMT, be certain that CURR remains unset. To display CURRENCY data, use the UCURRFMT configuration variable.
AMTFMT Configuration Variable
    Dependencies:      Deprecated CURR configuration variable must be unset
    Valid values:      Any valid AMOUNT format template that is recognized by the SQL/A DISPLAY clause, ACCELL/SQL or RPT, or the C language printf() function format (for example, %1.2lf). The value must be enclosed in quotation marks ("print_format").
    Default value:     %.2f
    Example:           ###,##&.&&
    Additional help:   CURRSYM, TRIADSEP, RADIXSEP, and UCURRFMT configuration variable descriptions
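
To reproduce the DM example above from a Bourne shell (the values are illustrative, and CURR must remain unset):

    AMTFMT='###,##&.&&$'; export AMTFMT
    CURRSYM='DM'; export CURRSYM

With these settings, the AMOUNT value 123456.78 is displayed as 123,456.78 DM.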


AMTNULLCH

Null display character for AMOUNT data. The character specified in AMTNULLCH overrides NULLCH for AMOUNT data.
AMTNULLCH Configuration Variable
    Valid values:      Any printable character enclosed in quotation marks ("character")
    Default value:     *
    Example:           #
    Additional help:   NULLCH configuration variable description

AUSHMKEY

Key that identifies the segment of shared memory that is used by the authorization manager. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
AUSHMKEY Configuration Variable
    Dependencies:      Can be set only from a configuration file
    Valid values:      0 through 0x7fffffff
    Default value:     Value of SHMKEY, for example 6904
    Example:           0x1cc123
    Additional help:   SHMKEY configuration variable description
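
For example, the same key can be written in any of the three notations described above; the following three settings are equivalent (the value is illustrative, shown in a simple NAME value configuration-file form; see the Configuration Source File description for the exact syntax):

    AUSHMKEY  6904
    AUSHMKEY  0x1af8
    AUSHMKEY  015370

Because AUSHMKEY can be set only from a configuration file, such a line belongs in unify.cf or prod.cf rather than in the environment.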


AUTOSTART

Database automatic startup flag.


AUTOSTART Configuration Variable Dependencies: Valid values: Can be set only from a configuration file Any database process such as SQL/A can start the database FALSE Only the startdb utility can start the database
TRUE TRUE

Default value: Additional help:

Unify DataServer: Managing a Database
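
For example, to ensure that only an administrator running startdb can bring the database up, a configuration file entry such as the following could be used (illustrative; see the Configuration Source File description for the exact file syntax):

    AUTOSTART  FALSE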

BESHMKEY

Key that identifies the segment of shared memory that is used for backend communications. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
BESHMKEY Configuration Variable
    Dependencies:      Can be set only from a configuration file
    Valid values:      0 through 0x7fffffff
    Default value:     Value of SHMKEY, for example, 6904
    Example:           0x1cc123
    Additional help:   SHMKEY configuration variable description


BTBUFSIZE

Size of the B-tree look-ahead buffer, in number of row IDs. The B-tree look-ahead buffer is used to hold row IDs obtained from a B-tree scan or select in anticipation of their row values being fetched by the application. The B-tree look-ahead buffer increases performance by reducing B-tree accesses to retrieve key values, but reduces concurrency. Concurrency is reduced because the use of a buffer leaves a time frame open in which insertions and deletions by other users are not reflected in the current user's buffer and therefore may not be seen. For applications where data integrity is critical, you can turn buffering completely off.

Performance: If you use the RHLI ordered access functions or perform many B-tree scans, this parameter can greatly improve performance.

BTBUFSIZE can be overridden at the environment level to use the specified value for all B-trees accessed by a particular application. This enables applications where the probability of conflicts is low to improve performance through buffering.

BTBUFSIZE Configuration Variable
    Valid values:      Any positive integer; set to 0 to disable buffering
    Default value:     100
    Additional help:   Unify DataServer: Managing a Database
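
As a sketch of how the per-application override might be used from a Bourne shell (the value 500 is illustrative, not a recommendation), a scan-heavy reporting session could raise the buffer, while an integrity-critical session could disable it; use one or the other, depending on the session:

    BTBUFSIZE=500; export BTBUFSIZE     # larger look-ahead buffer for long ordered scans
    BTBUFSIZE=0; export BTBUFSIZE       # disable look-ahead buffering entirely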

BTREECUTOFF

The B-tree access method cutoff percentage. The cutoff value determines the percentage of total rows that can be selected by the B-tree access method. If the number of selected rows is above that percentage of the table, sequential access is used.

BTREECUTOFF Configuration Variable
    Valid values:      Integer from 0 through 100
    Default value:     15
    Example:           35
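
For example (illustrative setting):

    BTREECUTOFF  35

With this value, a query expected to qualify more than 35 percent of a table's rows is resolved by a sequential scan; at or below that percentage, the B-tree access method remains eligible.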

BUCHECKSUM

BUCHECKSUM is no longer optional. All backup tapes will have checksum turned on by default. If you created a backup with BUCHECKSUM turned off, you will get a warning that states no checksum was found. These warnings can be safely ignored.



BUDEV

Backup device or file name and related information, specified as a comma-separated list of keyword=value pairs. The backup device can be a diskette drive, a hard disk, a cartridge tape drive, or a 9-track tape drive. Automatic backup management requires a nonmountable device.

BUDEV Configuration Variable
    Valid values:      A comma-separated list of information about the backup device, specified in this format: DEVNM=value,BLKSZ=value,MAXBLK=value,TYPE=value. See the table BUDEV Value Components, following.
    Default value:     See the per-keyword defaults in the BUDEV Value Components table.
    Example:           DEVNM=/dev/rmt0,BLKSZ=32k,MAXBLK=0,TYPE=mount

The following table describes the components of the BUDEV configuration variable.

BUDEV Value Components

    DEVNM     Name of the backup device. In automatic backup management mode, the program adds a serial-number extension, followed by two more digits (for the volume number).
              Default: dbname.bu          Example: /dev/rmt0

    BLKSZ     Size in bytes of the blocks read from or written to the backup device; must be at least 16k.
              Default: 32k                Example: 32k

    MAXBLK    Number of blocks that can be read from or written to a volume mounted on the backup device. (1) If not set or set to 0, blocks are written until the write fails.
              Default: 0

    TYPE      Type of backup device used:
                  mount     A mountable device such as a tape or floppy disk
                  nomount   A nonmountable device such as a hard disk
                  auto      Automatic backup mode; requires a nomount device
              Default: nomount

    (1) On some systems, writing to a tape fails if the end of the tape is reached during the write. If you are using a system that does not use the EOT (end-of-tape) feature, you must set the MAXBLK value so that the end of tape is never reached. Set MAXBLK to a number of blocks that all tapes can hold.
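
Two illustrative settings, one for a mountable tape drive and one for automatic backup management to disk files (the device name, path, and MAXBLK value are examples only, shown in a simple NAME value configuration-file form):

    BUDEV  DEVNM=/dev/rmt0,BLKSZ=32k,MAXBLK=150000,TYPE=mount
    BUDEV  DEVNM=/u/backups/mydb.bu,BLKSZ=32k,MAXBLK=0,TYPE=auto

In the second form the "device" is a hard disk file, since automatic backup management requires a nonmountable device, and the program appends a serial-number extension and volume number to the file name.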

BURDSZ

Size in bytes of the buffer used by budb and redb to write and read the database backup files on devices. On most systems, the default value for BURDSZ is sufficient. However, in some cases, you may need to increase the value. Use BURDSZ with the NBUBUF and BUDEV configuration variables to improve the performance of systems where the transfer rate of the backup device exceeds the transfer rate of the budb and redb utilities.

BURDSZ Configuration Variable
    Dependencies:      The size of BURDSZ must be less than the tape block size specified by the BLKSZ portion of the BUDEV configuration variable. This variable is used only on devices.
    Valid values:      A positive integer
    Default value:     16384
    Additional help:   BUDEV, NBUBUF configuration variable descriptions


BUSHMKEY

Key that identifies the segment of shared memory used by the backup manager. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
BUSHMKEY Configuration Variable
    Dependencies:      Can be set only from a configuration file
    Valid values:      0 through 0x7fffffff
    Default value:     The current value of SHMKEY, for example 6904
    Example:           0x1cc123
    Additional help:   SHMKEY configuration variable description

CENTURY_CUTOFF

The first year that DataServer 6.0 will recognize as belonging to the lower of the two nearest centuries. When an application user enters a two-digit year, values below the cutoff are assigned to the next (higher) century. If, in 2004, CENTURY_CUTOFF is 27, DataServer 6.0 will treat year values of 27 through 99 as 1927 through 1999, but values of 00 through 26 will be seen as 2000 through 2026. Designers often plan on increasing CENTURY_CUTOFF at least decennially.

CENTURY_CUTOFF Configuration Variable
    Valid values:      Any two-digit number
    Default value:     00


CLDMAXSLP

Maximum time, in seconds, between the cleanup daemon's (cldmn) periodic cleanup operations. If CLDMAXSLP is set too high, shared memory may not be cleaned out quickly enough. Setting this value too low will increase CPU time because cldmn will perform its cleanup operations more often. The DataServer for Windows cleanup daemon works differently than the UNIX version. If there are 64 or fewer processes, the DataServer for Windows cleanup daemon cleans up after database processes as soon as they exit, and a periodic check is not required. If there are more than 64 database processes, CLDMAXSLP is the maximum delay before a dead process is detected and cleaned up.

CLDMAXSLP Configuration Variable
    Dependencies:      Can be set from a configuration file only. The time is set when the database is started.
    Valid values:      1 through 2147483640
    Default value:     300
    Example:           120
    Additional help:   Architecture and Transaction Processing in Unify DataServer: Managing a Database

CLIENTINFO
(Unify/Net only)

Directory search path of the directory on the client machine where information about the database is stored.
CLIENTINFO Configuration Variable
    Valid values:      Any valid directory search path
    Default value:     . (the current directory)
    Example:           /usr/remote/clientdb
    Additional help:   Unify/Net Guide


CMAGEINT

Cache management page aging interval (references). The value specified is the number of cache reads or writes between checks. This variable determines how often the cache pages are aged out.
CMAGEINT Configuration Variable
    Dependencies:      Can be set only from a configuration file
    Valid values:      Any positive integer, or -1 (-1 specifies that MAXCACHE * 10 is used). A recommended value is 100 * the value specified by MAXCACHE; for example, if MAXCACHE is set to 1600, then CMAGEINT is set to 160000.
    Default value:     1000000000
    Example:           160000
    Additional help:   MAXCACHE configuration variable description
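
For example, following the recommendation above, a site running with MAXCACHE set to 1600 might place these lines in its configuration file (shown in a simple NAME value form; see the Configuration Source File description for the exact syntax):

    MAXCACHE  1600
    CMAGEINT  160000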

CMDENSITY

Flag that indicates the page replacement algorithm to be used by the cache manager when placing pages on the free list. If CMDENSITY is set to TRUE, the reference density algorithm is used to keep track of how often a page is referenced. The cache manager transfers the least-referenced pages from the cache to the free list. If CMDENSITY is set to FALSE, no density algorithm is used. The cache manager simply transfers pages to the free list in order. In this case, you need a larger free list and can expect many reclaims from the free list.
CMDENSITY Configuration Variable
    Dependencies:      Can be set only from a configuration file
    Valid values:      TRUE    Use the reference density algorithm
                       FALSE   Use a simple hand algorithm
    Default value:     TRUE
    Additional help:   Unify DataServer: Managing a Database


CMMINFRE

For cache management, the minimum percentage of pages on the cache free list. For optimum performance, the number of pages in the cache free list should be between the value for CMMINFRE and CMOPTFRE.
CMMINFRE Configuration Variable
    Dependencies:      Must be less than CMOPTFRE. Can be set only from a configuration file.
    Valid values:      1 through 100
    Default value:     5
    Additional help:   CMOPTFRE configuration variable description; Tuning the Cache in Unify DataServer: Managing a Database

CMOPTFRE

For cache management, the optimum percentage of pages on the cache free list. The number of pages in the cache free list should be between the value for CMMINFRE and CMOPTFRE. If the number of pages on the cache free list falls below CMOPTFRE, the cmdmn places the least referenced allocated page buffers on the cache free list.
CMOPTFRE Configuration Variable
    Dependencies:      Must be greater than CMMINFRE. Can be set only from a configuration file.
    Valid values:      2 through 100
    Default value:     15
    Additional help:   CMMINFRE configuration variable description; Tuning the Cache in Unify DataServer: Managing a Database
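
For example, to keep the free list between 5 and 15 percent of the cache (the default relationship), the two variables could be set together in the configuration file (illustrative NAME value syntax; CMMINFRE must stay below CMOPTFRE):

    CMMINFRE  5
    CMOPTFRE  15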


CMPLOCK

Cache locking status that indicates whether the cache is to be locked in memory. Once the cache is locked into memory, it cannot be unlocked until a new cmdmn daemon is started.
CMPLOCK Configuration Variable
    Dependencies:      Can only be set by a user with root privilege. Can be set only from a configuration file.
    Valid values:      TRUE    The cache is locked in memory
                       FALSE   The cache is not locked in memory
    Default value:     FALSE
    Additional help:   Unify DataServer: Managing a Database

CMPTFLG

Compatibility mode with previous versions of UNIFY DBMS and ACCELL IDS.
CMPTFLG Configuration Variable
    Valid values:      TRUE    Enables compatibility mode
                       FALSE   Disables compatibility mode
    Default value:     FALSE

CMSHMKEY

Key that identifies the segment of shared memory that is used by the cache manager. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243.


Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
CMSHMKEY Configuration Variable
    Dependencies:      Can be set only from a configuration file
    Valid values:      0 through 0x7fffffff
    Default value:     Value of SHMKEY, for example, 6904
    Example:           0x1cc123
    Additional help:   SHMKEY, MAXCACHE configuration variable descriptions

CMSLPINT

For cache management, the minimum amount of time the cache manager will sleep when there is no work to do. The interval is set in 1/100s of a second. Performance When the database cache size is small, the number of pages on the free list can be exhausted before the cache manager can wake up. This results in user page replacements that can slow the execution of database

Configuration Variable Reference

33

processes. You can do any of the following to reduce user page replacements: Increase cache size (MAXCACHE value) This reduces user page replacements and improves database throughput, but database synchronizations will take longer. If you have a high throughput because of a disk drive array, you may wish to increase the cache size. Increase the freelist size (CMOPTFRE value) This reduces user page replacements, but it also increases CPU time due to sequential scanning of the freelist, increases the number of page reclaims, and worsens concurrency. If you have a high throughput because of a disk drive array, you may wish to increase the freelist size. Wake up cmdmn more often (decrease the CMSLPINT value) This reduces user page replacements and reclaims, but increases CPU time to run the cmdmn utility. You may want to decrease the value of CMSLPINT if your cache size is limited because of lengthy database synchronizations. To calculate the best combination of cache size, freelist size, and sleep interval values, first determine the maximum throughput of your disk drives (in 2k blocks per second). Divide this by the freelist size (CMOPTFRE percent of MAXCACHE) to obtain the number of wakeups needed per second. Then divide 100 into this result, rounding down. For example, you would have the following given a MAXCACHE of 1000, CMOPTFRE of 10%, and a disk throughput rate of 800 2k pages per second:
100/(800/(.1*1000))=12.5

You would therefore set CMSLPINT to 12 in order to wake up cmdmn every 12/100s of a second.
CMSLPINT Configuration Variable
Valid values:    1 through 1600
Default value:   100 (The cache sleeps for 1 second.)
Example:         200
Additional help: CMDENSITY, CMSHMKEY, SHMKEY configuration variable descriptions; shmmap, ukill utility descriptions
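Applying the calculation above to the worked figures (a MAXCACHE of 1000, a CMOPTFRE of 10%, and 800 2k pages per second of disk throughput), a configuration file sketch might read as follows; the values are illustrative only:

MAXCACHE=1000
CMOPTFRE=10
CMSLPINT=12        (100 / (800 / 100) = 12.5, rounded down)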


CONFIG_READONLY

Local configuration variable override status. CONFIG_READONLY specifies whether the variables in the configuration file in which CONFIG_READONLY is set can be overridden by setting them in a higher level configuration file or at the operating system command level. You can use CONFIG_READONLY to protect the application configuration file against user changes at the operating system command level.
CONFIG_READONLY Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    TRUE   All configuration variables in the configuration file (except DBPATH and DBNAME) are read only.
                 FALSE  Configuration variables in the configuration file can be overridden by setting them in a higher level configuration file or at the operating system command level.
Default value:   FALSE
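A minimal sketch of an application configuration file that locks its own settings; the file contents and values are illustrative assumptions, not recommendations:

CONFIG_READONLY=TRUE
MAXCACHE=2000
LOGTX=TRUE

With this in place, a user who sets MAXCACHE at the operating system command level does not override the value given above.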

COREDUMP

Corrupt database detection core dump flag. If set to TRUE and a database or shared memory corruption is detected, the name of the core file produced is core.process_ID. For example, if the process 12345 detects the database corruption, the core file name is core.12345 in the directory specified by COREPATH.
COREDUMP Configuration Variable
Valid values:    TRUE   If a process detects that the database or shared memory may be corrupt, the process produces a core dump in the directory specified by the COREPATH configuration variable
                 FALSE  If a process detects that the database or shared memory may be corrupt, the process does not produce a core file
Default value:   FALSE
Additional help: COREPATH, ERRFILE configuration variable descriptions


COREMAX

Maximum number of core files to write when COREDUMP is TRUE.


COREMAX Configuration Variable
Valid values:    Any positive integer
Default value:   1
Additional help: COREDUMP, COREPATH configuration variable descriptions

COREPATH

Directory search path of the directory where the core file is to be placed if the COREDUMP configuration variable is TRUE.
COREPATH Configuration Variable
Dependencies:    Requires the correct syntax for the operating system
Valid values:    Any valid directory search path
Default value:   /tmp
Example:         /U2000/etc/tmp
Additional help: COREDUMP configuration variable description

CORESIG

Signal number for which uopndb installs dumpcor as a signal handler. If a signal is specified, signalling the process with CORESIG produces a core file in $COREPATH/core.process_ID/core. The signalled process continues to execute.
CORESIG Configuration Variable
Valid values:    Any valid signal number that can be handled by the operating system
Default value:   6
Example:         10
Additional help: COREDUMP, COREPATH configuration variable descriptions

CREATSH

Name of the custom create database utility, which is located in the directory specified by the PATH configuration variable. This utility is called by the SQL/A CREATE DATABASE statement and the RHLI uadddb function. The default utility, creatdb.sh, performs no action. You can modify the utility to include an SQL command that performs a desired task, such as creating a schema or table.
CREATSH Configuration Variable
Valid values:    Any valid executable name
Default value:   creatdb.sh
Additional help: PATH configuration variable description

CURR

An AMOUNT display format provided for compatibility purposes only; use AMTFMT or UCURRFMT instead wherever possible. Any setting for CURR overrides the setting of AMTFMT. The value you specify is 3 to 7 characters in length. The first 3 characters are required. The following table shows the values that the CURR configuration variable can contain:

Position    Description                            Allowed values
1           Thousands separator                    Comma (,), period (.), or space
2           Decimal point                          Period (.) or comma (,)
3           Digits to the right of the decimal     0 through 2
4           Currency symbol position               > to display on the right; < to display on the left
5, 6, 7     Currency symbol                        Any characters

CURR Configuration Variable
Valid values:    A 3- to 7-character string
Default value:   ,.2<$
Example:         ,1> RL (displays 100 thousand dollars as 100000,0 RL)
Additional help: AMTFMT configuration variable description

CURRSYM

Currency symbol used to display AMOUNT or CURRENCY data. If you include more than three characters, the additional characters are ignored.
CURRSYM Configuration Variable
Valid values:    A 1- to 3-character string
Default value:   $
Example:         DM
Additional help: AMTFMT, UCURRFMT configuration variable descriptions

DATEFMT

Default format in which to accept and display dates. When specifying a date format, follow these guidelines: The date format separator character can be a slash (/), a dash (-), a dot (.), or a space ( ), as shown in these examples:
DD/MM/YY DD.MM.YY DD-MM-YY DD MM YY

The month, day, and year can be specified in any order, as shown in these examples:
DD/MM/YY YY/MM/DD MM/DD/YY


The year can be specified as a two-character or four-character year, as shown in these examples:
DD/MM/YYYY YY/MM/DD

To specify the month as a number, use MM in the month portion of the date format specification. To specify a three-letter abbreviation for the month, as in May or Dec, use AAA. (The A characters must be specified in uppercase, although the month names will be displayed with initial capital letters only.) So, valid examples are:
DD/AAA/YYYY DDAAAYY DD.AAA.YYYY

If the format cannot be interpreted, then the default of MM/DD/YY is used. (ACCELL/SQL supports formats that DataServer does not support.)
DATEFMT Configuration Variable
Valid values:    A combination of the letters D (day), M (numeric month) or A (alphabetic month), and Y (year) format characters, plus a separator character. The format template must be enclosed in quotation marks ("format_template").
Default value:   MM/DD/YY
Example:         See the preceding bulleted list
Additional help: print statement description in Unify DataServer: Creating Reports with RPT Report Writer
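For example, a configuration file that accepts and displays dates with an abbreviated month name and a four-digit year might contain the following line; the choice of format is illustrative only:

DATEFMT="DD/AAA/YYYY"

With this setting, a date such as 1 May 2005 would be accepted and displayed in a form like 01/May/2005.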

DATNULLCH

Null display character for DATE data. If DATNULLCH is set to *, a null date displays as ********. The character specified in DATNULLCH overrides NULLCH for DATE data.
DATNULLCH Configuration Variable
Valid values:    Any printable character enclosed in quotation marks ("character")
Default value:   *
Example:         #
Additional help: NULLCH configuration variable description

DBCHARSET

The locale name. This read-only configuration variable specifies the character set of the database.
DBCHARSET Configuration Variable
Valid values:    0   ANSI, ISO8859X (all single byte character sets)
                 1   Japanese SJIS
                 2   Japanese EUC
                 3   Korean EUC
                 4   Simplified Chinese (EUC-CN, GBK)
                 5   Traditional Chinese (BIG-5)
                 6   UTF8
                 7   Traditional Chinese (EUC-TW)
Default value:   0
Additional help: LANG configuration variable description


DBHOST

Network node name that is used to remotely log in to the database machine. DBHOST is part of a fully-qualified database name in the following format:

[[dbhost]:[dbuser]:][dbpath][dbname]

where dbhost is the database machine (DBHOST part), dbuser is the user identity (DBUSER part), dbpath is the database path (DBPATH part), and dbname is the database name (DBNAME part).

The separator character between the database machine name (dbhost) and the user identity (dbuser), and between the user identity and the database path (dbpath), is a colon (:). The separator character must be part of the fully-qualified database name, even if you omit a portion of the name. If you do omit any portion of the name, the missing value is retrieved from the appropriate configuration variable. If you do not include the separators in the fully-qualified name, the database machine name and user identity are assumed to be missing. If DBHOST is not set, or is set to "." or the empty string, local database access facilities are used. Otherwise, remote database access facilities are used for the machine identified by the database machine name, even if it is on the same machine as the client. That is, you can use remote access facilities on the local database by specifying the client machine as the database machine, either in the DBHOST variable or in the dbhost part of the fully-qualified database name. For example, if you always want to use the user name duane with encrypted password xxx, but still let DBHOST determine whether the database is local or remote, you can pass this qualified database name:

:duane/xxx:/usr/local/lib/file.db

If you want to always use the user's current user name with encrypted password xxx and let DBHOST determine the local or remote access, the database name has this form:
:/xxx:/usr/local/lib/file.db

If you are accessing a remote database, you must set DBPATH to an absolute path name, because DBPATH cannot be a relative directory such as . (the current directory) or ../db. If you are accessing the local database and you do not specify the database path name in either the fully-qualified database name or the DBPATH configuration variable, the path name defaults to the current directory.
DBHOST Configuration Variable
Valid values:    Any valid machine name, or . for local access
Default value:   . (local database access facilities are used)
Example:         dbrus
Additional help: DBNAME, DBPATH, DBUSER configuration variable descriptions; Unify/Net Guide
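A minimal sketch of the configuration variables a client might set for remote access; the host name, path, and file name below are illustrative assumptions:

DBHOST=dbrus
DBPATH=/usr/local/lib
DBNAME=file.db

Under the format described above, these settings correspond to the fully-qualified name dbrus::/usr/local/lib/file.db, with the user identity taken from DBUSER.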

DBNAME

Name of the database root file, excluding the directory search path. The base of the specified name (the portion preceding the suffix) is used to build the database configuration file name. For example, if DBNAME is set to accts.db, the application configuration file is named accts.cf. For remote access, DBNAME is part of a fully-qualified name. See DBHOST on page 41. Tip Always set DBPATH and DBNAME to the correct values. These configuration variables are used to supply defaults to many Unify DataServer processes and utilities, such as database cleanup, syncdb, and SQL.


DBNAME Configuration Variable
Valid values:    A simple file name; the specified name cannot contain slashes (/) or backslashes (\)
Default value:   file.db
Example:         accts.db
Additional help: DBHOST, DBPATH, DBUSER configuration variable descriptions; Unify/Net Guide

DBPATH

Directory search path, excluding the file name, for the database root file (file.db) and associated files such as variable-length text and binary files (.dbv). This variable is used with DBNAME to find the database files, such as the compiled application configuration file, for example, file.cfg. The application runs faster if DBPATH is set to an absolute path, such as /usr/DB, instead of a relative path, such as . or DB. The value specified should not contain spaces. The directory specified by DBPATH can contain only one database (only one dbname_file.db). This is because each database requires its own errlog file (linked to dbname.err) and its own B-tree index files (named by convention as bt*.idx). For remote access, DBPATH is part of a fully-qualified name. See DBHOST on page 41. Tip Always set DBPATH and DBNAME to the correct values. These configuration variables are used to supply defaults to many Unify DataServer processes and utilities, such as database cleanup, syncdb, and SQL.


DBPATH Configuration Variable
Valid values:    Any valid directory search path specification
Default value:   .
Example:         /usr/DB
Additional help: DBHOST, DBNAME, DBUSER configuration variable descriptions; Unify/Net Guide

DBSFILE

Name, usually in the form dbname.dbs, of the file that contains the database state information such as up or down. If no path name is specified, the file location is the directory specified by the DBPATH configuration variable.
DBSFILE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any valid file name, optionally preceded by a path name
Default value:   file.dbs (if DBNAME is set to file.db)
Example:         accts.dbs
Additional help: DBNAME configuration variable description

DBSHMKEY

Key that identifies the segment of shared memory that is used by the database manager. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243.


Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
DBSHMKEY Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    0 through 0x7fffffff
Default value:   Value of SHMKEY, for example, 6904
Example:         0x1cc123
Additional help: SHMKEY configuration variable description

DBUSER

User name and encrypted password. The ucrypt utility can be used to initialize DBUSER. (The ucrypt utility is not the same as the UNIX crypt utility.) The ucrypt utility interactively prompts the user for the password, which is read from stdin. For example:

echo "Please enter password"
DBUSER=$name/`ucrypt`

The quote symbol used in the DBUSER initialization syntax is the backquote (`).


For remote access, DBUSER is part of a fully-qualified name. See DBHOST on page 41.
DBUSER Configuration Variable
Dependencies:    If specified, the password must be in encrypted form
Valid values:    Any valid user name and password in the format user_name/encrypted_password. If both are specified, the slash character (/) must separate the user name and password. If the password is not required, the slash character is optional.
Default value:   The current user name with no password
Example:         $name/`ucrypt` (if the user name on the server machine is stored in the name shell variable)
Additional help: DBHOST, DBNAME, DBPATH configuration variable descriptions; ucrypt utility description; Unify/Net Guide
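A minimal Bourne shell sketch of the initialization described above; the user name and the export line are illustrative assumptions about the surrounding script, not requirements stated here:

name=duane
echo "Please enter password"
DBUSER=$name/`ucrypt`
export DBUSER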

DBVFILE

Name, usually in the form dbname.dbv, of the file that contains variable-length database columns. If no path name is specified, the file location is the directory specified by the DBPATH configuration variable. Unify DataServer stores variable-length TEXT and BINARY columns in a separate file, not in the database file, dbname.db.
DBVFILE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any valid file name, optionally preceded by a path name
Default value:   file.dbv (if DBNAME is set to file.db)
Example:         accts.dbv
Additional help: DBNAME configuration variable description


DDBTCHNKS

Number of B-tree information structures to be allocated at a time in shared memory. For example, if the value is 20, enough shared memory is allocated for 20 B-tree structures the first time that a B-tree in that chunk is referenced. The first chunk contains information structures for B-trees with IDs from 1 to 20 (the value of DDBTCHNKS). As B-trees are referenced with higher IDs, the appropriate chunk is allocated. In this example, the second chunk contains information structures for B-trees with IDs from 21 to 40, the third chunk, information structures for B-trees with IDs from 41 to 60, and so on. Too low a value causes many separate allocations and possible shared memory fragmentation, while too large a value causes large chunks of potentially unused shared memory to be allocated.
DDBTCHNKS Configuration Variable
Valid values:    A positive integer
Default value:   10
Example:         20
Additional help: DDCOLCHNKS, DDHSHCHNKS, DDLNKCHNKS, DDTBLCHNKS, and DDSHMKEY configuration variable descriptions

DDCOLCHNKS

Number of column descriptors to be allocated at a time in shared memory. For example, if the value is 50, enough shared memory is allocated for 50 column descriptors the first time that a column in that chunk is referenced. The first chunk contains column information structures for columns with IDs from 1 to 50 (the value of DDCOLCHNKS). As columns are referenced with higher IDs, the appropriate chunk is allocated. In this example, the second chunk contains information structures for columns with IDs from 51 to 100, the third chunk, information structures for columns with IDs from 101 to 150, and so on. Too low a value causes many separate allocations and possible shared memory fragmentation, while too large a value causes large chunks of potentially unused shared memory to be allocated.

Performance
If your database has a large number of columns, and you will not be adding new columns, set DDCOLCHNKS to the number of the highest column ID in the database for optimal performance. (To determine the highest column ID, run the schlst utility.) Note, however, that adding more columns later can result in a lot of wasted shared memory when a new large chunk is allocated for just a few column information structures.
DDCOLCHNKS Configuration Variable
Valid values:    A positive integer
Default value:   10
Example:         50
Additional help: DDBTCHNKS, DDHSHCHNKS, DDLNKCHNKS, DDTBLCHNKS, and DDSHMKEY configuration variable descriptions
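As a sketch of the tuning advice above, suppose schlst reports a highest column ID of 240 for a schema that is not expected to grow; the figure is an illustrative assumption. The configuration file entry would then be:

DDCOLCHNKS=240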

DDHSHCHNKS

Number of hash table information structures to be allocated at a time in shared memory. For example, if the value is 20, enough shared memory is allocated for 20 hash table information structures the first time that a hash table in that chunk is referenced. The first chunk contains information structures for hash tables with IDs from 1 to 20 (the value of DDHSHCHNKS). As hash tables are referenced with higher IDs, the appropriate chunk is allocated. In this example, the second chunk contains information structures for hash tables with IDs from 21 to 40, the third chunk, information structures for hash tables with IDs from 41 to 60, and so on. Too low a value causes many separate allocations and possible shared memory fragmentation, while too large a value causes large chunks of potentially unused shared memory to be allocated.
DDHSHCHNKS Configuration Variable
Valid values:    A positive integer
Default value:   10
Example:         20
Additional help: DDBTCHNKS, DDCOLCHNKS, DDLNKCHNKS, DDTBLCHNKS, and DDSHMKEY configuration variable descriptions


DDLNKCHNKS

Number of link information structures to be allocated at a time in shared memory. For example, if the value is 10, enough shared memory is allocated for 10 link information structures the first time that a link in that chunk is referenced. The first chunk contains information structures for links with IDs from 1 to 10 (the value of DDLNKCHNKS). As links are referenced with higher IDs, the appropriate chunk is allocated. In this example, the second chunk contains information structures for links with IDs from 11 to 20, the third chunk, information structures for links with IDs from 21 to 30, and so on. Too low a value causes many separate allocations and possible shared memory fragmentation, while too large a value causes large chunks of potentially unused shared memory to be allocated.
DDLNKCHNKS Configuration Variable
Valid values:    A positive integer
Default value:   10
Example:         20
Additional help: DDBTCHNKS, DDCOLCHNKS, DDHSHCHNKS, DDTBLCHNKS, and DDSHMKEY configuration variable descriptions

DDSHMKEY

Data dictionary shared memory key that specifies the shared memory segment in which the new data dictionary partition is to be created. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.


DDSHMKEY Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    0 through 0x7fffffff
Default value:   Value of SHMKEY, for example, 6904
Example:         0x1cc123
Additional help: SHMKEY, SHMMAX, SHMFULL configuration variable descriptions; lmshow utility description

DDTBLCHNKS

Number of table information structures to be allocated at a time in shared memory. For example, if the value is 10, enough shared memory is allocated for 10 table information structures the first time that a table in that chunk is referenced. The first chunk contains table information structures for tables with IDs from 1 to 10 (the value of DDTBLCHNKS). As tables are referenced with higher IDs, the appropriate chunk is allocated. In this example, the second chunk contains information for tables with IDs from 11 to 20, the third chunk, information for tables with IDs from 21 to 30, and so on. Too low a value causes many separate allocations and possible shared memory fragmentation, while too large a value causes large chunks of potentially unused shared memory to be allocated.
DDTBLCHNKS Configuration Variable
Valid values:    A positive integer
Default value:   10
Example:         15
Additional help: DDCOLCHNKS and DDSHMKEY configuration variable descriptions


DISFILE

Name, usually in the form dbname.dis, of the Data Integrity Subsystem attributes file (the compiled DIS source file).
DISFILE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any valid file name
Default value:   file.dis (if DBNAME is set to file.db)
Example:         accts.dis
Additional help: DBNAME configuration variable description

DMNNICE

The nice priority level at which daemons are started in the background (the default value of 0 corresponds to the default operating system nice value). It is recommended that you set this to a negative value so that the daemons are given a higher priority than other processes to perform their tasks.
DMNNICE Configuration Variable Dependencies: Valid values: Default value: Example: Additional help: Can be set only from a configuration file Any integer 0
-10 25

Operating system manuals

DMNTMP

Directory search path for the daemon log file. The log file contains messages from the database daemons. The log file name is dmnlogpid, where pid is the daemon's process ID.
DMNTMP Configuration Variable
Valid values:    Any valid directory search path specification
Default value:   /tmp
Example:         /ASQL/etc/tmp
Additional help: Unify DataServer: Managing a Database



EDIT

Name of a text editor or word processor to be used to edit RPT or SQL/A scripts. The specified text editor or word processor is used when you enter the EDIT command in Interactive SQL/A.
EDIT Configuration Variable
Dependencies:    Must be set at the operating system command level
Valid values:    Any valid text editor or word processor file name, optionally preceded by a directory search path specification
Default value:   ed
Example:         /usr/bin/vi

ERRFILE

Name, usually in the form dbname.err, of the error file that is used by Unify DataServer to report errors. This file is linked to the errlog file.
ERRFILE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any valid file name
Default value:   file.err (if DBNAME is set to file.db)
Example:         accts.err
Additional help: DBNAME configuration variable description


FLTFMT

Default format template to be used to display FLOAT data.


FLTFMT Configuration Variable
Valid values:    Any valid FLOAT print format that is recognized by the SQL/A DISPLAY clause, ACCELL/SQL, irs, RPT, or the C language printf() function format. The value must be enclosed in double quotation marks ("print_format").
Default value:   %g
Example:         "###,##&.&&"
Additional help: print statement description in Unify DataServer: Writing Reports with RPT Report Writer; the FLTFMT31 configuration variable

FLTFMT31

For compatibility purposes, the FLTFMT31 configuration variable restores the following release 3.1 behavior to SQL:

FLOAT values will be left justified if FLTFMT is defined (%g by default).

If FLTFMT is defined, it will override any non-default DISPLAY configuration specified for the column when the table was created.

FLTFMT31 Configuration Variable
Valid values:    TRUE   The release 3.1 behavior is restored.
                 FALSE  The release 3.1 behavior is not restored.
Default value:   FALSE

FLTNULLCH

Null display character for FLOAT data. The character specified in FLTNULLCH overrides NULLCH for FLOAT data.
FLTNULLCH Configuration Variable
Valid values:    Any printable character enclosed in quotation marks ("character")
Default value:   *
Example:         #
Additional help: NULLCH configuration variable description

FMMINFRE

File manager minimum percentage of shared memory free space. If the percentage of free space in shared memory falls below this percentage, the file manager performs cleanup tasks.
FMMINFRE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    0 through 100
Default value:   10
Additional help: Unify DataServer: Managing a Database

FMNAP

File manager nap time in hundredths of a second. The file manager naps while waiting for updates at a sync point. When updates are suspended, and a user tries to execute an RHLI routine, the file manager sleeps for the specified time before determining whether the data is locked. To use FMNAP, determine how much CPU is available during synchronizations. Increase the FMNAP value if too much CPU is used at the beginning and end of the synchronization and updates are suspended for a long time.
FMNAP Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    0 through 100
Default value:   30
Additional help: Unify DataServer: Managing a Database



FMOPNMODE

File manager physical log and database volume opening mode. If FMOPNMODE is set to 0 or 2 to open the database volumes, transaction log, physical log and B-tree files asynchronously, Unify DataServer continues processing without waiting for the write to complete. In this case, if a crash occurred before a sync point, you would have no guarantee that the write made it to the physical log. If FMOPNMODE is set to 1 to open the database volumes, transaction log, physical log, and B-tree files synchronously, Unify DataServer waits for the write to complete before continuing processing. Tip To ensure that the database can be recovered in the event of a system crash, use synchronous (1) mode. Use asynchronous (0) mode only when database performance is more essential than quick recovery from system crashes. If a crash were to occur while running asynchronously, you might have to recover the database by using the latest backup.
FMOPNMODE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    0   Open the physical log, database volumes, and B-trees asynchronously
                 1   Open the physical log, database volumes, and B-trees synchronously
                 2   Open the physical log, database volumes, and B-trees asynchronously, but a UNIX sync is performed after each database sync point
Default value:   0
Additional help: LOGOPNMODE configuration variable description


FMSHMKEY

Key that identifies the segment of shared memory used by the file manager. After the shared memory manager, the file manager and lock manager are the next most active. If no keys are specified for the file manager and lock manager, they use a partition in the shared memory manager segment. Assigning the file manager and lock manager their own segments improves concurrency and minimizes shared memory fragmentation, which in turn helps performance. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
FMSHMKEY Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    0 through 0x7fffffff
Default value:   Value of SHMKEY, for example, 6904
Additional help: SHMKEY configuration variable description

FND1STFASTFAC

B-tree find-first-row-fast slant factor. This configuration variable causes the query optimizer to sort first and then select rows if that is more cost efficient than selecting first and then sorting. If Unify DataServer sorts first (uses a B-tree that matches the query sort order) and then selects the rows, it retrieves the first row in the sort order almost immediately. If Unify DataServer selects first and then sorts, it must select all the rows and then sort them before retrieving the first row in the sort order.


The slant toward using a matching B-tree is based on the following relationship between FND1STFASTFAC and BTREECUTOFF:

($FND1STFASTFAC / 100) * $BTREECUTOFF

Setting FND1STFASTFAC to a positive non-zero number causes the optimizer to slant toward using a B-tree that matches the sorting criteria. For example, if a higher value such as 200 is specified, the slant toward using a matching B-tree is

2 * $BTREECUTOFF

For a FND1STFASTFAC value of 100, the slant is

1 * $BTREECUTOFF

Setting the variable to a negative number causes the optimizer to slant less toward using a B-tree that matches the sorting criteria. The lower the value, the less likely that such a B-tree will be used. For example, if a more negative value such as -200 is specified, the slant toward using a matching B-tree is

$BTREECUTOFF / 2

For a FND1STFASTFAC value of -100, the slant is

$BTREECUTOFF / 1

The default value of 0 indicates that no B-tree is more likely to be used than any other.

FND1STFASTFAC Configuration Variable
Valid values:    Positive or negative integer values, or zero (0)
Default value:   0
Example:         -100
Additional help: BTREECUTOFF, HISTINTRVL, and SORTCOST configuration variable descriptions; Analyzing Access Method Performance in Unify DataServer: Managing a Database
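A worked example of the relationship above, using an assumed BTREECUTOFF of 25 percent (the figure is illustrative only):

FND1STFASTFAC=200    gives a slant of (200 / 100) * 25 = 50
FND1STFASTFAC=-200   gives a slant of 25 / 2 = 12.5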


FREQTYPE

Unit of measurement for determining the frequency of sync points: operations, hours, minutes, or seconds. An operation is defined to be any database operation that modifies the database. Examples are updating, inserting, or deleting a row. Selecting a row is not a database operation, since it does not modify the database. If the normal flow of transactions into the database follows a uniform distribution, set FREQTYPE to hours, minutes, or seconds. If the transactions occur in bursts, set FREQTYPE to operations. If operations is specified, then set the FREQUENCY configuration variable to 50% of the LOGBLK configuration variable value. This causes a sync point to occur when the log is 50% full. Setting FREQUENCY to 50% of the LOGBLK value ensures that if little database activity occurs over time, no unnecessary database synchronizations will occur. This minimizes the impact on the rest of the processes running on the system. Also, if a large burst of transactions should occur, the database will synchronize frequently enough to keep the log file from getting filled and suspending any processes. The FREQUENCY configuration variable can be manipulated and monitored over time to determine the best value for the situation. For a large enough transaction log, the FREQUENCY can be made very high (for example, 80-90%) and still leave enough space in the transaction log.
FREQTYPE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    operations, hours, minutes, seconds
Default value:   minutes
Additional help: FREQUENCY, LOGBLK, TXLOGFULL configuration variable descriptions
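A sketch of the burst-oriented setting described above, using an assumed transaction log of 1000 blocks (the LOGBLK figure is illustrative; 500 is 50% of it):

FREQTYPE=operations
FREQUENCY=500
LOGBLK=1000

With these settings, a sync point occurs after 500 database-modifying operations, when the log is roughly half full.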


FREQUENCY

Number of measurement units between sync points. The measurement units can be operations, hours, minutes, or seconds, depending on the value of FREQTYPE.
FREQUENCY Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any positive integer
Default value:   30
Additional help: FREQTYPE configuration variable description

HISTINTRVL

B-tree histogram sampling period. This variable determines the number of rows to skip between samples. The samples are used to build an in-memory table similar to a histogram. These tables are used by the query optimizer to determine the most efficient B-tree to use when more than one B-tree is defined for a large table or when a B-tree is being compared to another access method. If initial scans, or queries, on a table with multiple B-trees seem to be taking too long or consuming too much memory, increase the setting of HISTINTRVL. If the B-trees that are chosen are not the best ones to use, decrease the value. If the value is very low, for example, 50 or 100, and the B-tree is very large, the default value will be used when not enough memory remains for such a large table. In this case, the error log shows a not enough memory error. Performance For fast initial scans, set HISTINTRVL to a large value, such as 1000000.
HISTINTRVL Configuration Variable
Valid values:    Any positive integer
Default value:   5000
Example:         4800
Additional help: FND1STFASTFAC and SORTCOST configuration variable descriptions; Analyzing Access Method Performance in Unify DataServer: Managing a Database


HSHCHKUNIQ

Hash table uniqueness checking. When you create a hash table, all of the indexed columns must have unique values. If the column is guaranteed to be unique, you can specify that the column values are not checked for uniqueness by Unify DataServer when the hash table is created. This usually improves the performance of creating hash tables. Columns are guaranteed to be unique when the column has any of these characteristics:

The column is a primary key
The column contains a unique attribute
The column has a unique B-tree index on it
The column is a direct access column

HSHCHKUNIQ Configuration Variable
Valid values:    FALSE, 0   Hash index columns are not checked for uniqueness if either another suitable unique access method or another constraint already guarantees the uniqueness of the row.
                 TRUE, 1    Hash index columns are checked for uniqueness.
                 2          Hash index columns are never checked for uniqueness, even if there is no prior access method that can be used to enforce uniqueness. This setting can result in corrupt hash indexes if there are duplicate rows in a table. Therefore, do not use 2 for HSHCHKUNIQ unless you are absolutely sure there are no duplicate rows in the table.
Default value:   FALSE


INITMEM

Initial memory requirements in bytes for RHLI. Use INITMEM to reduce the number of calls to sbrk() if you are using a non-BSD type malloc() that will break a big piece of memory in the malloc arena into smaller pieces.
INITMEM Configuration Variable
Valid values:    Any positive integer; a recommended value is the size of your compiled-schema file (dbname.sch)
Default value:   0
Example:         50k

IXSHMKEY

Key that identifies the segment of shared memory that is used for index performance. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
IXSHMKEY Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any positive integer
Default value:   Value of SHMKEY, for example, 6904
Additional help: SHMKEY configuration variable description


JOURNAL

Recovery journal file name and information, specified as a five-part comma-separated list of keywords and values. The backup device can be a diskette drive, a hard disk, a cartridge tape drive, or a 9-track tape drive. Entries to this file are automatic only if backup is also automatic. Automatic journal management requires a nonmountable device.
JOURNAL Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    A comma-separated list of information about the journal device, specified in this format: DEVNM=value,BLKSZ=value,MAXBLK=value,TYPE=value (see the following table, JOURNAL Value Components)
Default value:   DEVNM=file.jn,BLKSZ=32k,MAXBLK=0,TYPE=nomount
Additional help: BUDEV, JOURNAL2, LOGARCHIVE configuration variable descriptions


The following table describes the components of the JOURNAL configuration variable.

JOURNAL Value Components

DEVNM      Name of the journal device; with TYPE set to auto, the program adds a serial-number extension, followed by a pair of two-digit numbers (for the journal and volume numbers). Default: dbname.jn. Example: file.jn.
BLKSZ      Size in bytes of the blocks read from or written to the journal device; must be at least 16k. Default: 32k. Example: 32k.
MAXBLK (1) Number of blocks that can be read from or written to a volume mounted on the journal device. Default: if not set or set to 0, blocks are written until the write fails. Example: 0.
TYPE       Type of journal device used: mount (a mountable device such as a tape or floppy disk), nomount (a non-mountable device such as a hard disk), or auto (automatic journal file management; requires a nomount device). Default: nomount. Example: mount.

(1) On some systems, writing to a tape fails if the end of the tape is reached during the write. If you are using a system that does not use the EOT (end-of-tape) feature, you must set the MAXBLK value so that the end of tape is never reached. Set MAXBLK to a number of blocks that all tapes can hold.
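A sketch of a configuration file entry that uses automatic journal management on a hard disk; the directory and file name are illustrative assumptions:

JOURNAL=DEVNM=/backups/accts.jn,BLKSZ=32k,MAXBLK=0,TYPE=auto

Because TYPE is auto, serial and volume numbers are appended to /backups/accts.jn as journals are created, as described in the table above.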


JOURNAL2

File name and information, specified as a five-part comma-separated list of keywords and values, for the secondary recovery journal. Automatic journal management requires a non-mountable device.
JOURNAL2 Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    A comma-separated list of information about the journal device, specified in this format: DEVNM=value,BLKSZ=value,MAXBLK=value,TYPE=value (see the following table, JOURNAL2 Value Components)
Default value:   DEVNM=file.j2,BLKSZ=32k,MAXBLK=0,TYPE=nomount
Additional help: BUDEV, JOURNAL, LOGARCHIVE2 configuration variable descriptions

The following table describes the components of the JOURNAL2 configuration variable.

JOURNAL2 Value Components

DEVNM      Name of the journal device; with TYPE set to auto, the program adds a serial-number extension, followed by a pair of two-digit numbers (for the journal and volume numbers) that indicates the position in the backup image. Default: dbname.j2. Example: file.j2.
BLKSZ      Size in bytes of the blocks read from or written to the journal device; must be at least 16k. Default: 32k. Example: 4096.
MAXBLK (1) Number of blocks that can be read from or written to a volume mounted on the journal device. Default: if not set or set to 0, blocks are written until the write fails. Example: 0.
TYPE       Type of journal device used: mount (a mountable device such as a tape or floppy disk), nomount (a non-mountable device such as a hard disk), or auto (automatic journal file management; requires a nomount device). Default: nomount. Example: auto.

(1) On some systems, writing to a tape fails if the end of the tape is reached during the write. If you are using a system that does not use the EOT (end-of-tape) feature, you must set the MAXBLK value so that the end of tape is never reached. Set MAXBLK to a number of blocks that all tapes can hold.

L0FORMAT

Controls the use of formatting variables, including WF2DIGITYEARS for dates, when outputting data with a LINES 0 statement that does not include a WITH [NO] FORMAT clause. If set to TRUE, the formatting defined by the applicable configuration variables will be honored. If set to FALSE, the formatting configuration variable settings will be ignored. If the LINES statement is set to a value greater than 0, L0FORMAT has no effect.

L0FORMAT Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    TRUE   Formatting will occur
                 FALSE  Formatting will not occur
Default value:   FALSE
Additional help: LINES SQL statement


LANG

The collating sequence locale name. The collating sequence locale name also determines the character set used to build the database. The locale name that you specify is dependent on your operating system. Some common multilingual Solaris 8 locale names are:
en_US.UTF8 C de_DE.ISO88591 en_US en_US.ISO88591 fr_FR.ISO88591 ja_JP.eucJP zh_CN.EUC zh.GBK zh_CN.GBK zh.UTF8

On UNIX, you can display the available locale names for your host by using the locale -a command. If you set LANG to a UNICODE value, you can use non-alphanumeric characters in database object names, such as table names. The ASCII symbols (* % ! @ |, and so on) are still restricted from database object names. This configuration variable can be set in the database configuration file (dbname.cf) or in the environment. If it is set in both, the setting in dbname.cf is used.

LANG Configuration Variable
Valid values:    Any valid locale name on your host
Default value:   C (for the local ASCII collating sequence)
Example:         de_DE.ISO8859-1

Each database is created based on the current locale as specified by the LANG configuration variable. To access the database, you must have the LANG configuration variable set to the value in effect when the database was created.

LANGDIR

The current language library directory name in $UNIFY. This directory contains any localized message files. English and Japanese message files are available by default. You can create a directory under $UNIFY that contains your application's localized files.

LANGDIR Configuration Variable
Valid values:    Any valid directory name under $UNIFY
Default value:

LINKCUTOFF

The link index access method cutoff percentage. The cutoff value determines the percentage of total rows that can be selected by the link access method. If the percentage of selected rows is above the specified value, sequential access is used instead of the link.
LINKCUTOFF Configuration Variable
Valid values:    Integer from 0 through 100
Default value:   25
Example:         40

LMBTSPIN

Number of times to try to acquire a B-tree latch. B-tree inserts, deletes, updates, and searches place this type of latch.

LMBTSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description


LMCASPIN

Number of times to try to acquire a compatibility archives latch. Compatibility archives are used when running UNIFY 4.0 or 5.0 CHLI applications with Unify DataServer. The CHLI starttrans function places this type of latch.

LMCASPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description

LMCDSPIN

Number of times to try to acquire a column latch. Name creation and binding place this type of latch.
LMCDSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description

LMCNAP

Lock manager critical nap time in hundredths of a second. The lock manager naps while waiting to acquire a latch on a B-tree, compatibility archive, column, hash table, link index, segmentation map, variable-length data buffer, or volume. An individual nap length cannot be specified for these objects. To set LMCNAP, you need to estimate how much time is likely to be spent waiting for the object with conflicts to be processed, and then experiment.
LMCNAP Configuration Variable
Valid values:    Any positive integer
Default value:   25
Example:         10
Additional help: LMCSPIN configuration variable description


LMCSPIN

Default number of times to try gaining a latch before napping. This value is used only if more specific latch values have not been set for B-trees, hash tables, links, and so on by the appropriate configuration variable (see Additional help below).

LMCSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   1
Example:         10
Additional help: LMCDSPIN, LMBTSPIN, LMCASPIN, LMHTSPIN, LMLKSPIN, LMSMSPIN, LMVDSPIN, LMVLSPIN configuration variable descriptions

LMDNAP

Lock manager nap time in hundredths of a second. The lock manager naps while waiting to acquire a lock on a database.
LMDNAP Configuration Variable
Valid values:    Any positive integer
Default value:   300
Example:         400
Additional help: Unify DataServer: Managing a Database

LMHTSPIN

Number of times to try to acquire a hash table index latch. Hash table inserts, deletes, updates, and searches place this type of latch.
LMHTSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description


LMLKSPIN

Number of times to try to acquire a link index latch. Link inserts, deletes, updates, and searches place this type of latch.
LMLKSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description

LMLOCKDIR

Directory where Unify DataServer creates the named pipe or socket files. This variable is used only on systems that have named pipes or sockets instead of shared memory. If the specified directory does not exist, the named pipes are stored in the directory specified in DBPATH.
LMLOCKDIR Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any valid directory search path specification
Default value:   /tmp
Example:         accts/tmp
Additional help: DBPATH configuration variable description

LMNRETRY

For compatibility purposes only. Do not change the default value.


LMNRETRY Configuration Variable
Valid values:    Any positive integer
Default value:   0


LMPROMO

Number of locks that a transaction can acquire before Unify DataServer promotes the locks to a higher level lock. For example, if LMPROMO is set to 50, and 50 row-level locks have been acquired for a table in the current transaction, at the next lock request for a row in that table, the table will be locked. Setting LMPROMO to a large value increases multi-user access ability, but uses more shared memory. Setting LMPROMO to a small value decreases multi-user access ability, but uses memory more efficiently.
LMPROMO Configuration Variable
Valid values:    Any positive integer; if you do not want locks to be promoted, set to 0
Default value:   100
Example:         0

LMRNAP

Lock manager nap time in hundredths of a second. The lock manager naps while waiting to acquire a lock on a row.
LMRNAP Configuration Variable
Valid values:    Any positive integer
Default value:   50

LMROWLOCKHASH

Specifies how many row lock hash entries are created in shared memory per table and creates a shared memory array for each table that has a lock. Each table containing a lock will use LMROWLOCKHASH*4 bytes of shared memory; the default value of 101 therefore uses 404 bytes. This is suitable for approximately 10,000 locks per table. For configurations that have many locks per table and enough shared memory, LMROWLOCKHASH can be significantly increased. The only penalty is increased shared memory consumption in the Lock Manager partition. When a table has more locks than the value of LMROWLOCKHASH, the system is required to perform a sequential scan looking for a specific lock. When the number of locks exceeds approximately 100 times the value of LMROWLOCKHASH, the delay can be noticeable.


LMROWLOCKHASH Configuration Variable
Valid values:    The number is rounded to the next prime number if you enter a nonprime number; the default value is used automatically if you enter a value lower than 5
Default value:   101
Example:         131
Additional help: CMDENSITY, CMSHMKEY, and SHMKEY configuration variable descriptions; shmmap, ukill utility descriptions
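As a sketch of the sizing arithmetic above (the chosen value is illustrative only): setting LMROWLOCKHASH to the prime 1009 makes each locked table consume 1009 * 4 = 4036 bytes of Lock Manager shared memory, and keeps lock lookups fast up to roughly 100 * 1009, or about 100,000, locks per table.

LMROWLOCKHASH=1009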

LMSHMKEY

Key that identifies the segment of shared memory used by the lock manager. After the shared memory manager, this and the file manager are the next most active. If no key is specified for the lock manager, it uses a partition in the shared memory manager's segment. Assigning the lock manager its own segment helps improve concurrency and minimize shared memory fragmentation, which in turn helps performance. To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.

LMSHMKEY Configuration Variable
Dependencies:    Can only be set from a configuration file
Valid values:    0 through 0x7fffffff
Default value:   Value of SHMKEY, for example, 6904
Example:         0x1cc123
Additional help: SHMKEY, FMSHMKEY configuration variable descriptions


LMSMSPIN

Number of times to try to acquire a segmentation map latch. Insertion and deletion of rows, and retrieval of the number of rows place this type of latch.
LMSMSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description

LMTNAP

Lock manager nap time in hundredths of a second. The lock manager naps while waiting to acquire a lock on a table.

LMTNAP Configuration Variable
Valid values:    Any positive integer
Default value:   150
Example:         190
Additional help: Unify DataServer: Managing a Database

LMVDSPIN

Number of times to try to acquire a BINARY or TEXT (VDATA) latch. TEXT and BINARY inserts, deletes, and searches place this type of latch.
LMVDSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description


LMVLSPIN

Number of times to try to acquire a volume latch. Schema changes, backup versioning, and DIS unique default place this type of latch.
LMVLSPIN Configuration Variable
Valid values:    Any positive integer
Default value:   Value specified by LMCSPIN
Example:         10
Additional help: LMCSPIN configuration variable description

LOGALL

Log unchanged columns. If set to TRUE, all column values of a row being updated will be contained in the transaction log file, even those columns which were not modified.
LOGALL Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    TRUE   All column values are logged for updated rows
                 FALSE  Only changed column values are logged
Default value:   FALSE
Additional help: LOGARCHIVE, LOGARCHIVE2 configuration variable descriptions

LOGARCHIVE

Current transaction journal status. LOGARCHIVE controls the archiving of committed transactions to the transaction journal specified by the JOURNAL configuration variable. To allow you to recover from a media failure, set LOGARCHIVE to TRUE. Also make sure that LOGTX is set to TRUE; LOGARCHIVE takes effect only if transaction logging is enabled.


If LOGARCHIVE is FALSE and a system or software failure occurs, the database can be restored by using the redb utility, but committed transactions cannot be rolled forward; all transactions that have been committed since the last database backup are lost if the database is restored. If the database is recovered using the irma utility, no committed transactions are lost.
LOGARCHIVE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    TRUE   Committed transactions are archived
                 FALSE  Committed transactions are not archived
Default value:   TRUE
Additional help: JOURNAL, LOGTX configuration variable descriptions

LOGARCHIVE2

Secondary transaction journal status.


LOGARCHIVE2 Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    TRUE   Committed transactions are archived to the secondary journal file
                 FALSE  Committed transactions are not archived to the secondary journal file
Default value:   FALSE
Additional help: LOGARCHIVE, JOURNAL2 configuration variable descriptions

LOGBLK

Transaction log size as number of file system blocks (pages). The mklog utility uses this value to create the transaction log. The larger the specified value, the less frequently sync points occur. The sync point frequency and log file size should be balanced so that log recycling is attempted and completed before running out of log space.


Log recycling is controlled by FREQUENCY and FREQTYPE.


LOGBLK Configuration Variable
Dependencies:    The size must be greater than the maximum number of concurrent transactions allowed (specified by MAXSYSTX). Changes to this variable are not in effect until a database create or mklog -Ooverwrite occurs. Can be set only from a configuration file.
Valid values:    Any valid number of blocks
Default value:   1000
Example:         5000
Additional help: MAXSYSTX, FREQUENCY, FREQTYPE, TXLOGFULL configuration variable descriptions; prtlghd utility; Tuning Syncpoints in Unify DataServer: Managing a Database

LOGFILE

Name of the transaction (logical) log file; usually in the form dbname.lg.
LOGFILE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    Any valid file name
Default value:   file.lg (if DBNAME is set to file.db)
Example:         /ASQL/etc/temp
Additional help: DBNAME configuration variable description


LOGFM

Physical file logging status.


LOGFM must be set to TRUE if you want to use the irma utility to recover the database in the event of a system failure. If LOGFM is set to FALSE and a database crash occurs, the database must be recovered from a backup by using redb.

LOGFM Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    TRUE   Physical logging is set on
                 FALSE  Physical logging is set off
Default value:   TRUE
Additional help: Unify DataServer: Managing a Database

LOGOPNMODE

Transaction log opening mode. If LOGOPNMODE is set to 1, Unify DataServer waits for the write to complete before continuing processing. If LOGOPNMODE is set to 0, Unify DataServer continues processing without waiting for the write to complete. When LOGOPNMODE is set to 0, if a crash were to occur before a sync point, you would have no guarantee that the write made it to the transaction log. Tip To ensure that you can recover the database if a system crash occurs, use synchronous (1) mode. Use asynchronous (0) mode only when database performance is more essential than quick recovery from system crashes. If a crash were to occur while running asynchronously, you might have to recover the database using the latest backup.

LOGOPNMODE Configuration Variable
Dependencies:    Can be set only from a configuration file
Valid values:    0   Transaction log is opened asynchronously
                 1   Transaction log is opened synchronously
Default value:   0
Additional help: FMOPNMODE configuration variable description


LOGRC

Name of the recovery log file; usually in the form dbname.rc, although a full path name can be specified. When you restore your database, Unify DataServer copies the transaction log to this file.
LOGRC Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any valid file or device name
  Default value:   file.rc (if DBNAME is set to file.db)
  Example:         accts.rc
  Additional help: DBNAME configuration variable description

LOGTX

Current transaction logging status. If LOGTX is set to FALSE, then transactions are not logged and you cannot roll back transactions. To be able to roll back incomplete transactions and recover from system crashes, set LOGTX to TRUE.
LOGTX Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    TRUE   Enable logging
                   FALSE  Disable logging
  Default value:   TRUE
  Additional help: Unify DataServer: Managing a Database

LOGUSER

User login status. This variable indicates whether each user's entry to and exit from the database should be logged in the file.err file. The file.err file is linked to the errlog file. Logging users allows you to identify the owner of a process when debugging a database problem and to identify active users if the operating system crashes.

LOGUSER Configuration Variable
  Valid values:    TRUE   Log users as they open and close the database
                   FALSE  Do not log users
  Default value:   FALSE


LONGQUERYTIME

Each time a query takes longer than LONGQUERYTIME seconds, an error log entry is generated.
LONGQUERYTIME Configuration Variable
  Valid values:    Any positive integer
  Default value:   0 (no slow queries are logged)
  Additional help: The uperf and udbsqls utilities can report query execution times.

MAXBTSCAN

Maximum number of B-tree scans that can be active at one time per user.
MAXBTSCAN Configuration Variable
  Valid values:    Any positive integer
  Default value:   20
  Example:         10

MAXBUJRNS

Maximum number of automatically managed backup and journal files. If more than the specified number of files would exist after a backup, the earliest files are removed.

MAXBUJRNS Configuration Variable
  Valid values:    Any integer, provided the disk has enough room for all backup files,
                   including the one currently being written, if any: there must be space
                   for MAXBUJRNS + 1 files
  Default value:   0 (disables automatic backup management)
  Example:         4

MAXCACHE

Maximum number of cache buffers. Initially, you should set MAXCACHE to approximately 10% of the available memory on your machine. For example, if the machine has 16 Mb of RAM, 10% is 1600 Kb, so the appropriate value is 1600 Kb divided by the page size. (Assuming a 2k page size, you would set MAXCACHE to 800; for a 512-byte page size, you would set MAXCACHE to 3200.)


Over time, you should check the cache hit ratio. If the hit ratio is low, increase the size of the cache by increasing the setting for MAXCACHE.

MAXCACHE Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   1000
  Example:         800
  Additional help: Tuning the Cache in Unify DataServer: Managing a Database
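As a sketch of the sizing guideline above (the machine size and page size are assumptions used only for illustration), a configuration file entry for a 16 Mb machine with a 2k page size might be:

# 10% of 16 Mb = 1600 Kb; 1600 Kb / 2 Kb page size = 800 buffers
MAXCACHE = 800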

MAXCOLS

For preprocessing and loading RHLI and Embedded SQL/A applications only, the maximum number of columns that can be referenced in a single .c file. The upp and uld utilities use this value.
MAXCOLS Configuration Variable
  Valid values:    Any positive integer
  Default value:   500
  Example:         800

MAXOPNBTS

Maximum number of B-tree files that can be open at one time. When MAXOPNBTS is set, its value affects B-tree performance in the following ways:

  If MAXOPNBTS is greater than 20, B-tree index searches may be faster than at the default.

  If MAXOPNBTS is less than 20, B-tree searches are slower, but more file descriptors are available for user programs.


If your application attempts to open a B-tree file once the value specified by MAXOPNBTS is reached, the oldest open B-tree file is closed and the attempted open succeeds.
MAXOPNBTS Configuration Variable
  Valid values:    Any positive integer
  Default value:   20
  Example:         10

MAXSCAN

Maximum number of RHLI scans that can be active at one time per user.
MAXSCAN Configuration Variable
  Valid values:    Any positive integer
  Default value:   50

MAXSYSTX
(Read only)

The maximum number of transactions that a database application can execute concurrently. The default maximum number of transactions per database application is 300. Your system's memory limitations set the only physical limits on MAXSYSTX.

MAXSYSTX Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   300
  Example:         90


MAXTBLS

For preprocessing and loading RHLI and Embedded SQL/A applications only, the maximum number of tables that can be referenced in a single .c file. The upp and uld utilities use this value.
MAXTBLS Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   500
  Example:         5000

MAXUSRTX

Maximum number of transactions that a user can execute concurrently, or the maximum number of cursors allowed per Embedded SQL/A application. Your system's memory limitations set the only physical limits on MAXUSRTX.

MAXUSRTX Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   10
  Example:         50

MXIAQP

Maximum number of embedded SQL/A query processors that can remain inactive after completing their task. A query processor is synonymous with an operating system process. Each time a query processor is started, an operating system process is started. When the query processor finishes processing the SQL/A query, however, the process is left active for use by the next query. If enough query processors are started at one time, all available operating system processes may become active, preventing the programmer from forking a process from the SQL/A application. Therefore, MXIAQP is used to limit the total number of active operating system processes that do not have an active query.


After an application terminates, any associated inactive query processors eventually exit also.
MXIAQP Configuration Variable
  Valid values:    Any positive integer
  Default value:   3
  Example:         10

MXOPENCURSORS

Maximum number of cursors allowed open at one time in an Embedded SQL/A application. The total number of cursors open at one time includes those defined in a DECLARE CURSOR statement (explicit cursors), as well as those created by SQL/A while executing SQL statements (implicit cursors). Implicit cursors are created for insert, select, delete, and update operations that do not have an explicit cursor associated with them. An implicit cursor is also created by the COMMIT WORK statement. You should set MXOPENCURSORS to a value that is at least two more than the number of explicit cursors used in your application. One additional value is for any implicit cursors (only one implicit cursor exists at a given time) and one value is for an internal-processing cursor that is used by Unify DataServer.
MXOPENCURSORS Configuration Variable
  Valid values:    Any positive integer greater than 2
  Default value:   10
  Example:         10
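For example, assuming a hypothetical application that declares eight explicit cursors, the guideline above gives 8 explicit cursors + 1 implicit cursor + 1 internal-processing cursor, so a configuration file entry might be:

MXOPENCURSORS = 10    # 8 explicit + 1 implicit + 1 internal cursor (illustrative values)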

MXQRYTXT

Maximum number of characters allowed in an embedded SQL/A query after host variable values are substituted.
MXQRYTXT Configuration Variable
  Valid values:    Any positive integer
  Default value:   8192
  Example:         6000


NAMECACHE

Database object name caching flag. Database object name caching improves the performance of runtime name binding and ID mapping by storing an object's current ID in local process memory. Without a name cache, Unify DataServer must retrieve object IDs from the data dictionary. Runtime name binding occurs automatically in SQL/A when a column, table, or schema name is referenced. In an RHLI application, you control when runtime name binding is performed.

Performance  If your application repeatedly accesses the same table, setting NAMECACHE to TRUE can greatly improve performance.

NAMECACHE Configuration Variable
  Valid values:    TRUE   Database object name caching is performed for all schema, table,
                          and column IDs
                   FALSE  Database object name caching is not performed
  Default value:   TRUE
  Additional help: NAMECACHEMX and NAMECACHESZ configuration variable descriptions

NAMECACHEMX

Maximum size, in bytes, of the name cache. The name cache exists in local process memory. The name cache stores the current object IDs of schemas, tables, and columns referenced in an application. If the storage of object IDs exceeds the size of the name cache, the excess object IDs must be retrieved by accessing the data dictionary.
NAMECACHEMX Configuration Variable
  Dependencies:    The value of NAMECACHEMX must be greater than the value of NAMECACHESZ
  Valid values:    Any positive integer
  Default value:   128k
  Additional help: NAMECACHE and NAMECACHESZ configuration variable descriptions

NAMECACHESZ

Initial size, in bytes, of the name cache. The name cache stores the current object IDs of schemas, tables, and columns referenced in an application.
NAMECACHESZ Configuration Variable
  Dependencies:    The value of NAMECACHESZ must be less than the value of NAMECACHEMX
  Valid values:    Any positive integer
  Default value:   28k
  Additional help: NAMECACHE and NAMECACHEMX configuration variable descriptions
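A configuration file excerpt that enables name caching and preserves the required relationship NAMECACHESZ &lt; NAMECACHEMX might look like the following (the sizes shown are illustrative, not recommendations):

NAMECACHE   = TRUE    # cache schema, table, and column IDs in local process memory
NAMECACHESZ = 28k     # initial name cache size; must be less than NAMECACHEMX
NAMECACHEMX = 128k    # maximum name cache size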

NBUBUF

Number of backup device buffers. The backup reader or writer process waits for a buffer to become full, then flushes that buffer directly to the backup device (reader) or database file (writer). On most systems, the default value for NBUBUF is sufficient. However, in some cases, you may need to increase the value to improve performance. Use the NBUBUF configuration variable with the BURDSZ, NBUPROC, and BUDEV configuration variables to improve the performance of systems where the transfer rate of the backup device exceeds the transfer rate of the budb and redb utilities.
NBUBUF Configuration Variable
  Dependencies:    The value of NBUBUF must be twice the value of NBUPROC
                   Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   4
  Additional help: BURDSZ, BUDEV, NBUPROC configuration variable descriptions


NBUCKET

The maximum number of hash table bucket cache buffers. Each buffer is just over 2k (file system block size). Setting it lower reduces the runtime memory requirements. Setting it higher means that more hash table buckets can be kept in memory during an operation. This can be important during a hash table split with long overflow chains.
NBUCKET Configuration Variable
  Valid values:    Any integer greater than 5
  Default value:   10
  Example:         30

NBUPROC

Number of reader or writer processes used for database backups and restores. A reader process is used when the budb utility backs up the database. A writer process is used when the redb utility restores a database. The set of reader and writer processes work together to fill a pool of backup device buffers, which are controlled by the NBUBUF configuration variable. On most systems, the default value for NBUPROC is sufficient. However, in some cases, you may need to increase the value to improve performance. Use the NBUPROC configuration variable with the BURDSZ, NBUBUF, and BUDEV configuration variables to improve the performance of systems where the transfer rate of the backup device exceeds the transfer rate of the budb and redb utilities.

NBUPROC Configuration Variable
  Dependencies:    The value of NBUPROC must be half the value of NBUBUF
                   Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   2
  Additional help: BURDSZ, BUDEV, NBUBUF configuration variable descriptions
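To keep the NBUBUF/NBUPROC dependency satisfied when tuning backups for a fast backup device, a configuration file excerpt might look like this (the values are illustrative only):

NBUPROC = 4    # backup reader/writer processes
NBUBUF  = 8    # backup device buffers; must be twice the value of NBUPROC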

NFUNNEL

The maximum number of global funnel locks. A global funnel lock is used on some platforms (for example, HP) to ensure the alignment of memory addresses during locking operations. When a row or table lock is desired, a larger (global) funnel lock that encompasses the row or table lock area must first be acquired. As the value of NFUNNEL increases, the likelihood of contention (which would affect performance) for a funnel lock decreases, at the expense of more shared memory use. Lower values of NFUNNEL save shared memory at the expense of more funnel lock contention. A reasonable guideline is to keep the contentions-to-successful locks ratio below .1%.
NFUNNEL Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   199
  Example:         16

NULLCH

Default null display character for all data types. You can override this variable by specifying a default null display character for each displayable data type: AMOUNT, BOOL, DATE, FLOAT, NUMERIC, STRING, TIME, and TEXT.
NULLCH Configuration Variable
  Valid values:    Any printable character enclosed in quotation marks ("character")
  Default value:   *
  Example:         #
  Additional help: AMTNULLCH, DATNULLCH, FLTNULLCH, NUMNULLCH, STRNULLCH, TIMNULLCH,
                   TXTNULLCH configuration variable descriptions


NUMFMT

Default format template to be used to display NUMERIC data.


NUMFMT Configuration Variable
  Valid values:    Any valid NUMERIC print format that is recognized by ACCELL/SQL, irs,
                   or RPT, or the C language printf() function format. The value must be
                   enclosed in quotation marks ("print_format").
  Default value:   none
  Example:         "###,##&"
  Additional help: DISPLAY_FORMAT field attribute description and DISPLAY statement syntax
                   and description in ACCELL/SQL: Script and Function Reference
                   print statement description in Unify DataServer: Writing Reports with
                   RPT Report Writer

NUMNULLCH

Null display character for NUMERIC data. The character specified in NUMNULLCH overrides NULLCH for NUMERIC data.
NUMNULLCH Configuration Variable
  Valid values:    Any printable character enclosed in quotation marks ("character")
  Default value:   *
  Example:         #
  Additional help: NULLCH configuration variable description


OPMSGDEV

Operator message device name where system and backup messages are sent by default (the database/backup device operators console); usually in the form dbname.msg.
OPMSGDEV Configuration Variable
  Dependencies:    Ignored if OPNOTIFY is set
                   Can be set only from a configuration file
  Valid values:    Any valid file name
  Default value:   file.msg (if DBNAME is set to file.db)
  Example:         acctsdb.msg
  Additional help: budb utility description
                   redb utility description
                   DBNAME, OPNOTIFY configuration variable descriptions

OPNOTIFY

Operator notify utility that sends messages to the database/backup device operator, located in $UNIFY/../bin, and to /tmp/unify.log. If OPNOTIFY is set, then OPMSGDEV is ignored.
OPNOTIFY Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any valid file name
                   To unset OPNOTIFY, specify the value undefined
  Default value:   none
                   If OPNOTIFY is not set, messages are sent directly to the operator's
                   console (specified by OPMSGDEV) every 10 minutes until the operator
                   responds.
  Example:         bunotify
  Additional help: OPMSGDEV configuration variable description


OWNERSTART

Database owner startup and shutdown flag.

OWNERSTART Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    TRUE   Only the database creator (the database owner) can use startdb
                          and shutdb
                   FALSE  Anyone can use these utilities
  Default value:   FALSE

PHYFILE

Name of the physical log file; usually in the form dbname.pl.


PHYFILE Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any valid file name
  Default value:   file.pl (if DBNAME is set to file.db)
  Example:         accts.pl
  Additional help: DBNAME, LOGFM configuration variable descriptions


PHYHASH

File manager physical log hash table size; this is the minimum number of blocks available for the physical log hash tables. Increasing PHYHASH increases shared memory requirements for the file manager partition, but it also decreases CPU time.
PHYHASH Configuration Variable
  Dependencies:    For large database files, increase the value of PHYHASH.
                   Access to hash tables is controlled by an SHMSPIN latch.
                   Can be set only from a configuration file
  Valid values:    100 to 5000
                   The value that you specify is rounded up to the nearest prime number.
  Default value:   301
  Example:         1013
  Additional help: SHMSPIN configuration variable description
                   Tuning Latches in Unify DataServer: Managing a Database

PROCESSOR

Processor ID on which all Unify DataServer executables (including custom loaded and RHLI executables) are to operate. This variable applies to loosely-coupled, non-shared memory machines only. You can only specify a single processor; PROCESSOR cannot specify multiple processors. Unify DataServer does not migrate the process to the correct processor. The user application must migrate the process to the correct processor by using the migrate utility. However, Unify DataServer can validate that the process is running on the correct processor, if the PROCESSOR configuration variable is set. If the process is not running on the correct processor, an error code or message is issued.

Warning  Changing the PROCESSOR variable while the database is running can corrupt the database. Shut down the database before changing the PROCESSOR configuration variable.


PROCESSOR Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any valid processor ID   Executables execute under the machine-specific
                                            processor ID
                   -1                       Executables can execute on any available
                                            processor without restriction
  Default value:   -1
  Example:         3409

RADIXSEP

Radix separator character to be used when displaying AMOUNT, CURRENCY, or FLOAT data. Valid only when using a display template.
RADIXSEP Configuration Variable
  Dependencies:    Used only with the AMTFMT format template
  Valid values:    Any printable character used as a radix separator, enclosed in
                   quotation marks: "radix_separator"
  Default value:   . (period)
  Example:         , (comma)
  Additional help: AMTFMT and UCURRFMT configuration variable descriptions

RALFILE

Name of the recovery audit log file; usually in the form dbname.ral.
RALFILE Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any valid file name
  Default value:   file.ral (if DBNAME is set to file.db)
  Example:         accts.ral
  Additional help: DBNAME configuration variable description


REPTMAXMEM

Maximum size of the internal link buffer in bytes. Set this value to approximately half of the available memory on your machine. This variable is used only when a link index is created.
REPTMAXMEM Configuration Variable
  Valid values:    Any positive integer
  Default value:   256 * 1024
  Example:         256k

RHLIGLOBCOMPAT

Flag that allows the SQL/A operators LIKE and SHLIKE to work the same as the corresponding RHLI operators ULIKE and UGLOB under these conditions:

  The query goes through an RHLI scan
  The search pattern contains trailing blanks
  The search pattern includes no metacharacters

Tip  RHLI scan queries are usually faster than comparable SQL queries. Unify DataServer takes advantage of this speed by using the RHLI whenever it determines that the RHLI is adequate to perform the query. Otherwise, the query goes through SQL/A.

If RHLIGLOBCOMPAT is TRUE, LIKE and SHLIKE may not return the same results as identical queries using UGLOB or ULIKE. If RHLIGLOBCOMPAT is set to FALSE, LIKE (or SHLIKE) returns the same results as the RHLI ULIKE (or UGLOB) operator. In most queries, a search uses a pattern like "JOHN*" to locate the names JOHN, JOHNS, JOHNSON, and JOHNSTON. However, with RHLIGLOBCOMPAT set to TRUE, if the search pattern is "JOHN " (with no metacharacters or wildcards and including a trailing blank), an RHLI search will return JOHN. A search using SQL would return JOHN only if the query was passed to the RHLI. Conversely, if the RHLI does not handle the query, the SQL search will not find any matches.

With RHLIGLOBCOMPAT set to FALSE, no search, either SQL or RHLI, will return a match because entering data into a database table column strips all trailing spaces.
RHLIGLOBCOMPAT Configuration Variable
  Valid values:    TRUE   Retains the native RHLI ULIKE and UGLOB operator actions.
                   FALSE  Forces the RHLI ULIKE and UGLOB operators to work like the SQL/A
                          operators SHLIKE and LIKE.
  Default value:   TRUE

RKYMAXMEM

Maximum size of the internal hash table buffer in bytes. For optimum performance, set this value to approximately half of the available memory on your machine. This variable is used only when a hash index is created.
RKYMAXMEM Configuration Variable
  Valid values:    Any positive integer
  Default value:   4MB
  Example:         64MB

RMTROWBUFSZ
(Unify/Net only)

The number of rows the row buffer can contain when fetching rows from a remote database. If the rows contain TEXT or BINARY data values, row buffering is not used. (The RMTROWBUFSZ setting is ignored.)

Performance  For queries that select many rows, you can normally increase performance by setting this variable to a higher value than the default.

RMTROWBUFSZ Configuration Variable
  Valid values:    Any positive integer
  Default value:   100
  Example:         100


RPT13GLOB

RPT trailing-blanks-significance flag for string comparisons. When trailing blanks are not significant (RPT13GLOB is TRUE), these string values are equivalent: "abc", "abc ", and "abc  ". The strings are not equivalent if trailing blanks are significant (RPT13GLOB is FALSE). This variable can be set in a configuration file (unify.cf or dbname.cf) or at the operating system command level.

RPT13GLOB Configuration Variable
  Valid values:    TRUE   Trailing blanks in strings are not significant.
                   FALSE  Trailing blanks in strings are significant.
  Default value:   FALSE
  Additional help: Unify DataServer: Creating Reports With RPT Report Writer

SCHEMA

Merge-compiled-schema flag. The compiled schema file is named dbname.sch. This file contains a compiled version of the data dictionary. As DDL operations occur, this file is updated periodically (if SCHEMA is set to TRUE).
SCHEMA Configuration Variable
  Valid values:    TRUE   Merge DDL changes into the schema file
                   FALSE  Do not merge DDL changes into the schema file
  Default value:   TRUE

SCHFILE

Name of the compiled database design file (created by Unify DataServer when you define a database design); usually in the form dbname.sch.
SCHFILE Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any valid file name
  Default value:   file.sch (if DBNAME is set to file.db)
  Example:         accts.sch
  Additional help: DBNAME configuration variable description


SEPARATOR

Column separator for RPT and SQL/A. Ordinarily, you should avoid using any of the following reserved characters as the separator character: ^ ! # @ * . ( ) If you do need to use a reserved character as the separator character, make sure you understand how it will affect your results. For example, if you set the separator character to an asterisk (*), you must set the default null character (NULLCH) to a different character, because the default null character is also the asterisk.

SEPARATOR Configuration Variable
  Valid values:    Any character
  Default value:   | (the UNIX pipe symbol)
  Example:         [
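For example, if you do choose the asterisk as the separator, you must also move the null display character, as in this illustrative configuration file excerpt (the quoting style is assumed to follow the NULLCH entry; both values are examples only):

SEPARATOR = "*"    # column separator for RPT and SQL/A
NULLCH    = "#"    # null display character moved off its default, the asterisk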

SHELL

Command processor to be used to execute operating system commands from Interactive SQL/A when using the !command to execute a shell command.
SHELL Configuration Variable
  Dependencies:    Must be set at the operating system command level; used only on the
                   UNIX operating system
  Valid values:    Any valid directory path list
  Default value:   /bin/sh
  Example:         Bourne Shell (colon-separated paths): /bin/sh


SHMADDR

Shared memory physical addressing scheme, 0 or 1. Used internally by Unify Corporation to configure the shared memory attach methodology. Do not modify this value, unless recommended by a Unify Technical Support engineer.

SHMDEBUG

Flag that indicates whether shared memory malloc chain debugging is on.
SHMDEBUG Server Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    TRUE   Shared memory malloc chain debugging is on.
                   FALSE  Shared memory malloc chain debugging is off.
  Default value:   FALSE

SHMDIR

Shared memory temporary work directory path name. Unify DataServer uses this directory to store temporary files needed for some operations.
SHMDIR Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any valid directory path
  Default value:   /tmp
  Example:         /usr/tmp

SHMFULL

Shared memory threshold that triggers shared memory reorganization (garbage collection). The value is specified as a percentage. When the amount of used shared memory reaches this value, Unify DataServer attempts to free memory and perform garbage collection. (An entry is made in the error log to indicate that this has taken place.)


You usually want to prevent garbage collection because of the overhead associated with it. To prevent garbage collection, you should first increase SHMMAX. The next best option is to place the most active managers' partitions into secondary segments. The final option should be to modify SHMFULL.

SHMFULL Configuration Variable
  Valid values:    10 to 100
  Default value:   75
  Example:         80
  Additional help: SHMMAX, SHMFULL configuration variable descriptions

SHMKEY and
XXSHMKEY

Key that identifies the base, or primary, segment of memory used for the shared memory manager. The shared memory manager controls Unify DataServer shared memory, and coordinates and controls all other component managers that use shared memory. The shared memory manager base segment must be unique in the system. The segment cannot previously exist in the system and cannot be used by user processes. A Unify DataServer database always has at least one shared memory segment, known as the base segment, which is identified using the SHMKEY configuration variable. The database can also contain secondary shared memory segments, which are identified using the XXSHMKEY configuration variables, where the XX denotes one of the Unify DataServer software modules. In the default release configuration, all partitions reside in the base shared memory segment, as shown in the following configuration file excerpt:
SHMKEY     = 6904    # Shared Memory Manager Shared Memory ID
# AMSHMKEY = 6904    # ACCELL Manager Shared Memory ID
# AUSHMKEY = 6904    # Authorization Manager Shared Memory ID
# CMSHMKEY = 6904    # Cache Manager Shared Memory ID
# DBSHMKEY = 6904    # Database Manager Shared Memory ID
# FMSHMKEY = 6904    # File Manager Shared Memory ID
# LMSHMKEY = 6904    # Lock Manager Shared Memory ID
# TMSHMKEY = 6904    # Tx Manager Shared Memory ID


The # character in the configuration file entries indicates a comment. Commented or undefined configuration variables revert to their default value. In Unify DataServer, all shared memory partition configuration variables default to the value of the Shared Memory Manager partition. If the Shared Memory Manager partition configuration variable is not defined, its default value is the value specified when Unify DataServer was installed: typically 6904. The Lock and File Managers are most likely to benefit from being placed in secondary segments. Both managers are referenced frequently during database operations, and each can require a significant amount of memory, depending on the application. Use the shmmap and lmshow utilities to help determine if placing them in a secondary segment would be beneficial. In the following configuration file example, the Lock Manager partition is in a secondary segment that does not contain any other partitions. The remaining partitions all reside in the base partition.
SHMKEY     = 6904    # Shared Memory Manager Shared Memory ID
# AMSHMKEY = 6904    # ACCELL Manager Shared Memory ID
# AUSHMKEY = 6904    # Authorization Manager Shared Memory ID
# CMSHMKEY = 6904    # Cache Manager Shared Memory ID
# DBSHMKEY = 6904    # Database Manager Shared Memory ID
# FMSHMKEY = 6904    # File Manager Shared Memory ID
LMSHMKEY   = 6905    # Lock Manager Shared Memory ID
# TMSHMKEY = 6904    # Tx Manager Shared Memory ID

In this example, only the Lock Manager partition's shared memory key configuration variable is changed. The Shared Memory Manager partition's configuration variable does not need to be changed because that variable defines the base shared memory segment, not the secondary shared memory segments.

Warning  Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.


To specify a shared memory key value, you can use octal, hexadecimal, or decimal numbers that are represented as they are in the C language. For octal values, use a leading 0 (zero), as in the value 0777. For hexadecimal values, use a leading 0x, as in the value 0xbccf. For decimal values, use the decimal numbers, as in 1243.
SHMKEY Configuration Variable
  Dependencies:    This variable can be set only from a configuration file and must be set
                   before using startdb to start the database
  Valid values:    0 through 0x7fffffff
                   The key must be unique in the system.
  Default value:   6904
  Example:         0x1cc123
  Additional help: SHMMAX, SHMFULL configuration variable descriptions,
                   lmshow utility description

SHMKIND

Shared memory attach addressing scheme, 0 or 1. Used internally by Unify Corporation to configure the shared memory attach methodology. Do not modify this value.

SHMMARGIN

Address space margin between shared memory segment attach points. The margin acts as a buffer to guarantee alignment of secondary shared memory segments.

SHMMARGIN Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any positive integer
  Default value:   1024k
  Additional help: SHMOFFSET configuration variable description

SHMMAX

Maximum size of the base and secondary shared memory segments. After selecting a shared memory segment attach address, Unify DataServer creates the largest shared memory segment that fits in the limits of SHMMAX and SHMMIN.


For example, suppose SHMMAX is set at 32k and SHMMIN is 16k. If memory has enough space for a 32k-byte shared memory segment, Unify DataServer creates the segment. However, if memory does not have enough space for a 32k segment, Unify DataServer tries to create the largest possible segment between 16k and 32k. If a 16k segment cannot be created, Unify DataServer returns an error. Set SHMMAX to a value slightly larger than you expect the application will need, because Unify DataServer always creates a segment larger than SHMMIN but smaller than SHMMAX. However, if you specify an exceptionally large SHMMAX value, you may waste some shared memory space. The default value is fairly low, and system functionality usually improves when the value is increased. However, remember that increasing the amount of shared memory also increases the process size and can lead to increased or unnecessary swapping.

SHMMAX Configuration Variable
  Dependencies:    Must be larger than SHMMIN
                   Can be set from a configuration file only
  Valid values:    Any positive integer
  Default value:   512k
  Example:         1024k
  Additional help: SHMMIN configuration variable description

SHMMERGE

Flag that controls whether the cleanup daemon, cldmn, coalesces adjacent pieces of shared memory during cleanup. Because shared memory is coalesced when it is reallocated, you can usually leave this set to FALSE.
SHMMERGE Configuration Variable
  Valid values:    TRUE   Turn cleanup on; adjacent free shared memory segments are
                          collapsed into one free segment
                   FALSE  Turn cleanup off
  Default value:   FALSE
  Additional help: Unify DataServer: Managing a Database

SHMMIN

Minimum shared memory segment size. After selecting a shared memory segment attach address, Unify DataServer creates the largest shared memory segment that fits in the limits of SHMMAX and SHMMIN. For example, suppose SHMMAX is set at 32k and SHMMIN is 16k. If memory has enough space for a 32k-byte shared memory segment, Unify DataServer creates the segment. However, if memory does not have enough space for a 32k segment, Unify DataServer tries to create the largest possible segment between 16k and 32k. If a 16k segment cannot be created, Unify DataServer returns an error. If you want to create a table that contains over 128 columns, you must increase the value specified by SHMMIN as follows:
SHMMIN = SHMMIN + (652 * number_of_columns_in_table)

SHMMIN Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any positive integer
  Default value:   128k
  Additional help: SHMMAX configuration variable description
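As a worked example of this formula (the 200-column table is hypothetical and the result is illustrative only), starting from the 128k default:

SHMMIN = 131072 + (652 * 200) = 261472 bytes

so a setting of roughly 256k (262144 bytes) would cover it.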


SHMMODE

The shared memory segment access modes, expressed as an octal number. Similar to an operating system file, shared memory segments are created with access modes, interpreted as shown below. Once defined, the access modes cannot be changed without removing the shared memory segment.

Tip  For added security, use SHMMODE to control access to the database.

SHMMODE Configuration Variable
  Valid values:    0400  Allow read by owner
                   0200  Allow write by owner
                   0060  Allow read and write by group (0040 + 0020)
                   0006  Allow read and write by others (0004 + 0002)
  Default value:   0666
  Example:         0660 (allows read and write access by only the creator and the
                   creator's group)

SHMNAP

The interval in hundredths of a second to delay before rescheduling the process that encountered a SHMSPIN latch conflict.

Performance  It is strongly recommended that SHMNAP be set lower than 1 for high-end servers with processors faster than 1 GHz. This takes advantage of the faster processor speeds by reducing the amount of time the processors are idle.

SHMNAP Configuration Variable
  Valid values:    .0001 to any positive integer
  Default value:   3
  Example:         1
  Additional help: SHMSPIN configuration variable description


SHMNAPINCR

Nap time increment. As latch timeouts occur, the process that is waiting for the latch naps for SHMNAP hundredths of a second. As each timeout occurs, the nap time is incremented by SHMNAPINCR. This allows the waiting process to wait longer and longer, thus consuming less CPU time while it is waiting.
SHMNAPINCR Configuration Variable
  Valid values:    .0001 to any positive integer
  Default value:   3
  Example:         .005
  Additional help: SHMNAP configuration variable description

SHMNAPMAX

Maximum value for SHMNAP. This sets an upper bound on the nap value. After SHMNAP is incremented to this value, it stays at this value until the latch is attained.

SHMNAPMAX Configuration Variable
  Valid values:    .0001 to any positive integer
  Default value:   50
  Example:         1
  Additional help: SHMNAP configuration variable description


SHMOFFSET

The number of bytes between the systems default address for a shared memory segment, and the address actually used for the first shared memory segment.
[Figure: Shared Memory Configuration Variables. The diagram shows the process address space: the text, data, BSS, and malloc arena regions; the SHMOFFSET gap before the primary shared memory segment (sized between SHMMIN and SHMMAX); SHMMARGIN gaps before the secondary shared memory segment (also sized between SHMMIN and SHMMAX) and before the shared memory cache (MAXCACHE); and the stack area.]

On some machines, selecting an attach address near the stack area produces an exceptionally large process size. This results in poorer performance due to swapping requirements. However, SHMOFFSET enables you to specify an offset that must not be exceeded when scanning the process space for the attach address. In effect, SHMOFFSET indicates how much dynamic memory is required by the process. For example, if SHMOFFSET is 4096, the scanning attach address cannot be more than 4096 bytes above the start of the sbrk area. This allows only 4096 bytes of dynamic memory for the process, whereas a value of 16k lets a process allocate up to 16k of dynamic memory.

However, most applications require much more dynamic memory than this. You can safely specify an offset value of 4M or even 16M, because Unify DataServer finds an attach address without exceeding the specified offset. Too large a value can cause a large process size and resident page table, resulting in decreased performance, while too small a value can limit the amount of dynamic memory that Unify DataServer can allocate. If you are doing large scans, increase SHMOFFSET.
SHMOFFSET Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any positive integer
  Default value:   Platform dependent
  Example:         4096
  Additional help: Unify DataServer: Managing a Database

SHMRSRV

Percentage of the shared memory segment reserved for process recovery. SHMRSRV defines the amount of shared memory required for one process to complete one RHLI operation successfully, including recovering from system and program failures. For most applications, the reserved space should be approximately 50k. Therefore, if your shared memory segment size is 2 Mb, you would set SHMRSRV to 2 or 3. This reserved segment of shared memory is released when your application runs out of shared memory.
SHMRSRV Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Between 1 and 90
  Default value:   10
  Example:         2


SHMSPIN

Shared memory spin count before a nap. This is the number of database loops before a nap when waiting for a shared memory critical lock to release. An input/output operation cannot occur during database loops specified by SHMSPIN.
SHMSPIN Configuration Variable
  Valid values:    Any positive integer
                   Recommended values: on single-processor systems, set to 1; on
                   multiprocessor systems, set to 100 * number_of_processors
  Default value:   1
  Example:         2000
  Additional help: Tuning Latches in Unify DataServer: Managing a Database
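Following the recommendations above, a latch-tuning excerpt for a hypothetical four-processor server with CPUs faster than 1 GHz might be (illustrative values only):

SHMSPIN = 400    # 100 * 4 processors
SHMNAP  = .01    # below 1, per the recommendation for fast processors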

SHUTDBSIG

Database shutdown signal. SHUTDBSIG is used by shutdb to tell an active user process that the database is being shut down. When the database is opened, the value of SHUTDBSIG is saved in the prcb for the process.
SHUTDBSIG Configuration Variable
  Valid values:    Any valid signal that can be handled by the operating system
  Default value:   3

SHUTDOWN

Number of seconds to wait before database shutdown begins.

SHUTDOWN Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any positive integer
                   Specify 0 for no wait
  Default value:   0
  Example:         20


SORTCOST

Sorting cost weight factor that enables you to fine-tune the relative cost of sorting on your hardware platform. The cost of a scan is the cost of fetching the rows plus the cost to sort the rows, if needed. To avoid the sort cost, use a B-tree that matches the selection sort criteria. Use a low number if sorting on your platform is very fast; use a higher number if sorting on your platform is very slow. For example, a value of 0 (zero) indicates that the sort takes no time, while a value of 2.0 indicates that the sort is twice as expensive as when SORTCOST is set to the default of 1.0. The optimizer uses the floating point value for this variable as a multiplier when computing sort costs.
SORTCOST Configuration Variable
  Valid values:    Float value >= 0
  Default value:   1.0
  Example:         2.0
  Additional help: FIND1STFASTFAC and HISTINTRVL configuration variable descriptions
                   Specifying the Join Order in Unify DataServer: Writing Interactive
                   SQL/A Queries
                   Analyzing Access Method Performance in Unify DataServer: Managing a
                   Database

SPMAXNEST

Maximum number of stored procedures that can be nested in a stored procedure. To prevent nesting in stored procedures, set SPMAXNEST to 0.
SPMAXNEST Configuration Variable
  Valid values:    Any positive integer
  Default value:   42
  Example:         1
  Additional help: TRIGGERMAXNEST configuration variable description, Unify DataServer:
                   Writing Interactive SQL/A Queries


SPOOLER

Name of the print spooler to be used with RPT.


SPOOLER Configuration Variable
  Dependencies:    This value may be operating system or hardware dependent
  Valid values:    Any valid print spooler name
  Default value:   UNIX: lpr
  Example:         UNIX: lpr -P pslaser1
  Additional help: ACCELL/SQL: Setting Up a User Environment

SPSECURE

Allows or blocks PIPELINE operations. The default setting for SPSECURE is TRUE, which disallows PIPELINE statements and the system$() function. Setting this variable to FALSE allows PIPELINE statements. This is a server-side configuration variable and must be set in the configuration file for the database.

SPSECURE Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    TRUE   Disallows PIPELINE statements and the system$() function
                   FALSE  Permits PIPELINE statements and the system$() function
  Default value:   TRUE

SPTRACEFILE

Log all stored procedure and trigger calls. If SPTRACEFILE = filename, all calls to a stored procedure or trigger are logged to filename (including a trace of all lines executed). If there is no SPTRACEFILE variable definition, the function is disabled. In a client-server environment, filename is written on the server. SPTRACEFILE is a SHARED configuration variable (either the server or the client can set this configuration variable). The DBA can log all these calls by setting SPTRACEFILE in the configuration file for the database. In the example below, the numbers in the left-hand column are the line numbers from the stored procedure. The text echoes the content of the line from the stored procedure. The line [return] indicates a successful operation. If there is an error, the message [Error xxxx] appears instead of [return].

Configuration Variable Reference

109

Procedure get_pay:
    3   begin
    4   set $pay to $gross - $taxes
    5   return ( $pay )
        [return]

SPTRACEFILE Configuration Variable
  Dependencies:    Can be set from a configuration file or a shell environment.
  Valid values:    Any valid file name
  Default value:   None (SPTRACEFILE disabled)
  Example:         tracefile.sp

SQLATOMICDML

Flag that controls whether DML statements must be atomic. In an atomic DML statement, all operations performed by the statement must complete successfully. If a fatal error occurs during the execution of the transaction, the transaction is rolled back. In a non-atomic DML statement, all operations performed by the statement do not need to complete successfully. For example, a non-atomic update can continue even when some rows are locked by another transaction.
SQLATOMICDML Configuration Variable
  Dependencies:    Set to TRUE only if transaction logging is on (LOGTX is set to TRUE).
  Valid values:    TRUE   DML statements must be atomic. (Transaction logging must be on.)
                   FALSE  DML statements need not be atomic.
  Default value:   FALSE
  Additional help: LOGTX configuration variable description
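Because atomic DML requires transaction logging, the two variables are normally set together; a sketch of the relevant configuration file lines (illustrative only):

LOGTX        = TRUE    # transaction logging must be on ...
SQLATOMICDML = TRUE    # ... before DML statements can be required to be atomic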


SQLCHARCNT

Number of characters in the SQL/A string table. The SQL/A string table is used to store the text of displayed messages such as the heading line produced by the SELECT statement. This variable has no upper limit. However, specifying a value that is too large yields no benefit and wastes user memory space. To determine the correct value for this configuration variable, calculate a value that is slightly more than the total number of characters in the names of all the columns in the table that has the most columns. For example, if the largest table in the database has 500 columns and each column name has 20 characters, then the total number of characters in all of the table's column names is 10000 characters. Because other text is also stored in the SQL/A string table, SQLCHARCNT would be set to a value slightly larger than 10000 characters.

SQLCHARCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   8192
  Example:         1000

SQLCNLCNT

Total number of constant lists allowed in a statement. A constant list is the set of values specified with the IN or NOT IN keywords in Interactive SQL/A.
SQLCNLCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   10
  Example:         20


SQLCONCNT

Total number of constants allowed in a query. The SQL/A constant table includes CHARACTER, SMALLINT, INTEGER, NUMERIC, DECIMAL, DATE, HUGE DATE, TIME, AMOUNT, and HUGE AMOUNT constants.
SQLCONCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   100
  Example:         200

SQLDBGON

Embedded SQL/A debugger status, TRUE or FALSE. If set to TRUE, you can perform debugging on your application by specifying the -d option on the sqla.ld command.

SQLDBGON Configuration Variable
  Valid values:    TRUE   Enable the debugger
                   FALSE  Disable the debugger
  Default value:   FALSE
  Additional help: sqla.ld in Unify DataServer: Configuration Variable and Utility Reference

SQLDDLSIZ

Size in bytes of the workspace used for DDL operations. If you want to create a table that contains over 128 columns, you must increase the value specified by SQLDDLSIZ as follows:
SQLDDLSIZ = SQLDDLSIZ * 1.1

SQLDDLSIZ Configuration Variable
  Valid values:    Any positive integer
  Default value:   50000
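As a worked example of this adjustment (purely illustrative), starting from the 50000-byte default:

SQLDDLSIZ = 50000 * 1.1 = 55000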


SQLESCNTX

Default transaction level for scans performed by an Embedded SQL/A application. Examine the attributes of each transaction locking level to determine which default is best for your application. For example, if you want to select data without acquiring any locks, use the transaction locking level of 7.
SQLESCNTX Configuration Variable
  Valid values:    1 to 16
  Default value:   2
  Example:         7
  Additional help: SET TRANSACTION LEVEL statement in Unify DataServer: SQL/A Reference

SQLEUPDTX

Default transaction level for updates performed by an Embedded SQL/A application.


SQLEUPDTX Configuration Variable
  Valid values:    8 or 10
  Default value:   10
  Example:         8
  Additional help: SET TRANSACTION LEVEL in Unify DataServer: SQL/A Reference


SQLFLDCNT

Maximum number of column references allowed in a SQL/A DML statement or the COLUMNS statement. Column references to the same column count as separate references. This variable is also used by ACCELL/SQL to determine the maximum number of updated columns in an UPDATE statement. If you want to create a table that contains over 128 columns, you must increase the value specified by SQLFLDCNT as follows:
SQLFLDCNT = number_of_columns_in_table

SQLFLDCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   100
  Example:         20

SQLFNCNT

Maximum number of system variable references allowed in a query. System variables include USER, LOGNAME, and ROWID.
SQLFNCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   20
  Example:         30

SQLIDMAP

Runtime ID mapping status for Embedded SQL/A applications.


SQLIDMAP Configuration Variable
  Valid values:    TRUE   Enable runtime ID mapping
                   FALSE  Disable runtime ID mapping
  Default value:   TRUE


SQLISCNTX

Default transaction level for scans performed by an Interactive SQL/A application. Examine the attributes of each transaction locking level to determine which default is best for your application. For example, if you want to select data without acquiring any locks, use the transaction locking level of 7.
SQLISCNTX Configuration Variable
  Valid values:    1 to 16
  Default value:   2
  Example:         7
  Additional help: SET TRANSACTION LEVEL in Unify DataServer: SQL/A Reference

SQLIUPDTX

Default transaction level for updates performed by an Interactive SQL/A application.


SQLIUPDTX Configuration Variable
  Valid values:    8 or 10
  Default value:   10
  Example:         8
  Additional help: SET TRANSACTION LEVEL in Unify DataServer: SQL/A Reference

SQLNMSZ

Maximum number of entries (names to be cached) in the SQL/A name binding cache. Keeping names in the cache reduces the time needed to retrieve an object's ID on subsequent references. The entries in the name binding cache are held for the duration of the transaction.

SQLNMSZ Configuration Variable
  Valid values:    Any positive integer
  Default value:   200
  Example:         100


SQLNODECNT

Number of expression nodes allowed in a query. Each column reference, constant, and operator in a SELECT list or WHERE clause requires one expression node. For example, the following statement has three expression nodes (C1, +, and 1):
SELECT c1 + 1 FROM T1;

SQLNODECNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   330
  Example:         500

SQLORDCNT

Number of sort keys per query allowed by the operating system SORT command. Each sort key corresponds to a column specified in the ORDER BY clause of the SELECT statement. You must set this variable if the sort key capacity of SORT is less than 9.
SQLORDCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   20
  Example:         15
  Additional help: Your operating system documentation

SQLPBUFSIZ

Size of the buffer used to store a query's output items. The output buffer must be able to store the total of the output items' column lengths. You can estimate the size of the buffer by looking at the size of the data types in the output columns. For example, a NUMERIC column takes 4 bytes of storage and a TEXT or BINARY column takes 16 bytes of storage.

Warning  If the buffer is not large enough, your application may fail.


SQLPBUFSIZ Configuration Variable
  Valid values:    Any positive integer
  Default value:   8192
  Example:         10000

SQLPMEM

Amount of memory used during an SQL/A sort to hold projected columns. A sort is performed when the SELECT statement specifies the ORDER BY, GROUP BY, or UNIQUE keywords. If SQLPMEM is set too low and the sort needs more memory for projected columns, SQL/A stores the overflow in a disk file. This slows down sort performance. To determine the appropriate value for SQLPMEM, perform the following calculation: total size of projected columns * number of expected rows For example, in the following query, Job and Manager are projected columns:
SELECT Dept_No, Name, Job, Manager
FROM emp, org
WHERE Number = Emp_No
ORDER BY Dept_No, Name;

Job is 10 bytes in size and Manager is 2 bytes for a total projected columns size of 12 bytes. If you expect 300 rows to be returned, the equation is (12 * 300) for an SQLPMEM setting of 3600.
SQLPMEM Configuration Variable
  Valid values:    Any positive integer
  Default value:   32k
  Example:         3600


SQLQUERYCNT

Maximum number of query expressions per SELECT statement. This value controls the number of nested SELECT statements in the statement.
SQLQUERYCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   20
  Example:         30

SQLSELCNT

Number of items that can be specified in all SELECT clauses (including nested SELECT clauses) in a query. An item includes an expression, constant, column name, aggregate function, or system variable.
SQLSELCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   100
  Example:         200

SQLSMEM

Amount of memory used during an SQL/A sort to hold the sort columns. The sort columns are the column names specified in the ORDER BY, GROUP BY, or UNIQUE clauses. If SQLSMEM is set too low and the sort needs more room for the sort keys, SQL/A stores the overflow in a disk file. This slows down sort performance. To determine an appropriate value for SQLSMEM, perform the following calculation: (4 + sort column size) * number of expected rows


For example, in the following query, Dept_No and Name are the sort columns:
SELECT Dept_No, Name, Job, Manager
FROM emp, org
WHERE Number = Emp_No
ORDER BY Dept_No, Name;

Dept_No is 2 bytes in size and Name is 10 bytes, for a total sort column size of 12 bytes. If you expect 300 rows to be returned, the equation is ( (4 + 12) * 300 ) for an SQLSMEM setting of 4800.
SQLSMEM Configuration Variable
  Valid values:    Any positive integer
  Default value:   64k
  Example:         4800

SQLSTATS

For Interactive SQL/A applications, flag to print statistics after each query executes. The statistics include access method information, locking information, cache statistics, and process time.

Warning  This variable is retained only for compatibility with earlier software releases. For database applications created with this release, you must use the Interactive SQL/A statistics collection commands: BEGIN EXPLAIN, CONTINUE EXPLAIN, END EXPLAIN, and EXPLAIN.

SQLSTATS Configuration Variable
  Valid values:    TRUE   Print statistics
                   FALSE  Do not print statistics
  Default value:   FALSE
  Additional help: For a description of the SQLSTATS output, see Collecting SQL/A
                   Statistics in Unify DataServer: Managing a Database.

SQLTABCNT

Number of database table or view references allowed in a query.


SQLTABCNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   50
  Example:         100

SQLTFBSIZ

Buffer size in bytes for temporary files. The temporary files are created to hold rows being processed for large update or delete operations. When updating or deleting, Unify DataServer stores the row IDs of the rows in memory. When updates or deletes exceed the buffer, the buffer content is paged to a disk file. The size of the buffer is specified by SQLTFBSIZ.
SQLTFBSIZ Configuration Variable
  Valid values:    Any positive integer
  Default value:   32k
  Example:         12000

SQLTPENABLE

Enables the processing of SQL/A statements by the SQL/TP engine. If SQLTPENABLE is set to TRUE, the SQL/TP engine will process particular embedded SQL/A statements. The SQL/TP engine is designed to process only the most common statements requested. Its use can greatly increase the performance of embedded SQL/A applications in many cases.
SQLTPENABLE Configuration Variable
  Valid values:    TRUE   Enables processing of embedded SQL/A statements by the SQL/TP
                          engine
                   FALSE  Disables the SQL/TP engine, thus forcing all statements to be
                          processed by the SQL/CP engine
  Default value:   TRUE

STRNULLCH

Null display character for STRING data. The character specified in STRNULLCH overrides NULLCH for STRING data.
STRNULLCH Configuration Variable
  Dependencies:    Ignored when CMPTFLG is TRUE.
  Valid values:    Any printable character enclosed in quotation marks ("character")
  Default value:   *
  Example:         #
  Additional help: NULLCH configuration variable description

SYNCRETRY

Number of seconds for the interval between synchronization retries.


SYNCRETRY Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any positive integer
  Default value:   Value of SYNCTOUT
  Additional help: SYNCTOUT configuration variable description

SYNCTOUT

Number of seconds to wait before abandoning the synchronization. This configuration variable helps reduce the number of deadlock scenarios around synchronizations. Some synchronizations require updates to suspend.
SYNCTOUT Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any positive integer
                   Between 5 and 30 seconds is usually sufficient
  Default value:   15
  Additional help: SYNCRETRY configuration variable description


TBLDSSZ

Maximum number of runtime data buffers. Information from the compiled schema file (dbname.sch) is stored in these buffers.
TBLDSSZ Configuration Variable
  Valid values:    Any positive integer
  Default value:   The number of tables currently in the database
  Additional help: SCHEMA configuration variable description

TIMEFMT

Format in which to accept and display TIME values.


TIMEFMT Configuration Variable
  Valid values:    A combination of the letters H (hour) and M (minutes), plus a separator
                   character, enclosed in quotation marks ("format_template"). The letters
                   can be uppercase or lowercase, and the separator character can be any
                   printable character.
  Default value:   HH:MM
  Example:         HH.MM, HH:MMam/pm
  Additional help: print statement description in Unify DataServer: Writing Reports with
                   RPT Report Writer

TIMEM

For SQL/A, the maximum amount of memory used as a buffer for a temporary index. If TIMEM is set too low and the temporary index requires more memory, SQL/A stores the overflow in one or two disk files. If the temporary index requires less memory than specified in TIMEM, SQL/A uses only the smaller amount.
You can calculate the amount of memory needed for a temporary index in three ways, because temporary indexes are used differently for each of these SQL/A operations:
  nested SELECT statements
  duplicate joins
  no-duplicate joins
After you calculate the value using each method, set TIMEM to the largest of the three values.


Determining TIMEM for Nested SELECT Statements


To determine the amount of memory needed by SQL/A to store temporary indexes for nested SELECT statements, use the following calculation: (number of expected rows * (total size of conditional columns + 4)) * 2 Example For example, the following statement contains nested SELECT statements:
SELECT Name, Job, Salary, Dept_No, Number FROM emp, org WHERE Number = Emp_No AND Dept_No <> 10 AND <Job, Salary> IS IN (SELECT Job, Salary FROM emp, org WHERE Number = Emp_No AND Dept_No = 10);

The conditional columns are Job and Salary in the nested query. Job is 10 bytes in size and Salary is 4 bytes for a total conditional column size of 14 bytes. If you expect 50 rows to be returned, the equation is ( (50 * (14 + 4)) * 2 ) for a result of 1800.

Determining TIMEM for Duplicate Joins


To calculate the memory needed for duplicate joins, use the following formula: (number of expected rows * (total size of conditional columns + 4)) * 2 + (12 * (average number of duplicates / 2 )) Example For example, the following statement yields duplicate joins:
SELECT emp.Name, Location, mgr.Name, mgr_dept.Location FROM emp, dept, org, emp mgr, dept mgr_dept, org mgr_org WHERE emp.Manager = mgr.Number AND emp.Number = org.Emp_No AND mgr.Number = mgr_org.Emp_No AND org.Dept_No = dept.Number AND mgr_org.Dept_No = mgr_dept.Number AND dept.Location <> mgr_dept.Location;


The conditional columns are emp.Manager, emp.Number, mgr.Number, mgr_dept.Number, and dept.Number from the WHERE clause. The total conditional column size is 10 bytes. If you expect 10 rows and the average number of conditional duplicates that result in the same Manager is 4, the equation is ((10 * (10 + 4)) * 2) + (12 * (4 / 2)) for a result of 304.

Determining TIMEM for No-Duplicate Joins


To determine the memory needed for no-duplicate joins, use this calculation: (number of expected rows * (total size of conditional columns + 4)) * 2 Example For example, the following query yields a no-duplicate join:
SELECT emp.Name, dept.Name FROM emp, org, dept WHERE emp.Number = Emp_No AND dept.Number = Dept_No AND emp.Number > 12000 ;

The conditional columns are Emp_No and Dept_No for a total conditional columns size of 4 bytes. If the expected number of rows is 30, the equation is ( (30 * (4 + 4)) * 2 ) for a result of 480.
TIMEM Configuration Variable
  Valid values:    See equations above
  Default value:   8192
  Example:         480


TIMNULLCH

Null display character for TIME data. The character specified in TIMNULLCH overrides NULLCH for TIME data.
TIMNULLCH Configuration Variable
  Valid values:    Any printable character enclosed in quotation marks (character)
  Default value:   *
  Example:         #
  Additional help: NULLCH configuration variable description

TMPDIR

The path name of the directory used by the operating system to store temporary files for cc, ld, and so on.
TMPDIR Configuration Variable
  Dependencies:    Must be set at the operating system command level
  Valid values:    Any valid search path specification
  Default value:   /tmp
  Example:         /ASQL/etc/tmp

TMSHMKEY

Key that identifies the portion of shared memory used by the transaction manager. Warning Always shut down the database before trying to change a shared memory key value. Changing a shared memory key while the database is running can corrupt the database. If you want to change a shared memory key configuration variable, first execute shutdb to shut down the database.
TMSHMKEY Configuration Variable
  Dependencies:    Can be set only from a configuration file
  Valid values:    Any positive integer
  Default value:   Value of SHMKEY

TRIADSEP

Triad separator character to be used when displaying AMOUNT, CURRENCY, or FLOAT data. Valid only when using a display template.
TRIADSEP Configuration Variable
  Dependencies:    Used only with the AMTFMT format template
  Valid values:    Any printable character used as a triad separator, enclosed in quotation marks (triad_separator)
  Default value:   ,
  Example:         .
  Additional help: AMTFMT and UCURRFMT configuration variable descriptions

TRIGGERMAXNEST

Maximum number of triggers that can be nested in a trigger. To prevent nesting in triggers, set TRIGGERMAXNEST to 0.
TRIGGERMAXNEST Configuration Variable
  Valid values:    Any positive integer
  Default value:   42
  Example:         1
  Additional help: SPMAXNEST configuration variable; Unify DataServer: Writing Interactive SQL/A Queries

TUPBUFSIZE

The size in bytes of each SQL/A application row (tuple) buffer. The application row buffer size is calculated using the following formula: 10 * maximum row size
TUPBUFSIZE can be used to set the buffer size (in bytes) for each cursor. Alternatively, you can use the USING BUFFER SIZE option on an OPEN CURSOR statement.
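For example, assuming a hypothetical maximum row size of 1,000 bytes, the default row buffer would be 10 * 1,000 = 10,000 bytes per cursor. To override the default for all cursors, you might set the variable in a configuration file; the 16k value shown here is only illustrative:

  TUPBUFSIZE=16k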


If queries run slower than expected, verify that the correct access methods are being used. To generate statistics about access methods, use one or more of these methods:
  (Releases prior to DataServer 6) Run Interactive SQL/A with the SQLSTATS configuration variable set to TRUE.
  Execute the Interactive SQL/A BEGIN/CONTINUE EXPLAIN and EXPLAIN commands to generate statistics.
  Run Interactive SQL/A with the AMLEVEL configuration variable set to print access method information to the file specified by AMFILE.

Setting TUPBUFSIZE to a value from 16k through 32k can significantly improve fetch performance.

DataServer Net also has a default row buffer size of 10 rows. The main reason to reduce these buffer sizes is locking.

In a multi-user environment performing arbitrary queries, all the rows in the buffer are locked as soon as they are read. An application may fetch only a few rows and never fetch the rest of the rows in the buffer; for applications like these, a smaller buffer size is usually preferable.
TUPBUFSIZE Configuration Variable
  Valid values:    Any positive integer (see formula above)
  Default value:   0
  Example:         32k
  Additional help: AMFILE, AMLEVEL, and RMTROWBUFSZ configuration variable descriptions; OPEN CURSOR statement description in Unify DataServer: SQL/A Reference; Analyzing Access Method Performance and Collecting SQL/A Statistics in Unify DataServer: Managing a Database


TXLOGFULL

Threshold at which the transaction log is considered full, specified as a percentage of the total transaction log size. The total transaction log size is specified in the LOGBLK configuration variable. If the percentage of used log records in the transaction log exceeds the value specified in TXLOGFULL, the lgdmn daemon writes a message to the errlog file and forces a database sync point. This in turn causes the transaction log to be archived to the transaction journal file. For example, if TXLOGFULL is set to 70 and LOGBLK is set to 500, then the lgdmn daemon must force a sync point and start cleaning the transaction log when more than 350 transaction records are used.
Usually, file system synchronization occurs often enough that the TXLOGFULL threshold is not exceeded. However, the TXLOGFULL threshold can be exceeded when one of these conditions exists:
  The transaction log size is too small for the amount of transaction activity on the system.
  The file system synchronization frequency is too large and should be reduced.
  Very long-duration transactions exist.
TXLOGFULL Configuration Variable
  Valid values:    Between 1 and 100. Do not append the percent sign (%) to the configuration variable value. A recommended value is between 80 and 90.
  Default value:   80
  Example:         90
  Additional help: LOGBLK configuration variable description; Unify DataServer: Managing a Database
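A minimal configuration-file sketch, using the illustrative values from the example above:

  LOGBLK=500
  TXLOGFULL=70

With these settings, the lgdmn daemon forces a sync point once more than 350 (70 percent of 500) log records are in use.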


TXTNULLCH

Null display character for TEXT data. The character specified in TXTNULLCH overrides NULLCH for TEXT data.
TXTNULLCH Configuration Variable
  Valid values:    Any printable character enclosed in quotation marks (character)
  Default value:   *
  Example:         #
  Additional help: NULLCH configuration variable description

UAMOUNT64

Flag that specifies whether AMOUNT data is stored as 32-bit or 64-bit for stored procedures, triggers, ACCELL/SQL and RPT variables only. This configuration variable has no impact on Interactive SQL/A, embedded SQL/A, or RHLI variables. By setting the UAMOUNT64 configuration variable to FALSE, you can maintain compatibility with previous versions of Unify DataServer.
UAMOUNT64 Configuration Variable
  Valid values:    TRUE   Amount variables and expressions are stored as currency (64-bit) structures.
                   FALSE  Amount variables and expressions are HUGE AMOUNT (double precision), as in previous versions of Unify DataServer.
  Default value:   FALSE
  See Also:        Variables in Unify DataServer: Creating Reports with RPT Report Writer

Warning A runtime error due to overflow can occur if 64-bit constants are allowed at compile time and not at runtime. That is, if a stored procedure or trigger is compiled with the UAMOUNT64 configuration variable set to TRUE, and the UAMOUNT64 configuration variable is not set at runtime, overflow may occur.

UCCNAME

Name of the UNIX C language compiler used in program loading and by ucc. UCCNAME is also used by upp, which is normally called by ucc. In this case, the specified UNIX compiler is also used to preprocess any macros or include file before being preprocessed by the upp utility.
UCCNAME Configuration Variable
  Valid values:    Any valid search path specification with no embedded spaces, compiler options, or universe prefixes (such as ucbcc)
  Default value:   cc (on most platforms)
  Example:         /lib/cc
  Additional help: upp, ucc

UCURRFMT

Default format template to be used to display CURRENCY data.

The currency symbol ($), triad separator (,) and radix separator (.) may be overridden by the CURRSYM, TRIADSEP and RADIXSEP configuration variables respectively. If these values are not set, then the values used in the template are used. For example, if UCURRFMT is set to ###,##&.&&$ (with a single $ on the right), and CURRSYM is set to DM, the amount 123456.78 is displayed as 123,456.78 DM.

UCURRFMT Configuration Variable
  Valid values:    Any valid numeric format template that is recognized by the SQL/A DISPLAY clause, ACCELL/SQL, or RPT, except the C language printf() function format (column width will be precision+3)
  Default value:   ###,##&.&&
  Additional help: CURRSYM, TRIADSEP, and RADIXSEP configuration variable descriptions



ULDACCESS

Application database access mode for a remote database.


ULDACCESS Configuration Variable
  Dependencies:    Used only if the -O option is not specified on the sqla.ld command
  Valid values:    local_only     Access local databases only
                   remote_only    Access remote databases only
                   local_remote   Access both local and remote databases
  Default value:   local_only
  Additional help: Unify/Net Guide

ULDLIBCOUNT

Number of iterations used by uld through database libraries.


ULDLIBCOUNT Configuration Variable
  Valid values:    Any positive integer
  Default value:   2


ULDNAME

Name of the linker called by uld when linking custom managers. The linker must be able to process arguments to the cc command.
ULDNAME Configuration Variable
  Dependencies:    The value of the ULDNAME variable depends on the hardware on which you are running Unify DataServer. Requires the correct directory search path specification and file name format for the operating system.
  Valid values:    Any valid directory search path specification with none of the following: embedded spaces, compiler options, or universe prefixes such as ucb
  Default value:   The appropriate default has been set in your release software. UNIX: cc
  Example:         /usr/bin/cc
  Additional help: ACCELL/SQL: Setting Up a User Environment

UNICAP

Directory search path specification and file name of the ACCELL/SQL keyboard capabilities file.
UNICAP Configuration Variable
  Dependencies:    Requires the correct directory search path specification and file name format for the operating system. Must be set at the operating system command level.
  Valid values:    Any valid directory search path specification and file name
  Default value:   $UNIFY/unicap
  Example:         /etc/unicap
  Additional help: ACCELL/SQL: Setting Up a User Environment


UNIFY

The location of the Unify DataServer system lib directory, usually in the directory where you installed Unify DataServer. For example, if you installed Unify DataServer in the /usr/ASQL directory, UNIFY would be set to /usr/ASQL/lib.
UNIFY Configuration Variable
  Dependencies:    Must be set at the operating system command level
  Valid values:    Any valid directory search path specification
  Default value:   none
  Example:         /ASQL/lib
  Additional help: ACCELL/SQL: Setting Up a User Environment
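For example, if Unify DataServer is installed in /usr/ASQL, you might set the variable at the operating system command level (Bourne shell syntax is shown as an illustration):

  UNIFY=/usr/ASQL/lib; export UNIFY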

UNIFY_REGCMP

The UNIFY_REGCMP configuration variable indicates whether Unify routines are used to evaluate regular expressions (for example, with SHLIKE): If UNIFY_REGCMP is set to TRUE, the Unify regular-expression evaluation routines are used. These routines do not follow local language specific collating sequences. If UNIFY_REGCMP is not set or is set to FALSE, the operating system regular-expression evaluation routines are used. If LANG is also set in the .uvp file, local language specific collating sequences are followed.
UNIFY_REGCMP Configuration Variable
  Valid values:    TRUE, FALSE
  Default value:   FALSE
  Additional help: UNIFY_REGCMP_SZ configuration variable description; SHLIKE in Unify DataServer: Writing Interactive SQL/A Queries

UNIFY_REGCMP_SZ

Defines the upper size limit of regular expressions. Increase the value of this configuration variable only if you need to support large regular expressions.
UNIFY_REGCMP_SZ Configuration Variable
  Valid values:    512 bytes to 2 GB
  Default value:   512
  Example:         731
  Additional help: UNIFY_REGCMP configuration variable description

UNIFYPORT

The UNIFYPORT configuration variable indicates the name of the service, or port number, for a dbdmn running on $DBHOST.
UNIFYPORT Configuration Variable
  Valid values:    A service name registered in /etc/services, or a port number. If set to a numeric string, the dbdmn and client processes will use that port number. If set to a name, the dbdmn and client processes will use the port number specified in /etc/services for that service name.
  Default value:   unify
  Example:         unify2, 29001
  Additional help: dbdmn in Unify DataServer: Configuration Variable and Utility Reference
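If you use a service name, that name must be registered in /etc/services. A hypothetical entry for the unify2 example above (the protocol shown is an assumption) might look like this:

  unify2    29001/tcp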


UNIFYTMP

Name of the directory where Unify DataServer places temporary files. The temporary files are used during sort operations.
UNIFYTMP Configuration Variable
  Dependencies:    Requires the correct directory search path specification format for the operating system and may also be hardware dependent
  Valid values:    Any valid directory search path specification
  Default value:   /tmp
  Example:         /usr/ASQL/tmp

UNUMERIC64

Flag that specifies whether NUMERIC data is stored as 32-bit or 64-bit for stored procedures, triggers, ACCELL/SQL, and RPT variables only. This configuration variable has no impact on Interactive SQL/A, embedded SQL/A, or RHLI variables. By setting the UNUMERIC64 configuration variable to FALSE, you can maintain compatibility with previous versions of Unify DataServer.
UNUMERIC64 Configuration Variable
  Valid values:    TRUE   Numeric variables and expressions are stored as 64-bit.
                   FALSE  Numeric variables and expressions use 32-bit precision, as in previous versions of Unify DataServer.
  Default value:   FALSE

Warning A runtime error due to overflow can occur if 64-bit constants are allowed at compile time and not at runtime. That is, if a stored procedure or trigger is compiled with the UNUMERIC64 configuration variable set to TRUE, and the UNUMERIC64 configuration variable is not set at runtime, overflow may occur.

UPPNAME

Name of the C preprocessor.


UPPNAME Configuration Variable
  Dependencies:    Requires the correct directory search path specification format for the operating system and may also be hardware dependent
  Valid values:    Any valid directory search path specification
  Default value:   /lib/cpp
  Example:         /usr/ASQL/tmp

VOLGROUP

Volume file group ID.


VOLGROUP Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any valid UNIX group ID. Specifying -1 indicates that the current user ID and current group ID should be the volume owner.
  Default value:   -1
  Example:         doc
  Additional help: Your UNIX operating system manuals


VOLMODE

Volume file access modes, expressed as an octal number. This is similar to specifying the access mode for user, group, and others in a UNIX-based operating system.
VOLMODE Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    0400  Allow read by owner
                   0200  Allow write by owner
                   0060  Allow read and write by group (0040 + 0020)
                   0006  Allow read and write by others (0004 + 0002)
  Default value:   0666
  Example:         0660 (allows read and write access by the creator and the creator's group, but nobody else)
  Additional help: Your UNIX operating system manuals
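Because the mode is octal, the values above combine by addition. For example, the default 0666 is 0400 + 0200 + 0060 + 0006 (read and write for owner, group, and others), and a hypothetical setting of 0640 would be 0400 + 0200 + 0040 (read and write by owner, read-only by the group).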

VOLOWNER

Volume file owner/user ID.


VOLOWNER Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    Any valid user ID, or -1 to specify the current user ID and current group ID
  Default value:   -1
  Example:         23
  Additional help: Your UNIX operating system manuals


WP4DIGITYEARS

Write dates to a pipeline using 4-digit years.


WP4DIGITYEARS Configuration Variable
  Dependencies:    Can be set from a configuration file only
  Valid values:    TRUE   Write dates using 4-digit years.
                   FALSE  The number of digits in dates is defined by DATEFMT. Write dates using 2-digit years; the century value of the date will be lost.
  Default value:   FALSE
  Example:         TRUE


Utilities Reference


Chapter

Focus

The first part of this chapter contains some important information about entering utility commands and referring to database objects. This information is common to all Unify DataServer utilities, whether they are DBA tools for loading data into the database and defining defaults and legal values, diagnostic tools, or recovery tools. The remainder of the chapter consists of reference pages for Unify DataServer utilities and some of the files that are used by specific utilities. The utilities and files are listed in alphabetical order.


Using Unify DataServer Utilities


Most of the Unify DataServer utilities are operating system command-level utilities that help you maintain the database application after you have developed it. The Unify DataServer utilities help you maintain a database in the following ways:
  The database load utility, dbld, enables you to load rows of data from an input file to an existing database table. This means that if you want to transfer data from a different format database, you can write the data to an ASCII or binary file, then load the data to the Unify DataServer table using dbld. You can also use dbld to perform bulk updates.
  The Data Integrity Subsystem (DIS) enables you to define default values and legal values for columns in database tables. DIS defaults and legal values are defined in an ASCII source file from which the Data Integrity Subsystem compiler, disc, produces an internal system description file. The compiled DIS file produced by disc controls all aspects of data integrity.
  The database diagnostic tools are utilities that enable you to display database statistics for B-trees (btstats), hash tables (htstats), links (lnkstats), database tables (tblstats), and volumes (volstats). You can use the information provided by these utilities to diagnose problems or tune application performance. Additional diagnostic tools help you manage shared memory, daemons, and processes.
  The database recovery tools enable you to ensure database consistency should program, system, or media failures occur. Unify DataServer recovery tools include a backup utility (budb) and a utility to restore the database from a backup (redb). Related recovery utilities help you manage the physical and logical logs.
  Additional diagnostic tools help you to determine which configuration variable settings would optimize your application's performance.
These utilities, with any of the Unify DataServer interfaces, should help you obtain top performance from your database application.

Other utilities are compilation and load utilities that allow you to run an Interactive SQL/A, Embedded SQL/A, or RHLI application. The SQL command starts an Interactive SQL/A session. The EPP command preprocesses an Embedded SQL/A application. The ucc and sqla.ld commands compile and load the embedded application after it is preprocessed. The ucc and uld commands compile and load RHLI applications.

Entering Utility Commands

The Unify DataServer utilities share a uniform set of command line options. Because all utilities use a similar syntax, you should be able to become familiar with a specific utility's options much faster. You can enter command line options in any order on the command line. Usually, you can also separate the command line option value from the command line argument; for example, the command line options -H1 and -H 1 are identical. (When specifying -O options, however, you cannot include a space.) Unify DataServer recognizes the following standard command line options:
-version   If this command line argument is the only command line argument, the utility displays its version information and terminates. Otherwise, the option is ignored and the utility continues.

-x object_name   A lowercase letter argument followed by an object name. The object name must be a valid name that you are authorized to bind with. You can use the following object name arguments:
  b   B-tree_name
  c   column_name
  d   database_name
  h   hash_table_name
  l   link_name
  m   menu_name
  s   schema_name
  u   user_name
  v   volume_name
Example: -h itemhash.

-X object_ID   An uppercase letter argument followed by an object ID number. The object ID number must be a valid identifier that you are authorized to access. You can use the following object ID number arguments:
  B   B-tree_ID
  C   column_ID
  D   database_ID
  H   hash_table_ID
  L   link_ID
  M   menu_ID
  S   schema_ID
  U   user_ID
  V   volume_ID
Example: -H 1.

-Onamed_option=value   Specifies a value for a named option. The set of named options is specific to each utility, but typically a named option is the name of a configuration variable. The -O option enables you to override the configuration variable's current value with a command line argument value. You must use a separate -O option for each named option. The -O and the option name must be entered with no embedded spaces, for example, -Ologblk=20k.

Referring to Database Objects

You can refer to Unify DataServer database object names in several ways. If a database object is unique to the database and is in the current schema, you can refer to the object merely by its name. If the database object is not in the current schema, you must prefix the name of the schema that contains the database object to the object name.

For example, if the current schema is payroll, but you want to use a table named emp in the admin schema, refer to the table by using the following format: admin.emp If the database contains several objects that have the same name, you must qualify each object name until it is unique.

Specifying Column Names


To specify column names, use this naming format: [ [schema_name.]table_name.]column_name For example, if three tables (emp, dept, and payroll) have columns named Number, you must indicate which Number column you are using, as in dept.Number or emp.Number. To refer to database columns, follow these rules:
If you want to specify:                                                              Then use this format:
A column that is unique to the current schema (authorization)                        column_name
A column that is unique to the specified table in the current schema (authorization) table_name.column_name
A column that is unique to the specified table in the specified schema (authorization) schema_name.table_name.column_name

Specifying Table and Schema Names


To refer to tables and schemas (authorizations), follow these rules:

If you want to specify:                            Then use this format:
A table that is unique to the current schema       table_name
A table that is unique to the specified schema     schema_name.table_name
A schema that is unique to the database            schema_name

If you do not explicitly specify a schema name, the default is your current schema name. If you do not explicitly specify the table name, the column must have a unique name in the schema.

Specifying Database Names


You can specify a database name by using part or all of the fully-qualified database name, which includes the path and other qualifiers. A fully-qualified database name has the following components:

  [[dbhost]:[dbuser]:][dbpath][dbname]

  dbhost   database machine (DBHOST part)
  dbuser   user identity (DBUSER part)
  dbpath   database path (DBPATH part)
  dbname   database name (DBNAME part)

The separator character between the database machine name (dbhost) and the user identity (dbuser), and between the user identity and the database path (dbpath), is a colon (:). If you do not include the separators in the fully-qualified name, the database machine name and user identity are assumed to be missing. If either the machine name or user identity is omitted, you must include the associated colon. If you do omit any portion of the name, the missing value is retrieved from the appropriate configuration variable. If a component of the fully-qualified database name is specified, the specified value overrides the value in the associated configuration variable. The following paragraphs describe the components of the fully-qualified database name, and what values are used if a component is not specified.
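For example (the host, user, path, and database names below are hypothetical), a remote database might be specified as

  hosta:mary:/usr/DB/parts.db

while a local database in a known directory needs only the path and name:

  /usr/DB/parts.db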

dbhost

Network node name that is used to remotely log in to the database machine. You specify a name for dbhost only if you are accessing a remote database. If you are accessing a local machine, set dbhost to "." or leave it empty; otherwise remote database access facilities are used for the machine identified by the database machine name, even if it is on the same machine as the client. If dbhost is omitted, the value specified by the DBHOST configuration variable is used. If DBHOST has no value, local database access is used.

dbuser

The user name and encrypted password. The ucrypt utility can be used to initialize the password portion of this value. You specify a value for dbuser only if you are accessing a remote database. If omitted, the value specified by the DBUSER configuration variable is used. If DBUSER has no value, then the current user name with no password is used.

dbpath

Directory path, excluding the file name, for the database file (dbname.db) and associated files such as variable-length text and binary files (.dbv). The dbpath value is used with the dbname value to find the database files. The application runs faster if dbpath is set to an absolute path, such as /usr/DB, instead of a relative path, such as . or DB. The directory specified by dbpath can contain only one database (only one dbname.db). This is because each database requires its own dbname.err (may be linked to errlog) and its own B-tree index files (named by convention as btnnn.idx). If omitted, the value specified by the DBPATH configuration variable is used. If you are accessing a local database and you do not specify the database path name in either the fully-qualified database name or the DBPATH configuration variable, the path name defaults to the current directory.


dbname

Name of the database root file, excluding the directory path. The base of the specified name (the portion preceding the suffix) is used to build the database configuration file name. For example, if dbname is my_database.db, the database files are named my_database.db, my_database.cf, my_database.jn, and so forth. The specified name cannot contain slashes (/) or backslashes (\). If omitted, the value specified by the DBNAME configuration variable is used. If DBNAME has no value, the value file is used.

Additional Help About                                         See
DBNAME, DBPATH, DBHOST, and DBUSER configuration variables    The configuration variable descriptions in this manual
Remote access to a database                                   Unify/Net Guide

The following table shows how a local database name is derived with different combinations of settings for dbname, DBPATH, and DBNAME.

db_name specification   DBPATH      DBNAME     Database name
(char *) 0              undefined   undefined  ./file.db
(char *) 0              undefined   poof.db    ./poof.db
(char *) 0              /tmp        undefined  /tmp/file.db
(char *) 0              /tmp        poof.db    /tmp/poof.db
fofo.db                 undefined   undefined  ./fofo.db
fofo.db                 undefined   poof.db    ./fofo.db
fofo.db                 /tmp        undefined  /tmp/fofo.db
fofo.db                 /tmp        poof.db    /tmp/fofo.db
/usr/fofo.db            undefined   undefined  /usr/fofo.db
/usr/fofo.db            undefined   poof.db    /usr/fofo.db
/usr/fofo.db            /tmp        undefined  /usr/fofo.db
/usr/fofo.db            /tmp        poof.db    /usr/fofo.db

Using the Name Cache

When Unify DataServer performs operations on database objects, it uses the object ID to locate the object. This unique numeric identifier is automatically generated by Unify DataServer. In SQL/A, the object ID is maintained by Unify DataServer and is not accessible to you. In the RHLI, you control how the object ID is retrieved and used when manipulating database objects. You can specify that the object IDs are kept in a name cache, a section of local process memory. Using a name cache to store the object IDs improves the performance of your application because accessing an object is faster. If the object IDs are not stored in the name cache, they are stored with the actual objects in the data dictionary. The name cache is controlled by the NAMECACHE configuration variable.

Additional Help About                      See
The NAMECACHE configuration variable       The NAMECACHE description in this manual
Setting configuration variables            Configuring Database Environments in Unify DataServer: Managing a Database


Utilities Descriptions
The utilities described in this section are listed in alphabetic order.

Format

Each Unify DataServer utility description is divided into several parts:


Header                                         Name of the utility.
Summary statement (following the header bar)   A brief statement of the utility's purpose.
Syntax                                         Syntax for the utility.
Arguments                                      Required and optional arguments that are used when calling the utility.
Description                                    Utility usage and any special conditions and notes.
Related Configuration Variables                A list of configuration variables that affect the operation of the utility. Configuration variables that affect all utilities, such as DBPATH or UNIFY, are usually not listed here.
Security                                       Permissions required to execute the utility if other than a regular user.
See Also                                       Cross references to related information.

Syntax Conventions

The utility syntax descriptions follow these conventions:


BOLD     Boldface words and characters are keywords. A keyword is usually a required word that must be entered exactly as shown.
italic   Italicized words are substitution strings. Substitute the item described in the Arguments section for the italicized word.
| |      A set of vertical bars surrounds a stack of alternative arguments from which you can choose one. The bars are not part of the command.
[ ]      Square brackets enclose an optional element. The brackets are not part of the command.
( )      Boldface parentheses are part of the command, and like keywords, must be typed in exactly as shown.
{ }      Curly braces enclose items that can be repeated.
...      Ellipsis points indicate that you may repeat the immediately preceding item any number of times, as needed. The immediately preceding item may be enclosed in curly braces.


addcgp
Adding a column group

Syntax
addcgp [-u | -p | -n] [-sschema_name | -Sschema_ID] table_name col_grp_name column1 [column2 ... columnN]

Arguments

-u              Enforce the uniqueness constraint.
-p              The group is the primary key (implies uniqueness).
-n              No (normal) options specified.
-s schema_name  Specifies the name of the schema that contains the table.
-S schema_ID    Specifies the identifier of the schema that contains the table. (This is also known as the authorization ID.)
table_name      Name of the table that contains the columns to be treated as a group.
col_grp_name    The name that is to be used to refer to the column group.
column1 column2 ... columnN   The names, separated by spaces, of the columns that make up the column group.

Description

The addcgp utility creates a column group from a list of existing columns. After you create the column group, you can refer to the group of columns by the column group name. By creating a column group, applications that use the CHLI can process the columns just as in Unify DataServer ELS. Unify DataServer does not have COMB type columns.


Tip If you are converting a UNIFY 5.0 database to Unify DataServer, name your Unify DataServer column groups the same names as your old UNIFY 5.0 COMB fields. This enables you to continue using UNIFY 5.0 SQL scripts that refer to COMB fields, without having to change the names of the columns being selected.

Related Configuration Variables


Before you can run addcgp, check the value of these configuration variables:
DBPATH   Directory search path, without the file name, of the application database file and associated files.
DBNAME   Simple file name of the database file, for example, file.db.

Example

The following example creates a column group named group1 from two columns in the orders table:
addcgp orders group1 ord_number ord_company
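The following sketch (the table and column group names are hypothetical) uses the -p option to create a column group that also serves as the table's primary key:

  addcgp -p orders ord_key ord_number ord_company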


bldcmf
Building compiled message files

Syntax

bldcmf [-Oidxint=index_interval] unify.msg

Arguments

-Oidxint=index_interval Specifies the number of messages per hash bucket. The index_interval is used to compute the number of hash table buckets required to index the message file. If index_interval is not specified, then the default number of messages is used: 10. unify.msg Indicates the source message file that bldcmf is to compile.

Description

The bldcmf utility rebuilds the compiled message (unify.cmf) file from the message source file. Both the source file and the compiled message file are stored in the release lib directory (the directory that is specified by the UNIFY configuration variable). The compiled version of the message file improves the performance of message retrieval. You can use the index_interval to adjust the retrieval performance for the modified message file. The message file is an ASCII file that contains the text of Unify DataServer informational error messages. If you want to display alternative messages, such as translated versions, you can edit the file with a text editor such as vi. The following example is a small excerpt from the unify.msg file:
...
10554!Out of memory trying to add a column descriptor (#)
10555!Out of memory, couldn't declare cursor %s (#)

The digits before the exclamation point are the message number; the text that follows is the message text.

Edit only the message text, not the message number. Unify DataServer uses the message number to access the appropriate message.

A message can be up to 140 characters in length. The message text can wrap. The text can include special characters, such as \t for a tab and \n for a new-line character. You can also include control characters in the message. Gaps in the message file (nonexistent messages) require hash table bucket entries. For example, if the messages range from message number 30 to message number 100, but the message file actually contains only 50 messages (and the default index value of 10 is used), 7 hash table buckets are allocated: (100 - 30) / 10 = 7 Warning Before you edit the message file, save backup copies of the source file and the compiled message file. For example, copy unify.msg to a file named unify.msg1, and copy unify.cmf to a file named unify.cmf1. Make sure that you edit the original file, not the backup.

Example

This example rebuilds the unify.cmf file with an index interval of 15:
bldcmf -Oidxint=15 unify.msg

The following excerpt from unify.msg has been modified so that a bell will ring when any of the Out of memory errors occur:
^G is a single character inserted by the text editor when you press CTRL G
...
10554!^GOut of memory trying to add a column descriptor (#)
10555!^GOut of memory, couldn't declare cursor %s (#)
10556!^GOut of memory, couldn't declare host variable %s (#)
10557!^GOut of memory building projected column descriptors (#).\n
10558!^G%s: out of memory building project descriptors (#).
10559!^GOut of memory: skipping to next file (#)
10560!Only the view table can be in FROM clause (#).\n
10561!Required privilege for %s does not exist (#).\n
10562!Required privilege for %s.%s does not exist (#).\n

...


btstats
B-tree statistic collection

Syntax

btstats [-d dbname] [-s schema_name] [-S schema_ID] [-t table_name] [-T table_ID] [-b btree_name] [-B btree_ID]

Arguments

-d dbname

Specifies the fully-qualified database name of the database that contains the B-trees. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described under Specifying Database Names, earlier in this chapter.

-s schema_name Specifies the name of the schema that contains the B-trees. -S schema_ID Specifies the identifier of the schema that contains the B-trees. (This is also known as the authorization ID). -t table_name Specifies the name of the database table that is associated with the B-tree index. -T table_ID Specifies the table ID of the database table that is associated with the B-tree index.

-b btree_name Specifies the name of the B-tree for which to display statistics. -B btree_ID Specifies the identifier of the B-tree for which to display statistics.

The evaluation of the btstats options is governed by these rules:

If this option is included:   Then btstats displays statistics for:
-b or -B                      The specified B-trees
-t or -T                      All B-trees on the specified table
-s or -S                      All B-trees on all tables in the specified schema
None of the above             All B-trees in the database

Description

The btstats utility displays B-tree statistics. The btstats report contains the following information:

Database          The complete directory search path and file name of the database that is being accessed.
Index Name        A name that identifies the B-tree to the user.
Index ID          A unique number that identifies the B-tree to Unify DataServer.
Table Name        The fully qualified name of the table that contains the columns that are indexed by the B-tree.
Index File Name   The name of the file that contains the B-tree index if the B-tree is stored in a separate file instead of in a database volume. Otherwise, the resource ID for the B-tree is displayed.
Index Size        The size in bytes of the B-tree index file. For indexes larger than 2 GB, the size is in blocks.
Entry Count       The number of elements in the B-tree. An element corresponds to a row in a unique B-tree index. In a B-tree index, an element corresponds to each unique occurrence of a column.
Options           A message that indicates whether the B-tree allows duplicates.
Order             A flag that indicates whether the specified column is indexed in ascending (A) or descending (D) order.
Column Name       The name of the column at the specified position in the index key.

Example

The following command displays statistics for the B-trees named btree1 and btree2 and the B-tree that has a B-tree ID of 11:
btstats -b btree1 -b btree2 -B 11

The following command displays information about all B-trees associated with a table named manf:

btstats -t manf

The following command displays statistics for all tables in a database named /doc/home/examples/file.db:
btstats -d /doc/home/examples/file.db

The btstats utility responds by displaying the following report:


B-Tree Index Statistics Report
==============================
Date:      Fri Jan 8 11:31:29 1993
Data Base: /doc/home/examples/file.db

Index Name:       CO_KEY
Index ID:         1
Table Name:       SQL_books.COMPANY
Index File Name:  Resource 97
Index Size:       4096 bytes
Entry Count:      10
Option:           Duplicates Allowed

Order   Column Name
A       CO_KEY

See Also

htstats, lnkstats, tblstats, and volstats utilities


budb
Backing up the database

Syntax

budb [-ddbname] [-Ojrnlwait=number_of_seconds] [-Obudb_util=string ]

Arguments

-d dbname

Specifies the fully-qualified database name of the database that is to be backed up. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described under Specifying Database Names, earlier in this chapter.

-Ojrnlwait=number_of_seconds   Indicates that budb is to wait the specified number of seconds for the log daemon (lgdmn) to complete writing to the journal before asking the user whether the backup should be aborted. If omitted, the default (300 seconds) is used. If the number of seconds specified is 0, budb waits indefinitely for lgdmn to finish.
-Obudb_util=string   A string that contains the name of the script that performs the backup. The string can include arguments to the script. The string is passed to the operating system's system() function for execution. The budb utility executes the script in the $DBPATH directory.

Description

The budb utility backs up a database to the backup device specified by the BUDEV configuration variable. If you have enabled automatic journal management, budb first deletes the oldest existing backup and journal volumes that would cause the number of those files to exceed MAXBUJRNS. Before you run budb, be sure to:
  Save the database exception configuration file (dbname.cf).
  Make sure that the configuration variables listed under Related Configuration Variables, later in this section, are set correctly.
  Alert the system operator to monitor the operator's message device.
If you are backing up to a tape (TYPE = MOUNT on the BUDEV configuration variable), budb prompts the database operator to mount a backup tape before the backup can begin. At this time, dismount the journal and mount the backup tape.


As it starts, budb displays this initial message: Waiting for log daemon to complete journalling (number_of_seconds sec ) ... While waiting for lgdmn to complete, budb displays tick marks to indicate that it is waiting. If the number of seconds specified by jrnlwait elapses and lgdmn has still not finished journalling, budb displays this message: Journal wait timeout occurred. Do you want to abort backup (y/n)? If you answer yes, budb aborts the backup. If you answer no, budb waits again for the same number of seconds. If you specify -Ojrnlwait=0 for zero number of seconds, budb does not prompt you, but waits on lgdmn indefinitely. When journalling completes, (and after the operator dismounts the journal and mounts the backup tape, if necessary) budb does a complete physical backup of all database volumes, B-trees, and Data Integrity Subsystem (DIS) files. During the backup, budb displays the current setting for the number of reader processes (NBUPROC) and the number of backup device buffers (NBUBUF). If the journal associated with the database is a device of type AUTO, journalling starts automatically when the backup is complete. The new journal file includes the database sequence number. Otherwise, after budb finishes, you must do one of the following to restart database journalling: If the journal associated with the database is of type MOUNT, the operator must mount the transaction journal so a new journal can be started. budb prompts for this action. If the journal associated with the database is a device of type NOMOUNT, journalling does not automatically start after the backup completes. To initiate journalling, the operator must send the bureply confirm message to the log daemon. On some systems, the write to a backup tape fails when the end of the tape is reached. If this I/O error occurs, more than just the current write operation may have failed. In fact, an indeterminate number of previous writes may have failed as well. Therefore, if you are using a system that does not use the EOT (end-of-tape) feature, you must set the MAXBLK value in the BUDEV and JOURNAL configuration parameters so that the end of tape is never reached. Set MAXBLK to a number of blocks that all tapes can hold. The backup that is created by budb is readable only by the current version of Unify DataServer. If you install a new release of Unify DataServer, it cannot read a backup created by this version.

Working with Large Backup Files


If your database size is greater than 2 GB, the operating system may need some configuration before it can create the backup files. For example, on Solaris, if the /etc/mount output doesn't show the word largefiles, a file can only grow to 2 GB. The backup will fail in this case. When backing up large files, backups need to be made often enough to avoid having to roll forward so many transactions that time and/or resources are overtaxed should you need to restore. Since it contains modified database file pages, it may grow very large while irma is rolling forward transactions from the journal. Be aware that your operating system's ulimit statement and system call may return the value 4194304 (2 GB) when there is actually no such limit on the size of files. However, since it is possible to set the size limit to this same value using ulimit, you may need to research further to know that you will be able to create files greater than 2 GB in size when ulimit returns 4194304.

Using a Third-Party Backup Utility


To use a third-party backup utility, you typically write a script that performs the following:
1. Reads the backup list file (.bul) to determine which files need to be backed up.
2. Executes the backup utility on these files.
3. Checks for errors.
The backup list file is an ASCII text file created by the budb utility when it is executed with the -Obudb_util argument. The backup list file is created in the $DBPATH directory. You use this file to identify which files need to be backed up by the third-party backup utility. The redb utility will also use this file to identify the files that need to be restored. The backup list file lists the files to be backed up and contains columns for file type, offset, length, and file path. Any relative file paths listed are assumed relative to the $DBPATH directory. The following example shows a sample backup list file:
67
I  1052672  0         TxLogmark
L  0        0         file.bul
f  0        4098048   file.lg
f  0        112       file.dis
f  0        3145728   file.dbv
f  0        62910464  file.db
b  0        20971520  /MyDB/vols/vol_1gb.db


The first line consists of a single field, which is the backup version number, in this case 67. The backup version number is the same number that would normally be stored in the file.bu header, and is also stored in all subsequent journal files. The third-party backup utility can use this number to maintain ordering of multiple backups and journal sets. The second line, I 1052672 TxLogmark, should be ignored by the third-party backup utility. It is used only by the redb utility when restoring the backup. The remaining lines each describe a file that must be backed up by the third-party backup utility. The backup list file itself must also be backed up. Each line has four fields, separated by whitespace. The fields, from left to right, contain the following information:
1. A single character that indicates the file type. Options are:
   f   for regular files
   b   for block special files
   c   for character special files
   L   for the .bul file
2. An offset into the file. If the type is b or c, the offset and length (the next field) describe a portion of the raw device to be backed up. If the type is f or L, the offset is not applicable. Offsets and lengths in file.bul are expressed in bytes.
3. A length to be backed up. If the type is b or c, the offset and length describe a portion of the raw device to be backed up. If the type is f or L, the length is not applicable.
4. The path name to the file. The path can be listed as an absolute path or a relative path; relative paths are relative to $DBPATH. For the easiest processing of the file path by your script, the file path should contain no spaces. If the file path does contain spaces, your script will have to handle them correctly. A Unix shell script can concatenate strings in variables set using the read command to handle file paths containing one space.
The script should exit with a 0 value to indicate that the backup was successful. Any other value causes the budb utility to display a message that the operation failed. The message is also logged to file.ral.

The script inherits the stdin/stdout/stderr of the budb utility, and so it can display additional information to the user or respond to user input during the third-party backup processing. When using a third-party utility to perform a backup, be sure that the original file permissions and ownerships are retained throughout the process. During a backup, the budb utility performs a database sync, writes a backup record to the transaction log, and then suspends the log daemon, causing it to close out the final journal for the current backup version. At this point the script is invoked. After the script completes, the log daemon is allowed to begin processing with a new journal and another database sync is performed. Before invoking the script, the budb utility displays a message of the following form to stdout and logs it in file.ral.
Invoking user backup utility <cmd>, where <cmd> is the value of the -Obudb_util option.

After the user utility returns, a message of the following form is displayed and logged.
User backup utility returned N ([error | no error]).

Related Configuration Variables


The backup utility, budb, and the read backup utility, redb, use the following configuration variables:

BUDEV               Backup device information, which consists of these parts:
                      DEVNM=name        Name of the backup device, which can be a file.
                      BLKSZ=number      Number of bytes per block for blocks read from or written to the backup device.
                      DRIVER=name       Executable used as the backup device driver.
                      MAXBLK=number     Number of blocks that can be read from or written to a volume mounted on the backup device.
                      TYPE=device_type  Type of device: auto, mountable, or non-mountable.
BURDSZ              Size of the backup device buffers.
JOURNAL, JOURNAL2   Transaction journal device information. The JOURNAL configuration variable consists of the same parts as BUDEV: DEVNM=name, BLKSZ=number, DRIVER=name, MAXBLK=number, TYPE=device_type.
LOGFM               Turns physical logging on; must be set to TRUE.
LOGRC               Name of the recovery log file.
MAXBUJRNS           Number of backup and journal files stored.
NBUPROC             Number of reader or writer processes active during backup.
OPMSGDEV            Name of the operator message device (the database operator's console), such as /dev/console.
OPNOTIFY            Name of the operator notify utility that sends messages and prompts to the database operator. Disabled during automated journaling.
NBUBUF              Number of backup device buffers.

Security

To run budb, you must have DBA authority.

Example

The following examples illustrate the use of the BUDEV and JOURNAL configuration parameters. In the following example, both the backup database utility and the log daemon (the journaling program) use the same device, which is mountable; MAXBLK=0 indicates unlimited blocks.
BUDEV=DEVNM=/dev/rmt0,TYPE=mount,BLKSZ=32k,MAXBLK=0 JOURNAL=DEVNM=/dev/rmt0,TYPE=mount,BLKSZ=32k,MAXBLK=0

In the following example, the database backup goes to a mountable device such as a tape drive, and the journal goes to a file.

BUDEV=DEVNM=/dev/rmt0,TYPE=mount,BLKSZ=32k,MAXBLK=0 JOURNAL=DEVNM=file.jn,TYPE=nomount,BLKSZ=32k,MAXBLK=0 JOURNAL2=DEVNM=file.jn2,TYPE=nomount,BLKSZ=32k,MAXBLK=0

In the following example, the database backup and the journal go to separate files.
BUDEV=DEVNM=file.bu,TYPE=nomount,BLKSZ=32k,MAXBLK=0 JOURNAL=DEVNM=file.jn,TYPE=nomount,BLKSZ=32k,MAXBLK=0

The following script shows a backup using the BudTool product. Following the script is the sample budb command and the backup template.
#!/bin/sh
# BudToolBackup.sh
# Sample script to backup files using BudTool btbu utility

build_backup_information_file()
{
    # btbu uses backup information file to determine systems, tape, etc
    # create backup information file, starting from template
    cp -f backup_information_template.txt backup_information_file

    # filter out backup number
    read BU_NUM
    echo "Backup Number: $BU_NUM"

    ALL_BU_FILES=""
    # read values from $DBPATH/file.bul, one line at a time
    while read FTYP OFFS FLEN BU_FILE; do
        # transfer file names to backup_information_file,
        # except files of type I
        if [ "${FTYP}" != "I" ]; then
            ALL_BU_FILES="$ALL_BU_FILES $BU_FILE"
        fi
    done
    echo "$ALL_BU_FILES" >> backup_information_file

    # ensure file ends with a blank line
    echo >> backup_information_file
}

# prepare for the backup
build_backup_information_file < $DBPATH/file.bul

# perform the backup
btbu -i backup_information_file > backup.out 2>&1

# check return value and report result
retvalRsh=$?
if [ ${retvalRsh} -eq 0 ]; then
    echo "Backup ok!"
    echo
    exit 0
else
    echo
    echo "Problem backing up files - see file btbu.out for details!"
    echo "Return value: ${retvalRsh}"
    exit 1
fi

The backup_information_template.txt is as follows; the script appends the file names to the template due to the \ in the template:
merc_dlt2,3|backup_dbsys_db @BTADMIN:a, dbsys|root|dbsys:budb|/space|root|1|dump.Solaris|3|0|1|0| \

The following backup command is used:


budb -d/space/db/file.db -Obudb_util=BudToolBackup.sh

See Also

BUDEV, JOURNAL, JOURNAL2, OPMSGDEV, OPNOTIFY, LOGRC, NBUPROC, NBUBUF, BUCHECKSUM, and BURDSZ configuration variables

bureply and redb utilities
The Preventing Data Loss chapter in Unify DataServer: Managing a Database
Please refer to the HP-UX Large Files White Paper for more information if you have an HP-UX system. See the following files on your system, or contact your HP-UX vendor: /usr/share/doc/lg_files.*


bunotify
Backup notification for the operator

Syntax

bunotify

Description

The bunotify utility allows Unify DataServer daemons to send messages to the operator. Through the bunotify utility, the daemons send system communications to the system console (/dev/console) and to the common message file for operator error messages (/tmp/unify.log). To respond to system messages, the operator uses the backup reply utility, bureply. This is the default operator notify utility. You can specify a custom backup notify utility by setting the OPNOTIFY configuration variable. The value of OPNOTIFY at the time the daemons are started is used. If the OPNOTIFY configuration variable is not set, messages are sent to the device or file specified in the OPMSGDEV configuration variable. The default setting for OPMSGDEV is database.msg, where database is the database name specified in DBNAME; for example, file.msg.

Related Configuration Variables


Before you run bunotify, check the values of these configuration variables:
DBPATH   Directory search path, without the file name, of the application database file and associated files.
DBNAME   Simple file name of the database file, for example, file.db.

See Also

bureply utility
OPNOTIFY configuration variable, OPMSGDEV configuration variable


bureply
Backup reply for the operator

Syntax

bureply [-ddbname] [-Oresponse={confirm | deny}]

Arguments

-d dbname
Specifies the fully-qualified database name of the database that is to be backed up. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Oresponse
Used with the confirm or deny keyword.

confirm
Specifies that the log daemon should continue the journal for the specified database.

deny
Specifies that the log daemon should discontinue the journal.

Description

The bureply utility allows the operator to respond to Unify DataServer daemons. The log daemon and budb will also send you error messages if your journal or backup device is a non-empty file. In this case, you must save or delete the contents of the file and recreate the file to zero-length. Confirm this using the bureply utility. Use the bureply utility to activate the daemons after a backup (budb) or recovery (redb).
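For example, after the log daemon has asked whether journaling should continue for a database, the operator might reply from the shell; the database path shown is illustrative:

bureply -d/space/db/file.db -Oresponse=confirm

To discontinue the journal instead, the operator would specify -Oresponse=deny.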

See Also

redb and budb utilities


chkbu
Verify backup files

Syntax

chkbu [-Obu_num=backup# [-Ovolume=volume#] | -Oname=DEVNM]

Arguments

backup#      Backup version number to be verified (file.bu is the default)
volume#      Volume number to be verified
-Obu_num     Specifies a defined backup file
-Oname       Specifies a backup volume to check by name
-Ovolume     Specifies the volume within the backup file as defined by DEVNM

Description

The chkbu utility verifies the integrity of a Unify DataServer backup file. The budb utility writes a checksum to the backup file. The chkbu utility calculates the checksum from the contents of the backup file and compares it to the one budb wrote to the file. The exit status of chkbu indicates the result:

0    Checksums match
1    No checksum expected or found
2    Checksum expected, but not found
3    Checksums do not match
4    Usage error (bad arguments)
5    Error reading backup file

If you do not specify -Obu_num=backup#, chkbu checks file.bu.



The redb utility will validate and report on the checksum when restoring the database. At the end of each volume that makes up the database, it will declare that the checksum validation for each backup file was successful or that it failed. If there is no checksum (from a database created before checksums were generated, for instance), redb will report that fact.
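As a sketch of how the exit status might be tested in a site-written verification script (the backup number is illustrative), a Bourne shell wrapper could report the outcome as follows:

chkbu -Obu_num=3
case $? in
    0) echo "Checksums match" ;;
    1|2) echo "No checksum available for this backup" ;;
    3) echo "Checksum mismatch -- backup may be corrupt" ;;
    *) echo "chkbu could not verify the backup" ;;
esac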

See Also

chkjrn, redb, and budb utilities


chkjrn
Verify journal files

Syntax
chkjrn [-Obu_num=backup# [-Osequence=sequence# [-Ovolume=volume#]] | -Oname=filename]

Arguments

backup#       Backup number to be verified
filename      A file, relative to the current directory, to be checked
sequence#     Sequence number of the journal file to be verified
volume#       Volume in the journal file to be verified
-Obu_num      Specifies a defined backup file version
-Oname        Specifies a specific journal file to check
-Osequence    Specifies a sequence number in the journal file
-Ovolume      Specifies a volume in the journal file

Description

The chkjrn utility verifies the physical integrity of a Unify DataServer journal file. The logdmn utility writes a checksum to the journal file. The chkjrn utility calculates the checksum from the contents of the journal file and compares it to the one logdmn wrote to the file. The exit status of chkjrn indicates the result:

0    Checksums match
1    No checksum found or expected
2    Checksum expected, but not found
3    Checksums do not match
4    Usage error (bad arguments)
5    Error reading the journal file

The lgdmn keeps track of the current checksum for the journal file and writes it to the journal file when it closes it out (and to each volume if there are multiple volumes). If the backup includes several volumes, you may specify which volume to check using -Ovolume=volume#. If you specify -Oname=filename, then the volume defined by filename is evaluated. If you specify -Obu_num and -Osequence, then all volumes for that sequence are checked. If you specify none of the options, chkjrn checks file.jn.

The irma utility will validate and report on the checksum when playing back the journal files. At the end of each volume that makes up the journal file, it will declare that the checksum validation for each backup file was successful or that it failed. In either case, the journal file replay will be considered successful. If there is no checksum (from a database created before checksums were generated, for instance), irma will report that fact.
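For example, to verify every journal volume written for a given backup and sequence, and then to check a single journal file by name (the numbers and file name are illustrative):

chkjrn -Obu_num=3 -Osequence=2
chkjrn -Oname=file.jn
echo "chkjrn exit status: $?"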

See Also

chkbu, redb, and budb utilities


ckunicap
Check unicap file

Syntax

ckunicap [-Oprint] [unicap_file_name]

Arguments

-Oprint

Indicates that ckunicap is to print the unicap file after the informational ckunicap messages.

unicap_file_name Specifies the directory search path and file name of the unicap file to be used. If no path and file name are specified, ckunicap accepts data from standard input.

Description

The ckunicap utility verifies the syntax of the unicap file and reports errors in the format and representation of characters. ckunicap does not verify semantics. Semantic errors are verified and reported by applications that use unicap entries.
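For example, to check a unicap file and print its contents after the informational messages (the path shown is illustrative), or to check entries supplied on standard input:

ckunicap -Oprint /usr/unify/lib/unicap
ckunicap < my_unicap_entries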

See Also

UNIFY configuration variable description, unicap file description in ACCELL/SQL: Setting Up a User Environment


cldmn
Dead process cleanup

Syntax

cldmn [-ddbname] [-O[!]log] [-Osuspend | -Oresume] [-Oexit] [-Ocollect] [-Oclean] [-Ostatus]

Arguments

-d dbname
Specifies the fully-qualified database name of the database to which the processes are attached. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Olog
Toggles logging of notes to the daemon log file: -Olog enables writing to the daemon log; -O!log disables logging. This option has no effect on transaction logging.

-Osuspend
Causes the currently running daemon to keep running, but stop cleaning up.

-Oresume
Causes the currently running daemon to start cleaning up after it has been suspended by specifying -Osuspend.

-Oclean
Forces cleanup for shared memory, regardless of how full the segments are or what the threshold is. The garbage collection is performed during the next cleanup cycle of the cldmn.

-Ostatus
Causes the shell-invoked process to display status information about the dead processes from the cleanup manager shared memory data structures.

-Oexit
Tells the running cldmn daemon to stop executing and exit.

-Ocollect
Forces garbage collection during the next cleanup cycle of the cldmn.

The cldmn utility also responds to three signals:

kill -QUIT
Causes cldmn to stop running. This is the normal quit signal used by shutdb.

kill -HUP
Causes cldmn to display status information from the cleanup manager shared memory structures to the daemon log, if logging is enabled.

kill -USR1
Resets the sleep duration to the minimum sleep time.

Description

The cldmn utility starts a cldmn daemon if there is not one already running. The cldmn daemon cleans up dead processes and performs garbage collection in shared memory. The cldmn is normally started when you start the database and runs in the background. You should not need to start a cldmn.

You can run cldmn from the shell to return information about the shared memory structures. To do this, specify the -Ostatus option on the cldmn command. The returned information is in the following categories:

Clean-up manager
Process ID of the active cldmn.

Status
NORMAL:  None of the following conditions are in effect.
CLNSHM:  Shared memory cleanup will be done next cycle.
COLLECT: Shared memory garbage collection will be done next cycle.
USELOG:  Information about the cldmn's activities is being logged to the daemon log for this daemon.
SUSPND:  The cldmn is suspended.
DOEXIT:  The cldmn will exit on the next cleanup cycle.

Process
Description of a process which has opened the database. This includes the name that the program registered with the uinimsg RHLI function and the process ID.

Agents
A count of the number of database agents working on behalf of the process.

Options
NORMAL:  None of the following conditions are in effect.
GHOST:   A process that no longer exists, but still has other agents working for it.
DEAD:    A dead process which has not been cleaned up yet.

The cldmn logs messages to the daemon log file. The daemon log file is named dmnlogpid and resides in the directory specified by the DMNTMP configuration variable.

For a cldmn that is running, the options do not take effect immediately. Because the cldmn cycles between sleep and wake periods, the options take effect when the cldmn wakes to examine shared memory.

The cldmn agents are the mechanism by which the cldmn determines when a process needs to be cleaned up. If the number of agents goes to zero and the process has not been unregistered normally, the cldmn knows it needs to clean up that process. The cldmn will set the options for a process to DEAD until it finishes cleaning up the process.

Each process that opens the database registers itself with the cldmn. Such a process is given two agents. One agent represents the process itself and can clean up after that process. The second agent is provided so that the process can work on behalf of another database process. Embedded SQL/A backend utilities and other internal procedures use this second agent to indicate the database process that they are doing work for. Second agents are important because the cldmn does not want to clean up a dead process until all the second agents have finished cleaning up their work. The line after each process indicates which database process the second agent is currently working for.

Example

In this example, cldmn is used to observe the status of the dead processes.
cldmn -Ostatus
********************
Clean-up Manager (1505) status 00 ( NORMAL )
Processes attached to database /doc/home/examples/file.db [304671:-32255]
at Thu Jan 14 15:11:55 1993
process lgdmn (1503) agents 2 options 00 ( NORMAL )
        agent for lgdmn (1503)
process cldmn (1505) agents 2 options 00 ( NORMAL )
        agent for cldmn (1505)
process SQL (1642) agents 2 options 00 ( NORMAL )
        agent for SQL (1642)
********************
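As a related sketch, an administrator might pause cleanup during a maintenance window and resume it afterward; because the daemon examines its options only when it wakes, each command takes effect on the next cleanup cycle:

cldmn -Osuspend     # daemon keeps running but stops cleaning up
cldmn -Ostatus      # confirm that the SUSPND option is now set
cldmn -Oresume      # cleanup resumes on the next wake cycle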


See Also

shutdb utility
DMNTMP configuration variable


config
System setup

Syntax
config [-cconfig_file_name] [-ddbname] [-a | -e | -q variable | variable ...]

Arguments

-cconfig_file_name
Indicates that config is to display the values of variables in the specified configuration file. If the -cconfig_file_name option is not specified, config uses the default file. The format of the configuration file is described on pages 181 to 183.

-d dbname
Specifies the fully-qualified database name of the database for which the configuration file values are displayed. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-a
Indicates that config is to display the values of all the configuration variables that are currently set anywhere in the configuration file hierarchy. config also displays the directory search paths and file names of the configuration files in which the variables are set. Variables set at the operating system command level do not display.

-e
Indicates that config is to display the current values of all configuration variables. If the variable is set in a configuration file, config displays the configuration file's directory search path and file name. If the variable is set at the operating system command level, config displays the word environment. Variables that are set at the operating system command level override variables that are set in configuration files.

-q variable
Indicates that config is to display the value of the specified configuration variable. If the specified configuration variable is undefined, config displays two double quotation marks ("") to indicate a null value. The -q option can be used only with a single configuration variable and cannot be used with the -a or -e option.

variable . . .
Indicates that config is to display the value of one or more specified configuration variables. If a variable is overridden by a variable that is set at the operating system command level, that variable's value is displayed as well. If either configuration variable value is numeric, config displays the value in decimal, hexadecimal, and octal.

Description

The config utility displays configuration variable values. If the variable is set in a configuration file, config displays the directory search path and file name of the file. The config utility also recompiles the application configuration file if application.cf has a later time stamp than application.cfg. When you compile the application configuration file, you enhance application performance because configuration variable values can be located quickly in a compiled configuration file. If you are using ACCELL/SQL with your Unify DataServer database, the compiled database configuration file is used as the compiled application configuration file.

Example

These examples show how to use the config utility. The first example calls config to display the values of three configuration variables.
config DBPATH DBNAME SHMOFFSET

config responds:
DBNAME: file.db (environment)
DBPATH: /doc/home/linda/examples (environment)
SHMOFFSET: 1024k (/usr/localnet.../lib/unify.cf)
        1048576  0x100000  04000000

In this output, (environment) marks a variable set at the operating system command level, a configuration file path marks a variable set in that file, and the three trailing numbers are the decimal, hexadecimal, and octal forms of the numeric value.

The second example calls config to display the values of all the configuration variables that are currently set and indicate whether they are set at the operating system command level (in the environment) or in a configuration file.

config -e

config responds:
Variable Name    Configuration Value    CNO    Configuration File
...
CURR             ,.2<$                         /usr/maint/unify.cf
DBHOST           .                             command line
DBNAME           file.db                       environment
DBPATH           /doc/home/examples            environment
...

The headers on the config report contain this information:

Variable Name
Name of the configuration variable for which a value is displayed.

Configuration Value
Current value of the configuration variable.

CNO
An asterisk (*) in this column means that you cannot override the value of this configuration variable by setting the variable at the operating system command level.

Configuration File
Name of the file that contains the configuration variable. If the variable is set at the operating system command level, this column contains environment.

If the database is remote, for all values retrieved from the server machine, config prefixes the name of the server machine to the value in the Configuration File column. For example, suppose your system configuration showed the following parameter settings:

DBHOST=dbrus
DBPATH=/server/db
DBNAME=file.db
CLIENTINFO=/client/info
UNIFY=/client/release/lib

Running config would yield the results shown below:


Variable Name    Configuration Value    CNO    Configuration File
...
SEPARATOR        |                             /client/info/file.cf
SHELL            /bin/csh                      environment
SHMDIR           /tmp                   *      dbrus:/rel/lib/unify.cf
SHMFULL          75                            dbrus:environment
SHMKEY           0xcabfee               *      dbrus:/server/db/file.cf
SPOOLER          lpr                           /client/release/lib/unify.cf
...

The dbrus: prefix in the Configuration File column entries for the SHMDIR, SHMFULL, and SHMKEY configuration parameters indicates that these are server parameters on the dbrus machine.

See Also

Configuration Source File
The chapter Configuring Database Environments in Unify DataServer: Managing a Database


Configuration Source File


System setup; used with config utility

Syntax

configuration_variable_name = configuration_variable_value

[# comment]

Arguments

configuration_variable_name
Specifies the name of a Unify DataServer configuration variable.

configuration_variable_value
Specifies a valid value for the specified configuration variable. The value can contain abbreviations and defined values that are recognized by Unify DataServer. String values must be enclosed in quotation marks, as in "string_value".

Configuration variable values can include these abbreviations:

k    1024
b    512
h    hours
m    minutes
s    seconds

Configuration variable values can include several defined values, which make the configuration variable settings more readable and more maintainable:
TRUE         1
FALSE        0
YES          1
NO           0
undefined    Indicates that the defined configuration variable is to be treated as if it were undefined. A value of undefined enables you to use the software default configuration variable value when another default is defined in the configuration file.

comment

Contains a descriptive comment, which must be preceded by the pound sign character (#). You can place comments on lines by themselves or on the same line as the configuration variable setting.

Description

The configuration source file, dbname.cf, is an ASCII file that can be edited by using a text editor. dbname is the name of the database file, without the .db file name suffix. When opening the database, Unify DataServer compiles the configuration source file to create the compiled configuration file, which is named dbname.cfg. The dbname.cf file is created by copying lines from either the master configuration file (unify.cf) or the production configuration file (prod.cf) to the dbname.cf file and editing the variable settings that must be changed.

Example

In the following configuration file excerpt, comments are used to indicate valid configuration variable values and their descriptions.
. . .
# Read-only (non over-ridable from the environment) Indicator
CONFIG_READONLY = FALSE    # FALSE = certain variables may be over-ridden
                           # TRUE  = only DBPATH & DBNAME may be over-ridden

# Master Unify DataServer database Directory (if NOT $UNIFY/lib/db)
DBNAME = file.db           # Unify DataServer root-file filename
                           # (cannot include path name)

# Other database Filenames
OPMSGDEV = file.msg        # operator message device name (can be path name)
CFGFILE = file.cfg         # Compiled Configuration Filename (can be a path name)
. . .

The following example shows how a configuration variable value can be changed in a configuration file. For example, to change the database name to accounts.db, you need only change the file.db on the DBNAME line to accounts.db as shown in the following diagram:
Original DBNAME entry:

DBNAME = file.db       # DataServer root-file filename
                       # (cannot include path name)

Changed DBNAME entry:

DBNAME = accounts.db   # DataServer root-file filename
                       # (cannot include path name)

See Also

For more information about:                                      See:

The configuration files that can be used with a Unify           The chapter Configuring Database Environments in
DataServer database                                              Unify DataServer: Managing a Database

The search priority followed by Unify DataServer to search      The chapter Configuring Database Environments in
for a configuration variable value                               Unify DataServer: Managing a Database

Configuration variables and their default and valid values      Configuration Variable Reference in this manual


creatdb
Creating a database

Syntax

creatdb [-ddbpath/dbname] [-vvolume_name]
        [-Ovoloff=volume_offset_in_bytes] [-Ovollen=volume_length_in_bytes]
        [-Ovolmode=volume_access_mode] [-Ovolowner=volume_owner_ID]
        [-Ovolgroup=volume_group_ID] [-Ovolpgsz=volume_volpgsz]
        [-Ovolfile=volume_file_name] [-Ofiletype={regular | contig | device}]
        [-Odesc=database_description] [-Oprivate] [-Opreallocate]
        [-Oforce] [-Ooverwrite]

Arguments

-ddbpath/dbname
Specifies the path and name of the database to be created. If omitted, the values specified by the DBNAME and DBPATH configuration variables are used.

-vvolume_name
Specifies the simple name, excluding directory search path, of the root volume that contains the database.

-Ovoloff=volume_offset_in_bytes
For devices only, specifies the offset at which the root volume data storage starts in the physical media where the volume is to be stored. Use the voloff value when you are using a single physical device for two or more volumes. The offset of the second volume is the address immediately following the first volume, and so on. The volume offset is the number of bytes from the start of the disk partition. The offset for the root volume must always be zero (0), although a volume offset of zero does not necessarily identify the root volume.

-Ovollen=volume_length_in_bytes
Specifies the length of the root volume in bytes. The volume length is used by volume storage routines to determine how much data can fit in the disk partition.


The minimum volume length is 4096 bytes. However, if the volume is a UNIX regular file, you can specify a volume length of zero (0) to indicate that the volume length is unlimited. A volume length can be unlimited only if the volume is a UNIX regular file.

-Ovolmode=volume_access_mode
Indicates the UNIX file access modes that all database files will be created with. The user can change these modes manually by using the UNIX chmod command.

-Ovolowner=volume_owner_ID
Specifies the UNIX file owner ID that all database files will be created with. The user can change the owner ID manually by using the UNIX chown command.

-Ovolgroup=volume_group_ID
Specifies the UNIX file group ID that all database files will be created with. The user can change the group ID manually using the UNIX chgrp command.

-Ovolfile=volume_file_name
Specifies the full directory search path and file name of the file or device that contains the database volume.

-Ofiletype=
Indicates the type of file, by using one of these keywords:

regular    A regular file. Regular files have slower response time on retrieval and insertion than preallocated files or devices, but regular files can grow dynamically.
contig     Reserved for future use.
device     A device (character or block-special file). Device volumes provide the fastest retrieval and insertion times. Device volumes must be preallocated.

-Odesc=database_description
Describes the database.

-Oprivate
Indicates that creatdb is to create a private database.

-Opreallocate
Indicates that the root volume is to be preallocated. This option may be used even if the file is a character-special or contiguous file.

-Oforce
Indicates that Unify DataServer is to force newly-created segments to the disk as they are added.

-Ooverwrite

Indicates that creatdb is to write over the existing database. The database must be shut down before you execute creatdb with the -Ooverwrite option. If you try to execute creatdb -Ooverwrite while the database is running, you will get an error.

Description

The creatdb utility creates a database. When you create a database, several events take place:

- The database and related files (dbname.db, dbname.jn, and so forth) are created in the current directory.
- You are given DBA authority for the database (and thereby can access the database).
- The PUBLIC schema is implicitly created in the database and becomes the initial default schema.
- The root volume of the database is created.

The database usually requires at least 15 concurrently active file descriptors per process. More file descriptors are needed at runtime depending on the number of B-tree indexes, volumes, and kinds of queries in your application. Make sure your operating system limit for the maximum number of file descriptors is large enough to include these files and any files needed by other processes.

Before you create a database, decide what type of volumes the database will use. Volumes can be devices or files. If a volume is a device, you must perform the following:

- With root permission, use the UNIX mknod utility to build the dbname.db file in the $DBPATH directory.
- Initialize the volume by using the mkvol utility.

When estimating the size of the root volume, remember that the root volume must contain the data dictionary tables for the database.

The creatdb -Ooverwrite option operates only on databases created in the same software release as creatdb. If you try to use the creatdb -Ooverwrite option on a database that was created under a different release of the software, the following message displays:

Software version does not match database version

In this case, you must execute mkvol before you can execute creatdb with the -Ooverwrite option. If you have problems executing creatdb with the -Ooverwrite option on a database that was created using the same software release as creatdb, you must remove the database and recreate it.


For a volume that is a regular file, be sure that the maximum size for a regular file on your operating system is adequate. To increase the maximum size, modify the UNIX ulimit variable.

Example

To create a database on a raw device, specify the following:


creatdb -dmy_database -vmy_volume -Ooverwrite -Ofiletype=device -Ovollen=9625600 -Ovolfile=/db/file.db

You do not need to specify an offset because the entire partition is being used for this volume. Note that the length specified is less than the 94 megabytes that should be available on the partition.
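By contrast, a database stored in a UNIX regular file does not have to be preallocated and can grow dynamically, so a fixed volume length is not required. The following command is an illustrative sketch; the path and volume name are hypothetical:

creatdb -d/projects/test/file.db -vtest_vol -Ofiletype=regular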

See Also

Creating and Removing Database Objects in Unify DataServer: Writing Interactive SQL/A Queries
Your operating system manual
mkvol utility


dbcnv
Database conversion

Syntax

dbcnv

Arguments

None.

Description

The dbcnv utility converts the database identified by the DBPATH and DBNAME configuration variables (or their defaults) from its current version to the version that matches the current Unify DataServer release. The dbcnv utility resides in the diag directory of the release.

Warning: Always back up your database before using it with a different release, whether or not a database conversion is required. Should this utility fail, you may have to restore the database from a backup.

When you install a new Unify DataServer release or update it on the same platform, you may not be able to use it to operate a pre-existing database unless you convert the database using dbcnv. You must convert each database separately, and run the dbcnv utility on the system where the database resides. Make sure you use shutdb with the Unify DataServer release that matches the database before changing your environment to point to the newer release. The database must not be running when you use dbcnv.

As dbcnv runs, it indicates its progress through messages. Sometimes the last output it produces gives very important instructions describing additional steps which you must complete before the conversion is finished.

Warning: If these steps are listed and are not completed, symptoms such as severe performance problems, inability to start up the database, or even access method corruption may result.

188

Utilities Reference

Error messages related to dbcnv


When attempting to start a database with a newer release, prior to running the conversion using dbcnv, the following error indicates that a conversion to the new database version is required:

The software version does not match the database version. (6)

Use the file command to view the database version number of a particular database if it can access the Unify DataServer file magic entries. In the following, the database version is 37:

$ file -m $UNIFY/../install/magic $DBPATH/file.db
/db/file.db: Unify data base Root volume Version 37 Native Machine

Example

You can see the database version number that a given Unify DataServer product release requires and uses by using the -version argument with most of the executables. This example shows that the release requires database version 38:

$ SQL -version
Database Version: 38
Revision: 7.0B
...

Whenever you switch to a new or updated Unify DataServer release, even if using this utility is not required, it is strongly recommended that you relink all executables that use (link in) the release libraries. The dbcnv utility will list the major steps involved in the conversion, and give you an opportunity to discontinue the conversion before making any changes to the current database. Following is an example of the output produced by dbcnv when converting a database with version 37, made in the de locale, to the current version 38:
$ dbcnv
dbcnv: Checking the data base version.
Making sure the database is not running...
Converting database /my/db/file.db


Conversion from database version 37 to 38 involves the following steps:
  - correct the internal representation of string data in access methods.
    This step requires a rebuild of certain access methods for this db locale (de)
Would you like to continue and perform the conversion now? (Y/N) y
Opening the database...
Updating the database version to 38...
There are 2 access methods incompatible with this version of DataServer.
These access methods will be dropped. A SQL script to recreate them has been
written to $DBPATH/remkam.sql (/my/db/remkam.sql), and should be run after
the conversion.
Syncing the database...
/ab/ds7/unstable/opus/bin/dbcnv conversion successfully completed.
$

Second example of converting a database using dbcnv


$ dbcnv
dbcnv: Checking the data base version.
Making sure the database is not running...
Converting database /myother/db/file.db
Conversion from database version 35 to 38 involves the following steps:
  - add a locale identifier in the root volume of the DB.
  - prepare the database to support multiple volumes for variable length data.
    This could take a while depending on the amount of vdata in the database.
  - update database header for this database locale (C).
To complete the conversion of this database you must shut down the database,
run diag/remkview to recreate all views, and then drop and recreate all of
your Btree indexes.
Would you like to continue and perform the conversion now? (Y/N) y
Opening the database...
Updating the database version to 36...
Updating the database version to 37...
Updating the database version to 38...
Syncing the database...
/prods/ds/bin/dbcnv conversion successfully completed.

NOTE: To complete the conversion of this database you must shut down the database, run diag/remkview, and then drop and recreate all of your Btree indexes. You can ignore the "unable to access required file" error log entries that are written when you shut down. The Btree indexes will reside in database volumes when they are recreated. If they are not already in database volumes, you may need to add or resize volumes before recreating your Btrees.


dbdmn
Master server startup

Syntax

At the operating system prompt or in the system rc file:

dbdmn& [-Onokeepalive | -Ohost=hostname]

In the system inittab file:

dbdmn [-Onokeepalive | -Ohost=hostname]

Arguments

-Ohost=hostname
Directs the dbdmn to listen to a specified network machine or the IP address associated with a secondary network card (when the host machine has two or more). To listen to all of them, use -Ohost=0.0.0.0. If you want to listen to more than one, but not all, of multiple cards, start a different dbdmn for each card and specify a separate hostname for each daemon.

-Onokeepalive
Disables polling for active clients. Typically, after a client has been idle for a specific time period, the master server checks for an active client connection. The time period is determined by a configuration setting specific to the operating system. If it is found that the client is no longer active, the server terminates the connection.

Description

The dbdmn utility starts the master server executable. Before remote users can access databases on the server machine, the master server executable, dbdmn, must be running on the server machine. To give remote users the ability to access databases on the server machine, start the master server executable using one of the following methods:

- start dbdmn manually
- include dbdmn in the system rc file
- include dbdmn in the system inittab file

Running dbdmn Manually


To start the master server manually, login as root on the server machine and enter the dbdmn command followed by an ampersand (&).

The master server then starts running in the background. When you start the master server manually, you can stop the master server using the kill -QUIT command. If the master server is stopped after being started manually, it must be restarted manually by root.

The advantage of starting the master server manually is that it gives the system administrator complete control over the master server. The master server is up when you start it up and down when you bring it down. If you want to allow any user to start up the master server, dbdmn must be a setuid-root executable. For more information on using setuid, see your operating system manuals.

Tip: If the master server shuts down while running, you must wait a few moments before trying to restart it. You must wait to restart because the master server always runs under the same name, and sometimes the operating system takes a few moments to detect that the previous master server process is no longer running. If you restart the master server too soon, you will get an error message telling you another process is running under that server name.

Including dbdmn in the System rc File


To start the master server with the other daemons started by the automatic reboot process (rc), include dbdmn in the system rc file. When you start the master server from the rc file, you can stop dbdmn using the kill -QUIT command. If the master server is stopped after being started by rc, you must restart dbdmn manually. The advantage of starting the master server from the rc file is that the master server starts up automatically whenever you boot the system. (For more information on using rc, see your operating system manuals.)

Including dbdmn in the System inittab File


To start the master server with the other processes started by init, include dbdmn in the inittab file. You can create the inittab entry so that if the master server dies, it will automatically be restarted. In this case a kill -QUIT command would just stop the current master server and let a new one restart. (For more information on using inittab, see your operating system manuals.)

The advantage of starting the master server from the inittab file is that if the master server stops it will automatically restart without your having to reboot the system. A possible disadvantage is that to permanently stop the master server, you must edit the inittab file.

Errors can occur while the master server is running. For example, the master server will fail if the UNIFY configuration variable is not set, or the user trying to run dbdmn is not logged in as root and dbdmn is not set up as a setuid-root executable. All error messages display on standard error. If you do not want error messages to display on standard error, you can redirect output to another device or to a file. If you are running dbdmn from the system rc file you can redirect output to /dev/console using the following command:
dbdmn 2> /dev/console &
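Returning to the inittab method, an entry using the respawn action restarts the master server automatically if it dies. The following line is a sketch only; the identifier, run levels, and installation path are illustrative and system-dependent:

udbd:234:respawn:/usr/unify/bin/dbdmn -Onokeepalive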

Port Used by dbdmn


The UNIFYPORT configuration variable can be used to specify the port.

See Also

Unify/Net Guide
UNIFYPORT configuration variable in Unify DataServer: Configuration Variable and Utility Reference


dbld
Database loading

Syntax

dbld [-b] [-sschema_name | -Sschema_ID] [-q] [-Osep_char=character]
     [-Onosync] [-cN] [-rM] [-u | -n] [-ddbname]
     table_name input_file specification_file [exception_file]

Arguments

-b
Indicates that the input file is in binary format. Because the binary file's column values are in a fixed format, the input file needs no separator between column values. You can produce a binary input file by using SQL/A.

-sschema_name
Sets the current schema name to schema_name.

-Sschema_ID
Sets the current schema authorization ID to schema_ID.

-q
Indicates that dbld can use sequential access to enforce uniqueness on a column if no other access method exists.

Warning: Using sequential access slows down table loading significantly; for example, loading as few as 1000 rows can take an hour. As each row is loaded, all the previously loaded rows must be read to make sure that the new row does not violate the uniqueness constraint. As a result, by the time row number 1000 is added, dbld has to read through 999 rows.

-Osep_char=character
Specifies the separator character between column values in the input_file and between column names in the specification_file. If omitted, the separator character is determined by the separator character that appears in the specification_file.

-Onosync
Indicates that the file manager daemon should not perform file system synchronization after dbld has loaded the data. By default, file system synchronization is performed to ensure that the data is physically stored in the database.

If you include the -Onosync option, the data loads faster. However, if you use -Onosync and transaction logging has been disabled (the configuration parameter LOGTX is FALSE), the data can be lost if the system crashes before file system synchronization has been performed.

-cN
Specifies checkpoints as a number (N) of inserted rows. At a checkpoint, the current transaction is committed and an entry to dbname.err is made. If you do not specify the checkpoint frequency, dbld determines an appropriate checkpoint frequency.

-rM
Indicates that dbld is to start at the Mth line in the input_file instead of at line 1. The first line is line 1. Use this option to restart dbld after a failure.

-u
Specifies update-only mode (no inserts) and requires that the first column of the input file is the row ID. dbld uses this value to find the row, then updates the row using the remaining values. To use the default insert-and-update mode, specify neither -u nor -n. In default mode, the database table must have a primary key column. If the table does not have a primary key, you must use either the update-only mode (-u) or insert-only mode (-n).

-n
Specifies insert-only mode (no updates). To use the default insert-and-update mode, specify neither -u nor -n. In default mode, the database table must have a primary key column. If the table does not have a primary key, you must use either the update-only mode (-u) or insert-only mode (-n).

-d dbname
Specifies the fully-qualified database name of the database to be loaded. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

table_name
Specifies the name of the database table to be loaded.

input_file
Specifies the name of the ASCII or binary input file that contains the data to be loaded. For an ASCII file, the rows in the input file contain columns that are separated from each other by separator characters. Each row is terminated by a newline character.

To specify that dbld is to accept input from standard input instead of an input file, enter a dash (-) instead of a file name. The format of the input_file is described starting on page 198.

specification_file
Specifies the name of the ASCII specification file that contains the list of column names in the database table. The columns must be separated from each other by separator characters. The specification file column names must be in the same order as the corresponding columns in the input file. The format of the specification_file is described starting on page 201.

exception_file
Specifies the name of the exception file to which dbld is to report duplicate key information and other diagnostic messages. If you do not specify an exception file, dbld directs messages to the error log file. Fatal errors are not written to the exception file; instead, they are directed to the standard error device.

Description

The dbld utility bulk loads or bulk updates rows in an existing database table by reading data from an input file (input_file) into the database table columns specified by the specification file (specification_file). When dbld loads the data, it also updates the associated access methods, validates column entries, and fills in default values for columns that are not included in the specification file. The dbld utility also prints error messages for the error codes returned by lower level modules. If a fatal error occurs, dbld exits after appropriate cleanup.

The dbld utility can be executed in three modes:

- insert-only (-n option)
- update-only (-u option)
- insert-and-update (default, when neither -n nor -u option is specified)

In the default insert-and-update mode, dbld requires that the table to be loaded has a primary key column. The primary key column name must be included in the specification file. If the primary key consists of a group of columns, all of the columns in the key must appear in the input and specification files.

In insert-and-update mode, dbld inserts a new database row if the primary key column value in the input file is unique (it does not exist in the database table). If the key column in the input file does exist in the database table, dbld updates the corresponding database row. If the table does not have a primary key, you must use either the update-only mode or insert-only mode.


Warning The performance of dbld may be severely affected by the presence of BEFORE UPDATE or BEFORE INSERT triggers on the table. If the work done by the triggers is not essential for the load operation you can increase the performance of dbld by dropping the triggers before the load and then recreating them when dbld completes.
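As an illustration of a typical invocation, the following command loads a table in insert-only mode and sends duplicate-key diagnostics to an exception file. The table, file names, and separator shown are hypothetical:

dbld -n -Osep_char='|' employees employees.unl employees.spec employees.exc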

Related Configuration Variables


Before you run dbld, check the value of these configuration variables:
DBPATH
Directory search path, without the file name, of the application database file and associated files.

DBNAME
Simple file name of the database file, for example, file.db.

Status Values

0    The load completed successfully.

1    An option or argument was invalid or some required information was missing. In this case, you must correct the invalid options or arguments and add required information that is missing.

If the load terminated in an unknown state, as in a crash, you must restart dbld to complete the unfinished operation.

See Also

For more information about:                                  See:

The format of the input file                                 dbld Input File on pages 198 through 200

The format of the specification file                         dbld Specification File on pages 201 through 202

Obtaining the row ID for a row                               Selecting Rows and Columns in Unify DataServer:
                                                             Writing Interactive SQL/A Queries, or
                                                             Unify DataServer: RHLI Reference

The LOGTX, NULLCH, and SEPARATOR configuration variables     The configuration variable descriptions in this manual

Checkpoint frequency                                         Bulk Inserting or Updating Rows in
                                                             Unify DataServer: Managing a Database

dbld Input File


Input to dbld

Description

The input file for dbld can be an ASCII file or a binary file. You can create an ASCII input file by using a text editor or by running an application program. The only constraint is that the data in the input file must be organized in the same way as the database table's column names in the specification file. You can create a binary file by using the BINARY keyword with the SELECT statement in Interactive SQL/A.

Each row in an ASCII input file contains a group of columns delimited by separator characters and terminated by a newline character. (If you use SQL/A to create an input file, SQL/A inserts the separator characters and newlines.) The separator character that you specify in the input file must be the same as the separator used in the specification file. You do not have to use the same separator after every column listed in the file, but the separator in the input file must be the same as the separator used in the corresponding position in the specification file.

If the input file is a binary file, the column values are in a fixed format, so the input file needs no separator between the column values. You usually produce a binary input file by using SQL/A. Valid separator characters are listed on page 201.

Input File Format for Insert-and-Update or Insert-Only Mode


For the default insert-and-update mode or for insert-only mode, the format of the input file must exactly match the format of the specification file.
Specification file format:    Column_1_name|Column_2_name|Column_3_name|...
Input file format:            Column_1_data|Column_2_data|Column_3_data|...

(The vertical bar shown here is the separator character.)

Input File Format for Update-only Mode


The input file format for update-only mode is slightly different from that of insert-only mode because the first column of each input file row must contain the row ID. In update-only mode, dbld uses the row ID in the input file to locate the database table row to be updated.

To obtain the row ID, use the RHLI or the ROWID variable in SQL/A. The specification file contains only the names of the columns to be updated, while each row of the input file contains the row ID, followed by the columns of data to use to update the rows. When adding the row ID value, you must add a separator character. Because this separator character does not appear in the specification file, it must be the separator character specified with the -Osep_char option or the SEPARATOR configuration variable.
Specification file format:    Column_1_name|Column_2_name|Column_3_name|...
Input file format:            ROWID|Column_1_data|Column_2_data|Column_3_data|...

(The separator that follows the ROWID value is the additional separator character.)

Input File Data


You can include any type of data in the input file, provided you follow these rules:

- Every instance of a special character must be escaped to dbld by using the escape character ( \ ).
- To specify a null value in the input file data, use the character specified by the NULLCH configuration variable. The default setting for NULLCH is *. The UNIFY 4.0 compatibility archive-format dates (**/**/**) can also be used as input values for dbld.
- When creating an input file that contains data from TEXT or BINARY columns, always escape newline characters by using the escape character ( \ ). If newline characters are not escaped, dbld interprets the newline characters as column terminators. For example, if you use a file created with vi to load a TEXT column, dbld attempts to create a new database row for each line in the vi file. (Lines in a vi file are terminated by newline characters.)
- For BINARY data type columns, you must also escape the separator characters by using the escape character ( \ ).
- BYTE column data can be loaded by dbld from either an ASCII or a binary file.

Example

In the following example of an ASCII format input file, the row key is the last column, not the first.
Moehr|70|clerk|6400|950.00|0.00|5700
Colucci|40|salesrep|2200|2500.00|3000.00|6700
Amato|40|salesrep|6200|2000.00|750.00|5800
Fiorella|70|clerk|5700|800.00|0.00|68000
Brown|60|engineer|1300|6000.00|0.00|5900

If the input file is to be used in update-only mode, the first value must be the ROWID:
2345|Moehr|70|clerk|6400|950.00|0.00|5700
2346|Colucci|40|salesrep|2200|2500.00|3000.00|6700
2347|Amato|40|salesrep|6200|2000.00|750.00|5800
2348|Fiorella|70|clerk|5700|800.00|0.00|68000
2349|Brown|60|engineer|1300|6000.00|0.00|5900

See Also

For more information about:                                  See:

dbld utility syntax and usage                                dbld on pages 194 through 197

The format of the specification file                         dbld Specification File on pages 201 through 202

Obtaining the row ID for an input file                       Selecting Rows and Columns in Unify DataServer:
                                                             Writing Interactive SQL/A Queries, or
                                                             Unify DataServer: RHLI Reference

Creating an input file in Interactive SQL/A                  Selecting Data Into a File in Unify DataServer:
                                                             Writing Interactive SQL/A Queries

Escaping special characters                                  The REGLIKE predicate description in
                                                             Unify DataServer: Writing Interactive SQL/A Queries

Using the NULLCH or SEPARATOR configuration variable         The NULLCH or SEPARATOR description in this manual



dbld Specification File


Input to dbld

Description

The specification file describes the format of the table to be loaded, by listing the names of columns that have values in the input file. The specification file column names do not have to be in any special order, but the order must be the same as the order of the corresponding columns in the input file.

In the specification file, the column names must be separated by a separator character. You do not have to use the same separator after every column listed in the file, but the separator in the specification file determines which separator you must use in the corresponding location in the input file. The following table shows valid and invalid separator characters.

Valid and Invalid Separator Characters

Valid characters:
    Any characters not in the list of invalid characters (for example: | , / +)

Invalid characters:
    Letters (A-Z and a-z)
    Digits (0-9)
    Space character
    Underscore character
    Any character that appears in a string column in the input file
    Reserved characters (use with caution only): ^ ! # @ * _

If you must use one of the reserved characters, you are responsible for the results. For example, the asterisk (*) is the default null character. If you use the asterisk, make sure that you specify a different null character by setting the NULLCH configuration variable.

Example

In the following example of a specification file, the row key is the last column, not the first:
Name|Dept_No|Job|Manager_Num|Salary|Commission|Number


See Also

For more information about:                 See:

dbld utility syntax and usage               dbld on pages 194 through 197

The format of the input file                dbld Input File on pages 198 through 200


dbname
Database name echoing

Syntax
dbname [-ddbname]

[-Ohost] [-Ouser] [-Opath] [-Oname] [-Ocheck] [-Ois_remote] [-Olang]

Arguments

-d dbname
Specifies the fully-qualified database name of the database. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Ohost
Indicates that dbname is to echo the database machine name. This option can be used alone to determine whether the database is local or remote. If the database is local, dbname displays ".::". If the database is remote, dbname displays "host::".

-Ouser
Indicates that dbname is to echo the database user identity.

-Opath
Indicates that dbname is to echo the database directory search path.

-Oname
Indicates that dbname is to echo the database file name.

-Ocheck
Prevents dbname from echoing the database file name to standard output. If this option is specified, -Oname, -Opath, -Ouser, and -Ohost are ignored.

-Ois_remote
Indicates that dbname is to exit with a value of 0 (TRUE) if the database is remote or exit with a value of 1 (FALSE) if the database is local. Any other exit value indicates that an error has occurred. If the -Ois_remote option is omitted and dbname successfully parses the database name, the exit value is 0. If the option is omitted and an error occurs, the exit value is greater than 1.

-Olang
Indicates that dbname is to display the collating sequence locale as specified by the LANG configuration variable.

Description

The dbname utility echoes the name of the database to standard output, usually the screen. If all four of the -Ohost, -Ouser, -Opath, and -Oname options are omitted, dbname echoes all parts of the database name string in this format:

[[dbhost]:[dbuser]:][dbpath/][dbname]

Here dbhost is the database machine (-Ohost), dbuser is the user identity (-Ouser), dbpath is the database path (-Opath), and dbname is the database file name (-Oname).

You can also use the dbname utility to provide the database name for shell scripts and other utilities when you don't know the current database name.

Example

The following example uses the -Ocheck option to ensure that a Bourne shell script operates only on a local database.
dbname -Ocheck -Ois_remote
ACCESS=$?
if test $ACCESS -ne 1 ; then
    echo "A local database must be specified."
    exit 1
fi

This example causes dbname to echo the fully-qualified file name of the database root volume:
ROOTVOL=`dbname -Opath -Oname`

Either of the following two examples cause dbname to echo the fully-expanded database name string:
dbname -Ohost -Ouser -Opath -Oname
dbname

This example causes the current database (whatever its name is) to be synchronized:
fmdmn -F `dbname`

See Also

DBHOST, DBNAME, and DBPATH configuration variables

Unify/Net Guide

DIS Source File


Input to the disc compiler

Syntax

table [schema.]table {
    column1: { $configuration_variable |
               column2 |
               numeric_constant |
               string |
               unique |
               CURRENT_ROW |
               today |
               hour } [ , [ + | - ] offset ]
             | checkreference | checkref [ table | (table.column) ]
        [ , [not] { < numeric_constant |
                    > numeric_constant |
                    numeric_constant - numeric_constant |
                    >= numeric_constant |
                    <= numeric_constant |
                    numeric_constant |
                    reg_expression } , ... ]
        [ERROR=string]
}...
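To make the syntax concrete before the arguments are described, a minimal DIS source file entry might look like the following sketch. The table name, column names, values, and message are hypothetical, and the exact clauses a given schema needs will differ:

table employees {
    hire_date:  today
    dept_no:    checkreference
    job:        'clerk', 'clerk', 'salesrep', 'engineer' ERROR='Unknown job title'
    salary:     0, >= 0, <= 99999.99 ERROR='Salary out of range'
}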

Arguments

schema
Specifies the name of the schema that contains the table specified by table. If omitted, the current schema is used.

table
Specifies the name of the table that contains column1.

column1
Specifies the name of a column for which to define default or legal values in the current table. The column cannot be a column group name, a TEXT column, or a BINARY column.

configuration_variable
Specifies the name of any configuration variable in the Unify DataServer configuration files or any other configuration variable that is set at the operating system command level. The value of the configuration variable is used as the default value.

column2
Specifies the name of another column in the current table. This lets you specify a default value to be obtained from another column. The data type of the column must match the data type of column1.

numeric_constant
Specifies a value of type SMALLINT, INTEGER, NUMERIC, HUGE INTEGER, DECIMAL, DATE, HUGE DATE, TIME, REAL, DOUBLE PRECISION, AMOUNT, HUGE AMOUNT, CURRENCY, or FLOAT. The data type of the constant must match the data type of the current column. For CURRENCY constants, do not specify the $ as you would in Interactive SQL/A.

string
(Character columns only) Specifies a sequence of printable characters. String default values are stored literally in the column. You can include the single quotation mark character (') in the specified string if you precede the quotation mark character by a backslash character, as in a\'bc. This also applies to the backslash character, for example, a\\bc.

unique
(Numeric columns only) Sets the default value of the column to the next unique number in a series of numbers maintained for the entire database. The unique attribute assigns a unique value to the column. The column must be of type INTEGER, NUMERIC (5-9), HUGE INTEGER, or DECIMAL. Use the unique attribute to assign unique values to numeric type columns that are not primary key columns. To set up unique default values for a table's primary key column, create the table with a direct key column and let Unify DataServer calculate the unique key value.

CURRENT_ROW
(Numeric columns only) Sets the default value of the column to the current ROWID. Use the CURRENT_ROW attribute to assign unique values to long type columns (INTEGER, NUMERIC (5-9), and DECIMAL) that are not primary key columns.

today
(DATE/HUGE DATE columns only) Sets the default value of a DATE or HUGE DATE column to the current month, day, and year.

hour
(TIME columns only) Sets the default value of a TIME column to the current hour and minutes, in the format hh:mm.

offset
(Numeric, DATE, and TIME columns only) Specifies the amount that the default value is reduced or increased before the value is stored in the column. The following table lists the offset increments allowed for each data type:

Column Offsets

Data Type                                        Offset                               Comments
NUMERIC, INT, SMALLINT, DECIMAL, HUGE INTEGER    Integer
DATE, HUGE DATE                                  Day                                  Offset can be applied to today
TIME                                             Minute                               Offset can be applied to hour
AMOUNT, HUGE AMOUNT, CURRENCY                    Any legal value for the data type
FLOAT, DOUBLE PRECISION, REAL                    Float


The following examples show how you can offset default values.

If the offset is:                     Then the value of column1 is:

today,+1                              One day from the day the row is entered in the database table
hour,+3                               Three minutes from the time the row is entered in the database table
unique,+10000 (to a unique column)    A unique value starting with 10001, instead of 1
Sal_Amt,+2.75                         The value of Sal_Amt in the current row, plus 2.75, where Sal_Amt is the
                                      name of an AMOUNT column in the current table

checkreference or checkref
    Enforces referential integrity by allowing the user to enter a value for a column only if the value matches an existing unique column value in a referenced table. The entered value must also pass all other domain checks. Columns validated by using the checkreference clause cannot contain null values, because the referenced column must be unique (not null). In this, the checkreference differs from a link index because the child of a link can be null, whereas the child referenced in a checkreference clause cannot be null.

    The format of the checkreference clause depends on whether column1 has a link index defined for it. The following table shows how to use the checkreference clause.

If column1:                                                     Then use:
Has a link index to a unique column in the database             The default checkreference format:
                                                                checkreference
Does not have a link index to a unique column, but the          The table format:
corresponding column in table is a primary key column           checkreference table
Does not have a link index to a unique column, and the          The table.column format:
corresponding column in table is not a primary key column       checkreference (table.column)

not

Negates all the legal values that follow this keyword. The user can enter any value except the values listed.

ERROR=string

(ACCELL/SQL applications only) Specifies an error message that is displayed when the user makes a mistake entering data to a column that has default or legal values defined.

reg_expression
    (CHARACTER legal values only) A character string that contains a regular expression. The value of the regular expression determines which character values can be entered. The following table shows the special characters that can be used in regular expression notation.

Regular Expression Notation

This symbol or notation:    Matches:
x              an occurrence of the single character x where x is not a special symbol.
\x             an occurrence of any non-numeric character including the special symbols: ., *, [, ], \, { }. The backslash is the escape character.
.              any single character except the new-line (\n). For example, ABC. matches any string of at least four characters that starts with ABC: ABCD, ABCE, and so forth.
*              zero or more occurrences of the single character or regular expression that precedes the asterisk. For example, AA*C matches any string that begins with A followed by zero or more As and ending with C: AC, AAC, AAAAAC, and so forth.
^              as the first character of a regular expression, matches a string at the beginning of a column. The regular expression ^expression$ matches the string that is indicated by expression only when the string fills the entire column length.
$              as the last character of a regular expression, matches a string that is at the end of a line. (\n matches a new-line.)
[xxx...x]      any single character that is listed between the brackets. For example, [ABCD] matches the uppercase letters A, B, C, or D.
[x-x]          any character that falls within the range of characters. For example, [a-z] matches any lower case letter. The symbols (., *, { }) do not have their special meanings within the character class: you do not have to escape them with the backslash.
[^xxx...x]     any character (except the new-line character) that is not in the list.
\{min,max\}    following a regular expression, matches any number of min through max successive occurrences of the preceding regular expression. The regular expression r\{m\} matches exactly m occurrences; the expression r\{m,\} matches at least m occurrences.
\(r\)...\n     the regular expression that is the nth grouped regular expression enclosed in \( and \). For example, \(A\)B\(C\)D\2 is the concatenation of regular expressions ABCDC.

Because the backslash character is an SQL/A escape character, you must precede a backslash within a regular expression by a backslash. Enclose expressions that use regular expressions in single quotation marks, for example, '[A-Z]' or 'Mac.*'. To include regular expression notation characters in a character string, escape the characters with two backslash characters, as in \\([0-9]\\) for a number of the form (999).

Description

The DIS source file is an ASCII text file that is used to specify legal and default values for columns in a database. The DIS compiler (disc) processes the file and produces a compiled version of the information. The compiled file is named dbname.dis. You must specify at least one default values section or legal values section.

In the DIS source file, there can be one or more table sections. Each table section is introduced by the table keyword. The table section establishes the current table. In each table section, there can be one or more column sections, introduced by the column name. All column sections in a table section must belong to the current table.


Use the column section to set a default value and legal values for a column. Enclose the default value in parentheses, then list all legal values separated by commas. You can continue statements on additional lines by putting a backslash (\) at the end of the line. You can insert a comment by introducing the comment with a pound sign (#).

If a default value is defined for a column, Unify DataServer fills in the default value when the user adds rows to the table. If you do not define a default value for a column, and the user does not enter a value for the column when adding a row to the table, Unify DataServer enters a null value to the column. If legal values are specified for a column and the user enters a value that matches any legal value in this statement, Unify DataServer stores the user-entered value in the column.

The syntax of a column section depends on whether the column is a NUMERIC, CHARACTER, or DATE/TIME type column, as described in the following sections.
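For instance, a column section that sets both a default value and a short legal-value list might look like the following sketch; the table name, column name, and values are hypothetical and are used only to show the general form (default in parentheses, legal values separated by commas, a continuation line, and a comment):

# hypothetical column section: default is 'A'; legal values are 'A', 'I', and 'P'
table accounts
status: ('A'), 'A', 'I', \
        'P'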

Numeric Columns
Numeric type columns include AMOUNT and HUGE AMOUNT, DECIMAL, FLOAT, NUMERIC, INTEGER, SMALLINT, REAL, and DOUBLE PRECISION columns. Default values for a numeric column can be determined from a configuration variable, another column, a constant, or assigned by Unify DataServer (unique or CURRENT_ROW). An offset can be applied to the default value. Legal values for a numeric column can be determined by a list of constants, a range of constants (-), or constants that satisfy one of the following relational expressions: greater than (>), greater than or equal to (>=), less than (<), less than or equal to (<=) . You can use the NOT keyword to indicate that the legal value list excludes specified values.
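A hypothetical numeric column section, combining a unique default with an offset and a plain list of legal constants (the table, columns, and values are invented for illustration):

table invoices
invoice_no: (unique,+1000)
discount:   (0), 0, 5, 10, 15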

Character Columns
For character type columns, default and legal values must be enclosed in single quotation marks ('), as in 'CA' or 'AZ'. When using DIS to specify legal values for character columns, the values are interpreted as regular expressions. You can use the NOT keyword to indicate that the legal value list excludes the specified values.

A BYTE column is treated the same as a character column except that BYTE columns do not support regular expressions when specifying a legal values list. (You must specify a constant). Character columns cannot have offsets applied to the default value. To ensure that legal values are interpreted correctly, anchor the start of each regular expression with a circumflex (^) and anchor the end with a dollar sign ($), as shown in the following example:
'^[ABC]$'
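A hypothetical character column section that anchors its regular expression in this way (the column name, default, and pattern are invented for illustration):

# state code: exactly two upper-case letters, default 'CA'
state: ('CA'), '^[A-Z][A-Z]$'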

Date and Time Columns


Date and time columns include columns of type DATE, HUGE DATE, and TIME. The default for date and time columns can be determined by the current date and time by specifying the today or hour keywords. An offset can be applied (see page 207).
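A hypothetical date column section whose default is the day after the row is entered (the column name is invented; the today,+1 offset follows the offset examples above):

ship_date: (today,+1)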

Example

The following example shows a sample database and a sample DIS source file. The DIS source file controls default and legal values, and displays an error message for the columns in the database, which has the following design:

Table Name    Column Name    Data Type    Display Length
employee      emp_number     NUMERIC      7
              emp_name       CHARACTER    30
              dept_number    NUMERIC      9
department    dept_number    NUMERIC      9
              location       CHARACTER    30
The example database has the following DIS source file:



In this source file, each table keyword introduces a table section, each column name introduces a column section, a backslash at the end of a line is a line continuation indicator, and lines beginning with a pound sign (#) are comments.

table employee
emp_number:  (unique,+200)
dept_number: (20), checkreference (department.dept_number)
# Error message to be used by ACCELL/SQL
ERROR='This is an invalid department number; reenter number.'
emp_name:    '^[A-Z][a-z][a-z]*'

table department
dept_number: (20), 20, 30, 40, 55, 60, 69, 72, 80, 85, 88, \
             90, 103, 105, 110, 120
# Error message to be used by ACCELL/SQL
ERROR='This is an invalid department number; reenter number.'
location:    ($BLOC), 'Chicago', 'New York', 'Bismark', 'Portland', \
             'San Francisco', 'Miami', 'Dallas', 'Denver', \
             'Salt Lake City', 'Phoenix', 'Lexington'

The dept_number column statement for the employee table contains the checkreference keyword. Consequently, the only values that users can enter for dept_number are existing values of department.dept_number. A table name is entered for the dept_number checkreference definition because the dept_number column in employee does not have a link between it and the dept_number column in department. The message 'This is an invalid department number ...' is displayed if the user enters a department number that does not exist in the department table. The regular expression specified for emp_name restricts the legal values for employee names to start with an upper case letter followed by one or more lower case letters.


Other examples of regular expressions are shown in the following table:

This expression:
    Matches:
[0-9].[^0-9]\\{2\\}
    a digit, followed by any single character, then two non-digit characters
[0-9]\\{3\\}-[0-9]\\{2\\}-[0-9]\\{4\\}
    a social security number of the form 999-99-9999
((\\{0,1\\}([2-9][01][1-9]))\\{0,1\\})*([2-9][0-9]\\{3\\})[ -]\\{0,1\\}[0-9]\\{4\\}
    any legal phone number, with or without area code and punctuation (parentheses, spaces, hyphen)
[A-Z][0-9][A-Z]....[A-C]
    an inventory number of the general form A1B9999C or A1BCCCCA
abc
    the strings abcdef and defabc
^abc
    the string abcdef but not defabc
abc$
    the string defabc but not abcdef
^abc$
    the string abc only

See Also

For more information

About                                  See
Configuration variables                Configuration variable descriptions in this manual
Creating a direct access table         Direct Access Columns in the Creating and Removing Database Objects chapter in Unify DataServer: Writing Interactive SQL/A Queries
Compiling the DIS source file          disc utility description on pages 215 through 217
Creating link indexes                  CREATE LINK INDEX statement in Unify DataServer: SQL/A Reference


disc
Data Integrity Subsystem compiler

Syntax

To compile a DIS source file, use this standard syntax: disc source_file_name [-ddbname] [-sschema_name] [-Sschema_ID] [-Oerror=error_file_name] [-Ocompatible] [-Overbose] To create an empty DIS file, use this alternative syntax: disc -Oinitialize [-ddbname]

Arguments

The disc utility has the following standard arguments: source_file_name Specifies the name of the file that contains the DIS definitions. The format of this file is described on pages 205 to 214. -d dbname Specifies the fully-qualified database name of the database. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-s schema_name
    Specifies the name of the schema.

-S schema_ID
    Specifies the identifier of the schema. (This is also known as the authorization ID.)

-Oerror=error_file_name
    Specifies a file where you want the compiler to send error messages, instead of to the standard error device.

-Ocompatible
    Specifies compatibility mode for source files that were created with UNIFY 5.0 or an earlier release. In earlier releases, these files were used to define Advanced Field Attributes (AFA).

In compatibility mode, source files have the following differences:

    The record type keyword is used instead of table.
    The offset increment value for AMOUNT is always a cent.
    The offset increment for FLOAT is an integer.

-Overbose
    Tells disc to display the name of each table and column definition and whether the definition is being added or dropped. For example, if your old DIS source file referenced tables A, B, and C, and your new DIS source file references tables B, C, and D, disc reports that it is dropping A and adding D.

-Oinitialize
    Indicates that disc is to create an empty DIS file. If you specify this option, all other options are ignored.

Warning You must shut down the database before using the disc utility. The disc utility requires a lock on the database. Also, the AUTOSTART configuration variable must be set to TRUE, unless you are just creating an empty DIS file (the -Oinitialize option). If you execute disc with the -Oinitialize option and the dbname.dis file exists, the new, empty dbname.dis file overwrites the existing file.

Description

When used with the standard syntax, the disc utility compiles a DIS source file to create a DIS file. When used with the alternative syntax, disc creates an empty DIS file.

Compiling a DIS Source File


To compile a DIS source file, use the standard disc utility syntax. The disc utility compiles the specified source file to create a file named dbname.dis, where dbname is the name of the current database or the database specified on the command line. disc creates dbname.dis in the directory specified by DBPATH or in the current directory if DBPATH is not set. If a dbname.dis file exists, disc overwrites it with the newly compiled dbname.dis. Because the new dbname.dis file overwrites the old file, if you were to compile two DIS files at the same time, the last one to finish compiling would determine the contents of dbname.dis.

Tip If you name your DIS source file something like dbname.ds, you can tell at a glance which file is the DIS source file.
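For example, a hypothetical compile run, assuming the database has already been shut down, the source file is named file.ds as suggested in the tip, and the database is selected by the current configuration variables:

disc file.ds -Overbose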

Creating an Empty DIS File


The database must always have a DIS file whether or not you have defined defaults and legal values in a DIS source file. To create an empty DIS file, use the alternative disc utility syntax. The disc utility creates an empty DIS file named dbname.dis. This enables you to replace a DIS file that has been destroyed or removed.
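A hypothetical invocation that rebuilds an empty DIS file for the database selected by the configuration variables:

disc -Oinitialize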

See Also

For more information

About                                  See
The format of the DIS source file      DIS Source File on pages 205 through 214
Shutting down the database             Shutting Down the Database in Unify DataServer: Managing a Database
                                       shutdb utility


drpobj
Drop object

Description

The drpobj utility is used to drop a database object that has no name. The utility is located in the diag directory of the release. Use drpobj only when recommended by technical support.


dumpdd
Dump data definition

Syntax

dumpdd [-ddbname] [-sschema_name] [-Sschema ID] [-ttable_name] [-Ttable_ID] [-Overbose] [-Oall] [-Ooutput=file_name] [-Onoschema] [-Onoset] [-Oinsert] [-Oselect] [-Odrop] [-Oprivileges] [-Oroot_volume] [-Ovolumes] [-Onotables] [-Onolinks] [-Onobtrees] [-Onohash] [-Otable_privileges] [-Oonly_privileges] [-Oschema_privileges] [-Odefault_schema] [-Odba] [-Ouser_access]

Arguments

-d dbname

Specifies the fully-qualified database name of the database. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-sschema_name
    Specifies the name of the schema to be dumped.

-Sschema_ID
    Specifies the identifier of the schema to be dumped.

    By default, the SYS and DBUTIL system schemas cannot be dumped when you use either the -sschema_name or -Sschema_ID option with any other schema name or ID. To dump the SYS and DBUTIL system schemas, you must explicitly specify -sSYS or -sDBUTIL in a separate run of dumpdd.

-ttable_name
    Specifies the name of the table to be dumped. You can specify a table in a schema other than the default either by including both the -s and -t options or by using -tschema.table.

-Ttable_ID
    Specifies the identifier of the table to be dumped. You can specify a table in a schema other than the default either by including both the -s and -t options or by using -tschema.table.

-Overbose
    Indicates that dumpdd is to generate SQL/A information that describes what is occurring as the utility is executed.

-Oall
    Specifies that all information is dumped.

-Oaddcgps
    Causes dumpdd to generate OS command line addcgp commands. If the stdout output is captured in a file, the script file can be used to create the same named column groups in another database if the corresponding CREATE TABLE statements are used first. The following options may be used with -Oaddcgps: [-s<authorization name> | -S<authorization ID>] [-t<table name> | -T<table ID>] [-Overbose] [-Ooutput=<file name>]. Here are a couple of small examples (slightly modified for space reasons) that show dumpdd's new output:

$ dumpdd -sBK -Oaddcgps
# .:wam/...:/db/file.db
# Thu Oct 3 22:02:58 2002
addcgp -n -sBK books bkncg title author
$
$ dumpdd -tbooks
create table books (
    configuration ( segment is 131072, description ),
    index_no numeric (9) not null configuration ( display (10)),
    title char (30) configuration ( display (30)),
    author char (40) not null configuration ( display (40)));
commit work;
create link index ... references ...
commit work;
addcgp -n -sBK books bkncg title author
...
$

The generated addcgp commands will always start with addcgp -n, even when a -p or -u option was specified in the addcgp command that created the named column group. The default dumpdd output includes PRIMARY KEY (col1, ...) and UNIQUE (col1, ...) clauses which may be the result of addcgp -p ... and addcgp -u ... commands. To avoid duplicating these clauses of dumpdd's CREATE TABLE statements, addcgp commands do not have -p or -u options in the dumpdd output.

addcgp commands which would only specify one column for the list of columns to group will not be included in dumpdd's output. For example, addcgp -sPUBLIC mytable onecgpname col1 will not appear. This is because onecgpname will appear in the default dumpdd output as a column synonym. There is no difference between a column synonym and a one-column column group created using addcgp.

-Ooutput=file_name
    Redirects the dumpdd output to the specified file, instead of to standard output. Use the -Ooutput=file_name option when you want to make changes to the database definitions before transferring the information. Edit the output file before using SQL/A to apply the changes.

-Onoschema
    Indicates that dumpdd must not create any schemas. If this option is omitted, dumpdd creates schemas other than PUBLIC by using the SQL/A CREATE SCHEMA statement. If this option is included, the utility uses only the SET CURRENT SCHEMA statement.

-Onoset
    Indicates that dumpdd must not set current schemas. If the -Onoschema option is specified, the default action is to set the current schema by using the SQL/A SET CURRENT SCHEMA statement. If the -Onoset option is included, the user's current schema is not changed.

-Oinsert
    Indicates that dumpdd is to generate SQL/A statements to perform batch inserts into the selected tables. This option cannot be used if either the -Oselect or the -Odrop option is specified. Nor can the option be used on tables that do not have names.

-Oselect
    Indicates that dumpdd is to generate SQL/A statements to perform batch selects from the selected tables. This option cannot be used if either the -Oinsert or the -Odrop option is specified. Nor can the option be used on tables that do not have names.

-Odrop
    Indicates that dumpdd is to generate SQL/A statements to drop the selected tables. This option cannot be used if either the -Oinsert or the -Oselect option is specified. Nor can the option be used on tables that do not have names.

-Oprivileges
    Generate GRANT statements for all existing permissions.

-Oroot_volume
    Generate root volume definition statements only.

-Ovolumes
    Generate CREATE VOLUME statements only.

-Onotables
    Omit CREATE TABLE statements from the output.

-Onolinks
    Omit CREATE LINK INDEX statements from the output.

-Onobtrees
    Omit CREATE BTREE INDEX statements from the output.

-Onohash
    Omit CREATE HASH INDEX statements from the output.

-Ononcgps
    Cause the addcgp command comments to be omitted.

-Otable_privileges
    Generate GRANT ... ON TABLE statements only.

-Oonly_privileges
    Generate GRANT ... ON SCHEMA and TABLE statements only.

-Oschema_privileges
    Generate GRANT ... ON SCHEMA statements only.

-Ouser_access
    Generate GRANT ACCESS ON SCHEMA statements only.

-Ouser_access
    Generate GRANT ACCESS ON SCHEMA and GRANT ACCESS ON DBA statements only.

-Odba
    Generate GRANT DBA AUTHORITY and GRANT SCHEMA AUTHORITY statements only.

-Odefault_schema
    Generate ALTER DEFAULT SCHEMA statements only.

Description

The dumpdd utility writes database definition information for the specified database. Use dumpdd when you want to copy Unify DataServer database applications from machine to machine. The dumpdd utility reads the data definitions from the specified database. By default, dumpdd prints all the schemas, tables, and access methods on the standard output, in a form that SQL/A can read.

If you want to copy Unify DataServer applications from machine to machine, you cannot copy the data dictionary directly, because the machines may handle data differently. Instead, perform the following steps:

1. Select the data from the old database to a file.
2. Run the dumpdd utility.
3. Rebuild the data dictionary on the new machine (by running the dumpdd output through SQL/A).
4. Insert the data in the new database.

The dumpdd utility generates names for those database objects that do not have names. The -Oinsert, -Oselect, and -Odrop options cannot be used on tables that do not have names because SQL/A requires that all objects be named; dumpdd displays a warning message for those tables that are not named. If the target table has one or more columns with binary data, the dumpdd utility may create two lines: one for the non-binary columns, and one for the columns with the binary data. The dumpdd utility can be used as a pipe.
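As a sketch of steps 2 and 3, assuming the source database is identified by a hypothetical $OLDDB variable, the output file name olddb.ddl is invented, and the target database is the one selected by the current DBPATH and DBNAME settings:

dumpdd -d $OLDDB -Ooutput=olddb.ddl
SQL < olddb.ddl

The intermediate file can be edited before it is replayed, as described for the -Ooutput option above; alternatively, the definitions can be piped directly into SQL as shown in the Example section below.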

Related Configuration Variables


Before you run dumpdd, check the value of these configuration variables:

DBPATH    Directory search path, without the file name, of the application database file and associated files.

DBNAME    Simple file name of the database file, for example, file.db.


Example

The following example shows the file created by the dumpdd command with no arguments:
dumpdd

- /doc/home/examples/file.db Fri Jan 19 15:19:21 2001
create schema ISQL_books; /doc/home/examples/file.db
commit work;
create table COMPANY (
    configuration ( segment is 8192, description ),
    CO_KEY numeric (9) not null configuration ( display (10)),
    CO_NAME char (30) not null,
    CO_ADDRESS_1 char (30) not null,
    CO_ADDRESS_2 char (30),
    CO_CITY char (24) not null,
    CO_STATE char (2) not null,
    CO_ZIP_CODE char (9) not null,
    CO_PHONE char (14) not null,
    CO_STOCK_VALUE currency (19,8) configuration ( display (22,8)),
    CO_SALES_REP numeric (9) not null configuration ( display (10)));
commit work;
create btree index CO_KEY on COMPANY ( CO_KEY);
commit work;
...

In the following example, dumpdd is used as a pipe to transfer all the data definitions to a different database.
dumpdd -d $ALTDB | SQL

See Also

SQL command

Selecting Data Into a File and Inserting Rows in Unify DataServer: Writing Interactive SQL/A Queries

EPP
Embedded SQL/A Preprocessor

Syntax
EPP [-s schema] [-c] [-g] [-I include_file],...
    [src]
    -i [src] [out]
    -ix [src] [out] [sym]

Arguments

-s schema
    The name of the schema that contains the tables referenced by the embedded SQL/A application.

-c
    Parses the embedded SQL/A statements without executing them; no output files are created. Error messages and warnings are displayed.

-i
    Suppresses the file name suffix conventions. If you use this option, you must specify all file names (src, out, and sym) in the command line, with any suffixes.

-x
    Creates a symbol table. If used with the src option, a symbol table is created using the src name, replacing any .ec suffix with a .sy suffix. Otherwise, the symbol table name is derived from the src file prefix, attaching a .sy suffix. See the -i option.

-g
    Remove #line directives from the generated file. Without the #line directives, the file can be used with a runtime debugger.

-I header_file
    Include the specified header file (.h file). This argument must be used if the source file references a header file.

src
    The source file name. This is the .ec C language file containing the embedded SQL/A statements. Enter the source file name with or without the .ec suffix. EPP searches for the designated source file name, assuming the source file name ends with a .ec. The file name (including the .ec suffix) must be less than 15 characters in length.

out
    The name of the .c C language source file created by EPP. This file contains the preprocessed version of the embedded SQL/A syntax found in the .ec source file. out must be specified with the -i option.

sym
    The symbol table name. The symbol table name is specified only when using both the -i and -x options. A symbol table is generated when the -x option is used with source file (src).

Description

EPP is an operating system command that preprocesses a C language file containing embedded SQL/A statements and creates a .c C language source file. EPP opens the database, makes the schema specified by schema the current schema, and performs name binding using the tables and columns defined for the current schema.
EPP checks the SQL/A statements for errors and displays any error messages. If any error occurs, EPP displays the .ec source file name and the line number where the error occurred. For a list of EPP error messages and recommended corrections, see Unify DataServer: Error Messages.

When the -I option is specified, the preprocessor ($UPPNAME) is called before EPP produces the .c file (the expected output of EPP). EPP automatically places #include statements at the top of each produced .c file. If the source file (.ec) has #include statements, they are processed before EPP adds its #include statements to the produced .c file. Substitution of #define macros is done based on the directives in the user #include files. The #include statements which EPP adds to the .c file will be processed when the .c file is compiled by the C compiler.

File include/rhli.h, which resides in the $UNIFY/.. directory, is one of the header files that EPP adds to the top of the .c file when -I is specified. You should therefore not include this file in the source file.
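For instance, a hypothetical preprocessing run for a source file that references a user header file (the .ec and .h file names are assumptions; the schema name follows the Example section below):

EPP -s sales -I myapp.h billing.ec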

Related Configuration Variables


Before you run EPP, check the value of these configuration variables:
DBPATH    Directory search path, without the file name, of the application database file and associated files.

DBNAME    Simple file name of the database file, for example, file.db.

Security

EPP accesses the schema for name-binding purposes only. At run-time, however, SQL/A checks that the user executing the application has the appropriate access privileges for the schema (specified by schema), tables, and columns that the application accesses. If the user is denied access due to lack of privileges, an error results (and SQLCODE is set).

Example

To preprocess a file named source1.ec, use the EPP command with the file name:
# EPP source1.ec

To specify a schema name, use the -s option:


# EPP -s sales source2

To parse the source file, use the -c option:


# EPP -c source3.ec
# EPP -xi source5.ec output5.c symbol5.sym -i source6 output6 symbol6.sym
# EPP -x source7.ec source7 -i source8.ec output8.c symbol8.sym

See Also

sqla.ld utility


fmdmn
Database synchronization

Syntax

fmdmn -F dbname [-l] [-r] [-hhours_int] [-ssec_int] [-mmin_int] [-oops_int]

Arguments

dbname
    Specifies the fully-qualified database name of the database to be synchronized. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-l
    Enables logging to the daemon log.

-r
    Specifies that other values are being changed; no new daemon is started.

-hhours_int
    Sets the frequency type to hours and the frequency to the integer value specified by hours_int. The setting exists until the database is shut down.

-ssec_int
    Sets the frequency type to seconds and the frequency to the integer value specified by sec_int. The setting exists until the database is shut down.

-mmin_int
    Sets the frequency type to minutes and the frequency to the integer value specified by min_int. The setting exists until the database is shut down.

-oops_int
    Sets the frequency type to operations and the frequency to the integer value specified by ops_int. The setting exists until the database is shut down.

Description

The fmdmn utility synchronizes the database. This utility performs the same functions as the syncdb utility. See syncdb for more information. The setting you specify with the -h, -s, -m, or -o option overrides the settings for the FREQUENCY and FREQTYPE configuration variables.


Example

The following example synchronizes a database named acctsdb.


fmdmn -F acctsdb

The following example sets the FREQTYPE configuration variable to operations and the FREQUENCY configuration variable to 1000.
fmdmn -o1000

See Also

syncdb utility


htstats
Hash tree index statistic collection

Syntax

htstats

[-d dbname] [-s schema_name] [-S schema_ID] [-t table_name] [-T table_ID] [-h hash_table_name] [-H hash_table_ID] [-Obase=0]

Arguments

-d dbname

Specifies the fully-qualified database name of the database that contains the hash tables. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-s schema_name
    Specifies the name of the schema that contains the hash tables.

-S schema_ID
    Specifies the identifier of the schema (schema ID) that contains the hash tables.

-t table_name
    Specifies the name of the database table that is associated with the hash table index.

-T table_ID
    Specifies the identifier of the database table that is associated with the hash table index.

-h hash_table_name
    Specifies the name of the hash table for which to display statistics.

-H hash_table_ID
    Specifies the identifier of the hash table for which to display statistics.

-Obase=0
    Indicates that the complete chain distribution is to be displayed, starting with the number of zero-length chains. If this option is omitted, or any other value is chosen, htstats displays the chain length distribution starting with a chain length of 1.

The evaluation of the htstats options is governed by these rules:

If this option is included:    Then htstats displays statistics for:
-h or -H                       The specified hash tables
-t or -T                       All hash tables on the specified table
-s or -S                       All hash tables on all tables in the specified schema
None of the above              All hash tables in the database
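For example, the following hypothetical invocations (the schema, table, and index names are assumptions) report on all hash tables in one schema and on a single named hash table, respectively:

htstats -s PUBLIC
htstats -t parts -h part_no_idx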

Description

The htstats utility displays hash table statistics. The htstats report contains the following information.

Database
    The complete directory search path and file name of the database that contains the hash tables.

Index Name
    A name that identifies the hash table to the user.

Index ID
    A unique number that identifies the hash table to Unify DataServer.

Table Name
    The fully qualified name of the table that contains columns indexed by the hash table.

Index Size
    The size in bytes of the hash table. For indexes larger than 2GB, the size is in blocks.

Index Options
    A message that indicates whether the hash table allows duplicate values.

Entry Count
    The number of entries in the hash table (one for each row in the database table that is associated with the hash table).

Average Chain Length
    The average number of keys that hash to the same bucket in the hash table.

Longest Chain Length
    The largest number of keys that hash to the same bucket in the hash table.

Average Number of Reads
    The average number of reads required to find a record. If the reported value is equal to 1, then there are no entries in overflow buckets. If the reported value is greater than 1, then there are entries in overflow buckets. If the average number of reads is greater than 2, your hashing algorithm may not be appropriate. You may need to try a different key folding algorithm or lower the split threshold value.

Column Name
    The name of the column at the specified position in the hash table key.

Chain Distribution
    Chain Length: The number of keys (or rows) in a hash bucket.
    Number of Occurrences: The number of chains of the specified length in the hash table.
    Performance: If the number of zero-length chains is greater than the number of other chain lengths, your hashing algorithm may not be appropriate.

Key Folding Algorithm
    Number of the hash folding algorithm used, 0 through 9.

Maximum Number of Reads
    Worst-case performance for a single hash value lookup.

Primary Bucket Density
    Percentage of space used within allocated pbkts.

Overflow Bucket Density
    Percentage of space used within allocated overflow buckets.

Overflow Bucket Count
    Number of overflow buckets.

Highest Primary Bucket
    Number of the last primary bucket before the overflow buckets start. For example, if the highest primary bucket is 7, bucket 8 is an overflow bucket. (The highest primary bucket is in the range of the primary hash value to twice that value.)

Split Level
    Number of times that the base size (base number of buckets) has doubled. For example, if the base size is 7 and the split level is 1, the base size has not doubled; it is still at its initial size of 7 buckets. If the base size is 28 and the split level is 3, the base size has doubled twice from its initial size of 7 buckets (from 7 to 14 to 28).

Primary Hash Value
    Number of primary buckets in the hash table, which is the current base size of the hash table. For example, if the split level is 1 and the initial number of buckets was 7, the primary hash value is 7. If the split level is 2 and the initial number of buckets was 7, the primary hash value is 14.

Split Threshold Value
    The split threshold specified when the hash table was created.

Overflow Bucket Size
    Number of overflow buckets allocated at one time.

Example

The following example report is the result of executing htstats with this command:
htstats -h Turbohash_1

htstats responds with this report:


Hash Index Statistics Report
============================
Date: Thu Jun 16 10:32:35 2005
Data Base: /nbu/tmp/file.db

Index Name:               h2
Index ID:                 16
Table Name:               PUBLIC.t1
Index Size:               1835010 blocks
Option:                   Duplicates Not Allowed
Resource ID:              96
Entry Count:              100000000
Average Chain Length:     3.27
Longest Chain Length:     20
Average Number of Reads:  1.04
Maximum Number of Reads:  2
Key Folding Algorithm:    6
Highest Primary Bucket:   986842
Split Level:              18
Primary Hash Value:       917504
Split Threshold Value:    0
Overflow Bucket Size:     32768
Primary bucket density:   64.04
Overflow bucket density:  1.58
Overflow bucket count:    808885
Column Name:              c1 c2

Chain Distribution
Chain Length    Number of Occurrences
1               4338679
2               5838067
3               5904319
4               4871884
5               3424204
6               2108152
7               1157508
8               577699
9               264365
10              111656
>10             68865


See Also

btstats, lnkstats, tblstats, and volstats utilities Tuning the Access Methods chapter in Unify DataServer: Managing a Database


irma
Integrated recovery manager

Syntax

irma [-ddbname] [-Ojournal] [-Oforward] [-Overbose] [-Oprint_only]

Arguments

-d dbname
    Specifies the fully-qualified database name of the database to be recovered. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Ojournal
    Specifies that the journal is used to recover the committed transactions. If omitted, the transaction log is used for recovery.

-Oforward
    Specifies that committed transactions are to be re-done (rolled forward) only; uncommitted transactions are not rolled back.

-Overbose
    Specifies that more detailed information is displayed as the recovery proceeds.

-Oprint_only
    Specifies that recovery information is displayed only; no recovery operations occur. This option implies -Overbose.

Description

The irma utility performs physical and logical recovery of a database. Before running irma, shut down the database daemons and remove shared memory by running the shutdb utility.

When irma begins, it checks the status of the physical log. Depending on the status of the log, irma can either flush the log to the database or throw away the physical log contents. If the physical log indicates that a syncpoint was in progress at the time of the crash, irma attempts to recover from the physical log. If no syncpoint was in progress, irma throws away the physical log contents.

After physical recovery, irma performs logical recovery. To perform logical recovery, irma reads the transaction log backwards from the last syncpoint to undo all the effects of uncommitted transactions. irma then reads the log forward from the last syncpoint to redo the effects of committed transactions and undo the effects of any active transactions.
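After shutting down the database with the shutdb utility, a hypothetical recovery run (the database name is an assumption) would be:

irma -dacctsdb -Overbose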


When irma finishes, the database should be consistent and updated to the point of the last committed transactions. All committed transactions have been restored, and no incomplete transactions appear. When you restore the database by using redb, redb routinely calls irma to read the physical and transaction logs and perform physical and logical recovery.

All activities performed by irma are logged to the recovery audit log, named dbname.ral by default. You can specify a different default file name by using the RALFILE configuration variable. If irma encounters an error while recovering the database, it asks if the recovery should be aborted or continued. If you continue the recovery, the error is logged in the recovery audit log and the recovery continues. irma does not prompt for further errors that it encounters.

The irma utility may request that you remove active daemons. This task is normally handled by the shutdb utility. You can explicitly remove active daemons by using the ukill command. After a system crash, you may need to restart cldmn to make sure that it is executing in the background. After a system crash you may also need to remove shared memory. The steps for removing shared memory are system-specific; for example, on UNIX System V, you can use ipcs and ipcrm to remove shared memory.

Because DDL operations (such as creating a table) are not written to the transaction log, irma does not restore DDL operations. If a system failure occurs at a point where DDL operations have occurred since the last syncpoint, the DDL operations cannot be restored. Therefore, references to these database objects result in errors. You must drop or manually remove the objects.


See Also

For more information

About                                                         See
Syntax and usage for operating system utilities, such as     Your operating system manuals
ipcs and ipcrm
Database recovery                                             The chapter Recovering the Database in Unify DataServer: Managing a Database
shutdb                                                        Page 278 of this manual
shmclean                                                      Page 265 of this manual
Removing active daemons                                       Shutting Down the Database in Unify DataServer: Managing a Database
Recovery audit log                                            The chapter Recovering the Database in Unify DataServer: Managing a Database


lmshow
Show locking information

Syntax

lmshow [-Onolock] [-ddbname] [-Overbose] [-Oprocess=process_ID]

Arguments

-Onolock
    Indicates that lmshow does not lock the database when opening it.

-d dbname
    Specifies the fully-qualified database name of the database for which to show lock information. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Overbose
    Indicates that lmshow is to list all locked objects, including the data dictionary tables that are not normally visible. Names of objects that the user is not authorized to see are not displayed.

-Oprocess=process_ID
    Indicates that lmshow is to list all locked objects associated with the specified process ID.

Description

The lmshow utility displays the current transactions in shared memory and the objects locked by them. The first line of the lmshow report always identifies the current database and indicates the user's lock promotion level if it is specified. Note that lock promotion occurs on a per user basis, not per application. The remainder of the lmshow report describes locks held by various processes accessing the database. This information consists of the following columns:

pid
    UNIX process ID for the process described by the program name field. If the process ID is followed by an asterisk (*), this is the current process.

program name
    Name of the process described in the output line. If the process name cannot be determined, unknown is displayed.

txnum
    Transaction number against which locks have been placed. A process can have several concurrent transactions.

typ
    Type of lock held:
        SCH   A lock on a schema definition.
        OBJ   A lock on a database object, such as a row, table, or database.
    If no information displays, the process does not currently hold any locks.

lock
    Lock type held:
        SLCK  A shared lock on an object.
        XLCK  An exclusive lock on an object.

table name
    Name of the table on which the lock is placed. If the current user is not authorized to see the name, no name displays.

rid
    Actual row ID in the table on which the lock has been placed. This field is applicable only for OBJ lock types. If the entire table is locked, the value <table> displays.

Example

The following example shows an lmshow report. In this example, the SQL/A process has two transactions, 1 and 7. The lgdmn process shown in the report has no typ listed because the log daemon held no locks when the report was printed.
Database: /doc/home/examples/file.db (lock promotion level: 100)
pid   program name   txnum   typ   lock   table name           rid
227   lgdmn          1
639   cldmn          1
697   SQL            1
                     7       SCH   SLCK   SQL_books.company
                             OBJ   SLCK   SQL_books.company    1
                             OBJ   SLCK   SQL_books.company    2
                             OBJ   SLCK   SQL_books.company    3
                             OBJ   SLCK   SQL_books.company    4
                             OBJ   SLCK   SQL_books.company    5
                             OBJ   SLCK   SQL_books.company    6
                             OBJ   SLCK   SQL_books.company    7
                             OBJ   XLCK   SQL_books.company    8
                             OBJ   SLCK   SQL_books.company    9
709*  lmshow         1
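To narrow the report to the locks held by a single process, the -Oprocess option can be combined with -Overbose; for example, using the SQL/A process ID from the report above:

lmshow -Oprocess=697 -Overbose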


lnkstats
Link index statistic collection

Syntax

lnkstats [-d dbname] [-s schema_name] [-S schema_ID] [-t table_name] [-T table_ID] [-l link_name] [-L link_ID] [-P]

Arguments

-d dbname

Specifies the fully-qualified database name of the database that contains the links. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-s schema_name
    Specifies the name of the schema that contains the links.

-S schema_ID
    Specifies the identifier of the schema that contains the links.

-t table_name
    Specifies the name of the database table that is associated with the link.

-T table_ID
    Specifies the identifier of the database table that is associated with the link.

-l link_name
    Specifies the name of the link index for which to display statistics.

-L link_ID
    Specifies the identifier of the link index for which to display statistics.

-P
    Indicates that lnkstats is to format its output into pages.

The evaluation of the lnkstats options is governed by these rules:

If this option is included:    Then lnkstats displays statistics for:
-l or -L                       The specified link indexes
-t or -T                       All link indexes on the specified table
-s or -S                       All link indexes on all tables in the specified schema
None of the above              All link indexes in the database


Description

The lnkstats utility displays statistics about database table links. The lnkstats report contains the following information:

Database
    The complete directory search path and file name of the database that contains the links.

Link Name
    A unique name that identifies the link to Unify DataServer.

Link ID
    A unique number that identifies the link to Unify DataServer.

Parent Link Table Name
    The name of the parent table in the link.

Parent Link Column Names
    The names of the linked columns in the parent table.

Child Link Table Name
    The name of the child table in the link.

Child Link Column Names
    The names of the linked columns in the child table.

Average Number of Children
    The average number of children rows per parent row.

Number of Null Children
    The number of child table rows that do not have a parent.


Example

This is an example of a possible lnkstats report.


Link Statistics Report
=======================
Date: 10/27/87 9:43 am

Database: /prod/inventory/file.db
================================================================
Link Name:                   lnk_1
Link ID:                     1
Parent Link Table Name:      PUBLIC.manf
Parent Link Column Names:    mname mcity
Child Link Table Name:       PUBLIC.employees
Child Link Column Names:     ename ecity
Average Number of Children:  50
Number of Null Children:     20

Link Name:                   lnk_2
Link ID:                     2
Parent Link Table Name:      PUBLIC.class
Parent Link Column Names:    class_no class_nm
Child Link Table Name:       PUBLIC.students
Child Link Column Names:     sc_no sc_name
Average Number of Children:  20
Number of Null Children:     10
.
.
.

See Also

btstats, htstats, tblstats, and volstats utilities


migrate
Process migration

Syntax

migrate [-Oprocessor=processor_ID] [-Overbose] [-d dbname] [command_line . . . ]

Arguments

-Oprocessor=processor_ID
    Specifies the identifier of the target processor. Use the -Oprocessor option only to migrate non-Unify DataServer processes, such as /bin/sh. Do not use the -Oprocessor option if you have set the PROCESSOR configuration variable to a processor identifier, because Unify DataServer processes validate that they are running on the processor specified by the PROCESSOR configuration variable.

-Overbose
    Indicates that the migrate utility should display information about what processor the specified process is being migrated to. (The migrate utility normally performs its work silently.)

-d dbname
    Specifies the fully-qualified database name of the database that is used for migrate-specified configuration settings. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

command_line
    Specifies the command line used to execute a process.

Description

On systems where shared memory or other resources are distinct to each processor, the Unify DataServer migrate utility allows processes to migrate to a processor designated as the database processor. This is the processor specified by the PROCESSOR configuration variable. The migrate utility executes the process specified by the command_line option, using the processor specified by the PROCESSOR configuration variable from the database configuration file. If no command_line process name is specified, the user's shell is executed.

All migrate utility command line options must be listed before the name of the process to be executed. Command line options listed after the name of the process to be executed are assumed to be options of the process.

The migrate utility can be used only if a processor ID has been specified by the PROCESSOR configuration variable. The operating system must also support processor assignment; some tightly-coupled multi-processor architectures do not allow processor assignment. On loosely-coupled multi-processor machines, resources such as shared memory and data cache typically either are shared by all processors or are distinct to each processor. An example of an architecture where operating resources are shared is the NCR Tower 850. An example of an architecture where operating resources are distinct is the AT&T 3B4000. If shared memory or other resources are shared among all the processors, Unify DataServer can operate on any of the processors concurrently. However, if shared memory or other resources are distinct to each processor, Unify DataServer must operate on only one processor to prevent data corruption.

Related Configuration Variables


Before you run migrate, check the value of this configuration variable:
PROCESSOR

A processor designated as the database processor.

Example

This example migrates the user's shell to the processor specified by the PROCESSOR configuration variable:
migrate


This example migrates the lmshow process to the specified processor:


migrate lmshow -Overbose

In the immediately preceding example, the -Overbose command line option applies to the lmshow utility, not to the migrate utility. To verbosely migrate the lmshow example above, you would use the following command:
migrate -Overbose lmshow -Overbose


mklog
Transaction log file creation

Syntax

mklog [-d dbname] [-Ologblk=log_file_size] [-Ooverwrite]

Arguments

-d dbname

Specifies the fully-qualified database name of the database for which you are logging transactions. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Ologblk=log_file_size
    Indicates the size of the transaction log file in blocks. If you omit the -Ologblk option, the transaction log size is the number of blocks specified in the LOGBLK configuration parameter.

-Ooverwrite
    Specifies to overwrite the existing transaction log. Use this option only if the transaction log file is not empty.

Description

Creates the transaction log file. The transaction log file name is specified by the LOGFILE configuration parameter. When you create a database, Unify DataServer implicitly creates a default transaction log file for you. If you want Unify DataServer to log transactions in a different transaction log file, you must create the transaction log file before creating the database. The transaction log file must exist before Unify DataServer can log information.

Before you run the mklog utility, you must perform these tasks:

Set LOGTX to TRUE, to enable transaction logging.

If you do not want the transaction log file to be named file.lg, set LOGFILE to the name you want. You can specify a complete path name or a simple file name.

When it creates the transaction log file, mklog also makes sure the file is clean and contains no data.


If the transaction log is too small, it fills quickly. When the log becomes full, requests from user processes cause the log daemon to recycle the space. If enough space is not cleared, the log daemon may abort one or more transactions. If you encounter problems associated with a transaction log that is too small, take these steps:

1. Shut down the database.
2. Increase the size of the LOGBLK configuration parameter.
3. Run mklog to create a new, larger transaction log.
4. Restart the database operations.
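A hypothetical command for step 3, assuming a new size of 2000 blocks (the number is an assumption; add -Ooverwrite only if the existing log file is not empty, as described above):

mklog -Ologblk=2000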

Related Configuration Variables


Before you run mklog, check the value of these configuration variables:
LOGBLK     Size of the transaction log.
LOGFILE    Name of the transaction log file.

Example

The following example creates a transaction log file using the values set by the
LOGFILE and LOGBLK configuration variables.
mklog

See Also

LOGFILE, LOGBLK, and LOGTX configuration variables

shutdb utility


mkvol
Make a volume

Syntax

mkvol volume_name [-Oconfirm]

Arguments

volume_name
    Specifies the path and name of the volume to receive the information. If the volume name does not start with a slash (/), $DBPATH is prefixed to the volume name; for example, file.db becomes $DBPATH/file.db.

-Oconfirm
    Indicates that mkvol is to ask the user to confirm the name of the volume.

Description

The mkvol utility enables you to put volume information into a device volume. A device volume cannot be used until this utility has been run on the volume. The mkvol utility does not operate on a regular or contiguous file.

Security

You must have DBA authority to execute the mkvol utility.

Warning  When using mkvol, always check to ensure that the physical device is not used by another process. If it is, there is a risk that mkvol will not honor the protection scheme found in the UNIX OS file definition. This may also corrupt the raw device.

Example

This example initializes a volume that is a device. The device was made available before the mkvol command is executed.
mkvol /dev/sd0a


See Also

For more information

About                                           See
Acquiring a device                              Your operating system documentation
Specifying that a volume is to be of type       creatdb utility
device                                          Creating a Volume in Unify DataServer: Writing Interactive SQL/A Queries


pdbld
Parallel database loading

Syntax

pdbld [-s schema_name] [-S schema_ID] [-Omaxproc=max_process] [-Osepout=output_file] [-Overbose] per_table_executable

Arguments

schema_name or schema_ID
    A schema name that is specified by either name (-s) or ID (-S). If no schema is specified, the tables are processed from the user's default schema.

-Omaxproc
    Maximum number of child processes that can be active at one time. The default maximum number of processes is 1.

-Osepout
    Changes the stdout and stderr of each child program. If the file name argument to this option contains a %s, the name of the table is written by sprintf into the file name that will be used. For example, -Osepout=dbld.res will cause all the child processes to have their stdout and stderr redirected to dbld.res. If -Osepout=dbld.%s.res, then each child process will be redirected to a different file, with the %s replaced by the table name the child process is processing. If the file does not exist, it is created, and if it does exist, it is appended to. The executable name, table name, and date are printed to the output file prior to running the child program. If -Osepout is omitted, the stdout and stderr of each child program is the same as the stdout and stderr of pdbld.

-Overbose
    Prints the executable name, table name, and date to stdout of pdbld whenever a process is started.

per_table_executable
    A per_table_executable is started for each table without a parent. As each of these child processes finishes, all the children of the finished parent are examined, and if all their parents have been processed, then a per_table_executable is started for the child table. The per_table_executable is passed one argument: the name of the table it should process.
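A hypothetical invocation that combines these options (the schema name, process limit, and output file pattern are assumptions; dbld.script is the per-table executable described in the Description section below):

pdbld -s PUBLIC -Omaxproc=4 -Osepout=dbld.%s.res -Overbose dbld.script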

Description

The pdbld utility processes tables in a Unify DataServer database so that the parent tables of a link index are processed before any children of a link index are processed. Recent enhancements to dumpdd now allow the building of an SQL script to load tables in this order, but pdbld has significant advantages. pdbld allows you to run one or more processes of an executable simultaneously, hence the name parallel in the Parallel Database Load title. A significant use for this parallel feature is to run multiple database loads (dblds) to copy data from one database to another, or to load a new database. A per_table_executable could be specified as a shell script named dbld.script containing:
dbld $1 $1.data $1.spec

First, pdbld builds an internal table that describes the link index relationships; then pdbld looks for loops in the link indexes: if a loop is detected, the table and link index names that make up the loop are displayed to stderr as a warning and the tables are marked as unloadable. Any tables involved in a loop in link indexes and their children cannot be processed.

All tables with no parents are started first. If more than -Omaxproc parent tables exist, the first -Omaxproc tables are started, and the rest are put on a special list. When a child process exits, this list is examined first, and if not empty, the tables here are started before examining the children tables of the process that exited.

After starting all tables with no parents, pdbld enters its main event loop. pdbld waits for either a child process to exit or an interrupt. If a child process exits, the exit status of the child is examined. If the exit status is 0, then the table is marked as having successfully completed. All tables that cannot run because of -Omaxproc are started first. Then, children of the finished process are examined: if they can be run, they are started. If the child process dies with a non-zero exit status, then pdbld assumes that the table was not processed correctly. All child tables of the table that failed are marked as having an error and will not be processed.

When the last child process finishes, pdbld prints a list of the tables that failed, along with a list of link indexes on each table. A table may fail to load if it is involved in a loop in link indexes, or if the child process that loaded the table or one of its parents exited with a non-zero exit status.

pdbld prints a status report to stdout whenever SIGINT is detected. The date when the interrupt is received is printed, followed by a list of tables in each of the following categories:
1. Tables that may not be loaded due to loops in link indexes.
2. Tables that have been processed successfully.
3. Tables that are currently being processed.
4. Tables that cannot be loaded because the maximum number of processes would be exceeded.
5. Tables that cannot be processed until their parent table is loaded.

Related Configuration Variables


Before you run pdbld, check the value of these configuration variables:
DBPATH    Directory search path, without the file name, of the application database file and associated files.

DBNAME    Simple file name of the database file, for example, file.db.

Example

To run pdbld, enter the following:


pdbld -Omaxproc=10 dbld.script
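
To also capture each child's output separately, the documented -Osepout option can be combined with the same command; the result-file pattern used here is only an illustration:

    pdbld -Omaxproc=10 -Osepout=dbld.%s.res dbld.script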

See Also

dbld utility


prtlghd
Print transaction log information

Syntax

prtlghd

Description

The prtlghd utility displays the current state of the transaction log file (dbname.lg). prtlghd is located in the $UNIFY/../diag directory and is executed from the shell with no arguments.

Example

The following is a sample report generated by prtlghd:

% prtlghd
Log block size in bytes .. .. .. .. .. .. .. 2048
Log file size in blocks .. .. .. .. .. .. ..  500
Free log space in blocks .. .. .. .. .. .. .  498
# of committed Tx from shared memory .. .. ..   1

The sections in the report are:

Log block size       The size of a transaction log block.
Log file size        The maximum size of the transaction log.
Free log space       The current number of unallocated log blocks.
# of committed Tx    The total number of transactions committed since the database was started.


redb
Restore database

Syntax

redb [-Oredb_util=string] [-Obu_num]

Arguments

-Oredb_util
    A string that contains the name of the script that performs the restore. The string can include arguments to the script. The string is passed to the operating system's system() function for execution. The script starts executing in the $DBPATH directory.

-Obu_num
    An integer that signifies a backup version number that is appended to the -Oredb_util string before the script is executed.

Description

The redb utility reads the database backup to restore a database. To execute redb, take these steps:
1. Shut down the database by using shutdb.
2. Make sure that the BUDEV, JOURNAL, and OPMSGDEV configuration variables are set correctly.
3. If you are executing redb after a media failure, correct the hardware problem and reboot your computer.
4. Be sure that there is enough memory available to store an additional copy of the dbname.lg file; redb copies dbname.lg to the recovery log file (dbname.rc or the value of the LOGRC configuration variable).
5. Type the command to start redb.

After redb reads the database backup, if the journal is mountable, redb prompts the operator to mount a journal device to roll forward the completed transactions.

You can restore the database only on a system that has the same Unify DataServer release number as the release number on the system on which the backup was performed. For example, if the backup was performed by using release 6.0, the restore must also be performed by using release 6.0.


Be sure there is enough room to hold the restored database. The restored database may be larger than its saved size because non-allocated disk blocks are packed in the save format but then unpacked during the restore.

Executing redb on an Empty Database Directory


When you execute redb on an empty database directory, your environment must meet the following conditions:
- The database or production configuration file that is named dbname.cf must exist. This version of dbname.cf must be the same dbname.cf that existed before the crash.
- The file or device that is specified by the DEVNM portion of the BUDEV configuration variable must exist and must contain the first volume of the backup media to be restored.

Tip: Always keep a copy of the dbname.cf configuration file so that the file can be used by redb if a crash occurs.

Using a third-party restore utility


To use a third-party restore utility, you typically write a script that performs the following:
1. Executes the restore utility on a backup.
2. Checks for errors.

The script should exit with a 0 value to indicate that the restore was successful. Any other value causes the redb utility to display a message that the operation failed. The message is also logged to file.ral. The script inherits the stdin/stdout/stderr of the redb utility, and so it can display additional information to the user or respond to user input during the third-party restore processing. When using a third-party utility to perform a restore, be sure that the original file permissions and ownerships are retained throughout the process.

During a restore, the redb utility invokes the script immediately after initializing the environment by reading the configuration file (dbname.cf). If the script returns a nonzero value (indicating failure), no further processing is performed. If the script returns 0, the redb utility proceeds with its normal processing, after checking that all the files listed in file.bul exist and are the expected size (devices are not included in this sanity checking):
1. Undo any uncommitted transactions contained in the restored transaction log (file.rc), up to the point of the backup record written by the budb utility.
2. Replay any journals made since the backup.
3. Redo any completed transactions in the transaction log from the time of the crash (if available).

Before invoking the script, the redb utility displays a message of the following form to stdout and logs it in file.ral:

    Invoking user restore utility <cmd>

where <cmd> is the value of the -Oredb_util option. After the user utility returns, a message of the following form is displayed and logged:

    User restore utility returned N ([error | no error])

Related Configuration Variables


Before you run redb, check the value of the configuration variables described with budb on page 162.

Example


The following example script shows a restore using the BudTool product. Following the script is the sample redb command.


#!/bin/sh
# BudToolRestore.sh
# Sample script to restore backup files using the BudTool btr utility

prepare_restore_list()
{
    cp /dev/null restore_list
    read BU_NUM
    while read FTYP OFFS FLEN BU_FILE; do
        if [ "${FTYP}" != "I" ]; then
            # prepend the system name to the file paths
            echo "dbsys:$BU_FILE" >> restore_list
        fi
    done
}

# restore file.bul -- redb requires file.bul in order to validate the restore
# specify the media server and full path to the file (including system name)
btr -m merc_dlt2,3 dbsys:/space/db/file.bul > restore.out 2>&1

# check return value and report result
retvalBtr=$?
if [ ${retvalBtr} -eq 0 ]; then
    echo
    echo "Restore of file.bul ok!"
    echo
else
    echo
    echo "Unable to restore file.bul -- see file restore.out for details."
    exit 1
fi

# prepare for the restore
prepare_restore_list < $DBPATH/file.bul

# perform the restore -- specify the media server and the files to restore
btr -m merc_dlt2,3 `cat restore_list` >> restore.out 2>&1

# check return value and report result
retvalBtr=$?
if [ ${retvalBtr} -eq 0 ]; then
    echo "Restore ok!"
    echo
    exit 0
else
    echo "Problem restoring files -- see file restore.out for details."
    echo
    exit 1
fi


The following restore command is used:


redb -Oredb_util=BudToolRestore.sh

See Also

shutdb and budb utilities


remkview
Remake view

Syntax

remkview

Arguments

None

Description

The remkview utility remakes all views in the database identified by the DBPATH and DBNAME configuration variables (or their defaults). This utility applies only to certain platforms and database versions where it is necessary to drop and re-create all views in the database. The dbcnv utility informs you when you need to run remkview; if dbcnv does not tell you to run remkview, then it is not necessary to run it. The remkview utility is located in the diag directory of the release.

Error messages related to remkview


If you attempt to start a database after performing a database conversion (via dbcnv), but before running the required remkview, startdb will fail and you will receive the following error:

    The software version does not match the database version. (6)

Example

Following is an example of the output you will see when running remkview:
$ $UNIFY/../diag/remkview
Phase I...
Phase II...
Phase III...
Phase IV...
Views successfully recreated.
$

During Phase I, remkview checks permissions and gathers information about your database for use in future phases. Phase II involves creating the SQL statements that will be run to actually drop and re-create the views in the database. Statements to grant privileges are created in Phase III, and finally in Phase IV all the SQL statements created in the previous phases are executed.

schempt
Data dictionary information

Arguments

-d dbname

Specifies the fully-qualified database name of the database for which to display information. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

Description

The schempt (schema print) utility lists the system's and users' data dictionary information. The schempt utility also displays information for all accessible tables in all accessible schemas. The data dictionary information includes the table names, column names, column data types, and column lengths. Information for internal tables (the DBUTIL schema) and the Unify DataServer data dictionary tables (the SYS schema) is also included. You can also display user data dictionary information by using the schlst utility.

Example

The following example shows the information returned by the schempt command:
schempt

Table/Column name        Type       Len
DBUTIL.UTLATH
  ATHID                  HUGE INT     9
...
SQL_books.COMPANY
  CO_KEY                 HUGE INT     9
  CO_NAME                STRING0     30
  CO_ADDRESS_1           STRING0     30
  CO_ADDRESS_2           STRING0     30
  CO_CITY                STRING0     24
  CO_STATE               STRING0      2
  CO_ZIP_CODE            STRING0      9
  CO_PHONE               STRING0     14
  CO_SALES_REP           HUGE INT     9

See Also

schlst utility

schlst
Data dictionary information

Syntax

schlst [-d dbname] [-s schema_name] [-t table_name] [-Onoaccess] [-Odefid]

Arguments

-d dbname

Specifies the fully-qualified database name of the database for which to display information. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-s schema_name
    Limits the report to a specific schema, instead of all schemas in the database.

-t table_name
    Limits the report to a specific table, instead of all tables in the database.

-Onoaccess
    Indicates that schlst is to exclude access method information from the report.

-Odefid
    Indicates that schlst is to include object definition IDs.

Description

The schlst (schema list) utility lists the data dictionary information for the current user. If no options are specified, schlst displays information for all tables in all schemas in the users database. The first few lines of the schlst report identify the date the report was created, the page of the report, and the database name that the report applies to. The remainder of the report describes the tables and their associated access methods. This information consists of the following columns:


ID                        A unique number that identifies the row or table to Unify DataServer.
Table/Column Name         For a table, the table and associated schema. For a column, the column name.
Type                      For a column, the data type of the column.
Length                    For a column, the length of the column.
Options                   For a column, any column options that are associated with the column.
Btree Information         Columns that are B-tree indexes are listed.
Link Information          Child and parent columns of a link index are listed.
Hash Table Information    Columns that are hash table indexes are listed.
Defid                     For an object, the object definition ID. The object ID is used to identify the runtime version of the object.

Example

This example is a report generated by entering the following command:


schlst -ttable1 -Odefid

Date: Fri Jun  4 09:14:03 1999                                      Page: 1
            Schema Listing for /doc/home/examples/file.db
Id    Table/Column Name      Type       Len   Options   Defid
 89   PUBLIC.table1                                        557
418     A                    INTEGER      9                558
419     B                    STRING      20                559
420     C                    FLOAT       64                560
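
As another sketch, the report can be limited to a single schema with the access method information omitted; the schema name here simply reuses the SQL_books schema shown in the schempt example:

    schlst -s SQL_books -Onoaccess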

See Also

schempt utility

shmclean
Shared memory cleanup

Syntax

shmclean [-ddbname]

Arguments

-d dbname

Specifies the fully-qualified database name of the database for which to perform clean-up. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

Description

The shmclean utility cleans up shared memory by removing the base segments associated with the database. This task is normally performed by shutdb; however, you can use this command to clean up shared memory after a system error. Use shmclean only if there are no active daemons or applications using the database; if there are active daemons, use shutdb to terminate them. The shmclean utility can work only if no processes are attached to the database's shared memory segments. If any processes remain attached to the shared memory segments, the utility displays a list of the attached processes' PIDs and names.

Example

This example shows the command to clean up shared memory for the current database.
shmclean

shmclean responds with this message:


shmclean: Removing shared memory for database /doc/example/file.db
shmclean: shared memory segment 13320 removed

See Also

shutdb utility

shmmap
Shared memory information

Syntax

shmmap [-ddbname] [-Oid=component_ID] [-Okey=key] [-Opartition] [-Osegment] [-Olong_format] [-Ofragment] [-Omap]

Arguments

-ddbname

Specifies the fully-qualified name of the database for which to display shared memory usage information. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Oid=component_ID
    Identifies a Unify DataServer component manager. The ID may be one of the following codes:
        1    Lock Manager
        2    Transaction Manager
        3    File Manager
        4    Cache Manager
        5    Database Manager
        6    ACCELL/Manager
        7    Authorization Manager
        8    Clean up Manager
        10   Back up Manager
    Use the -Oid=component_ID argument to determine which Unify DataServer component managers require the most shared memory.

-Okey=key
    Shared memory key specified by SHMKEY.

-Opartition
    Display partition information for each component.

-Osegment
    Display information about the shared memory segments allocated for a database.

-Olong_format
    Display shared memory map information (allocation and contents of shared memory).

-Ofragment
    Display shared memory fragmentation information.

-Omap
    Display shared memory map information only.

Description

The Unify DataServer shmmap utility displays information about database shared memory use. Much of the information is machine-specific. The values displayed by shmmap apply only to the machine on which the shmmap output is obtained. The shmmap utility report displays the following information, depending on the command line options:

Argument      Information Displayed                  Description
Any           Header                                 Information about each shared memory segment. This
                                                     information is displayed regardless of the command
                                                     line options specified.
-Osegment     Shared memory segment information      Information about the shared memory segments
                                                     allocated for a database.
-Opartition   Shared memory partition information    Information about a database's active partitions.
-Ofragment    Shared memory segment fragmentation    Information about the fragmentation status for each
              status information                     shared memory segment allocated for a database.
None          Shared memory map information          Diagnostic information about the contents of each
                                                     shared memory segment.

The shmmap report information is shown in the following subsections. Examples are shown on pages 273 through 277.

Required Configuration Variables


If you specify -Oid=component_ID or -Okey=key, the DBPATH configuration variable must be set.

Header
The shmmap header contains the following information:

Segment    Shared memory segment operating system identifier, known as the shmid in UNIX.

Key        Shared memory key value defined in the configuration file. The hexadecimal values display in parentheses.

Address    Logical memory hexadecimal address at which the shared memory segment will be attached to each Unify DataServer executable at runtime. This value is different for each shared memory segment and is determined by Unify DataServer. The value can be adjusted using the SHMOFFSET and SHMMARGIN configuration variables.

Locking    Type of internal shared memory critical section locking performed by Unify DataServer. Depending on the operating system and the hardware, either memory or disk locking is used. The lock type is determined when the software is ported to the machine and is not configurable.

Mode       UNIX access modes with which the shared memory segment was created. All shared memory segments allocated to a database have the same access modes. This value can be configured by setting the SHMMODE configuration variable. Once defined, the access modes cannot be changed without removing the shared memory segment. For further information on UNIX access modes, see your operating system chmod manual description.

Shared Memory Segment Information


The segment section contains the following information:

id         Name of a partition that has requested allocation of a shared memory segment. Shared memory manager partitions (ShmemMgr) are listed for shared memory segments that correspond to SHMKEY configuration variables. Other partitions (CacheMgr) listed are for shared memory segments designated as PRIVATE.

shmid      Shared memory segment operating system identifier, for example, 4 or 20.

key        Decimal value of the shared memory key defined in the configuration file, for example, 6904 or 6905.

hexkey     Hexadecimal value of the shared memory key defined in the configuration file, for example, 0x1af8 and 0x1af9.

address    Logical memory hexadecimal address at which the shared memory segment is attached to each Unify DataServer executable at runtime. The value is different for each shared memory segment. This value is determined by Unify DataServer and can be adjusted by using the SHMOFFSET, SHMMARGIN, SHMADDR, and SHMKIND configuration variables.

size       Size of the shared memory segment, in decimal bytes. This value can be configured by using the SHMMAX and SHMMIN configuration variables. Once defined, the segment sizes cannot be changed without removing the shared memory segment. In the current release of Unify DataServer, all base and secondary shared memory segments are always the same size. The private shared memory segments (the segments not owned by the shared memory manager) are configured by setting partition-specific configuration variables. Private segment size is determined by the partition software and may vary from the size of the base or secondary shared memory segments.

locking    Type of internal shared memory critical section locking performed by Unify DataServer. Depending on the operating system and the hardware, either memory or disk locking is used. The lock type is determined when the software is ported to the machine and is not configurable. The locking value should not be confused with locking a UNIX process in physical memory or with locking a shared memory segment in physical memory.

mode       UNIX access modes with which the shared memory segment was created. All shared memory segments allocated to a database have the same access modes. This value can be configured by using the SHMMODE configuration variable. Once defined, the access modes cannot be changed without removing the shared memory segment. For more information about UNIX access modes, see your operating system manual description of the chmod command.

users      Number of user application executables that currently have the shared memory segment attached to their process. This value is for information only and cannot be configured. On machines that do not support the ability to determine the number of attached users, the value is always zero (0). On machines that support the ability to determine the number of attached users, the value will always be non-zero.

Shared Memory Partition Information


The partition section contains the following information:

id         Name of a partition that resides in the specified shared memory segment. The order in which partitions are listed does not follow a set sequence.

version    Partition version number. The partition version number must match the software version number. If the software partition version number is not the same as the shared memory partition version number, the executable cannot be used on this database. To determine the software version number of a Unify DataServer executable, run the executable by using the -version command line option by itself with no other options.

address    Logical hexadecimal address where the partition resides in the shared memory segment. The shared memory manager partition always resides at the shared memory segment attach address. This value is not configurable and varies, depending on which Unify DataServer executable was first executed to create the shared memory segment.

size       Decimal number of bytes of shared memory allocated by the partition. The shared memory manager partition value always indicates the shared memory allocated for all partitions in the segment. That is, the shared memory manager's value indicates the total amount of shared memory allocated in the segment. The other partition values indicate the amount actually allocated by each partition. The percentage listed in parentheses is the total percentage of shared memory allocated by each partition. Again, the shared memory manager's value indicates the total amount of shared memory allocated in the partition. If the percentage for any given partition exceeds 50%, warning messages display in the shmmap output. The partitions that exceed 50% should be moved to their own shared memory segment, or the size of the shared memory segment in which the partitions reside should be increased. The amount of shared memory reserved, as specified by the SHMRSRV configuration variable, is allocated by the shared memory manager partition.

locked     UNIX process ID of the Unify DataServer executable that currently has the partition locked. This value is for diagnostic use only and cannot be configured. If the partition is not locked by any process, the value 0 displays. Because of timing constraints, a process ID seldom actually displays in this field.

depth      Current lock nesting level of the partition lock. If the locked field indicates the process ID of a Unify DataServer executable, a non-zero value displays in this field. This value is for diagnostic use only and cannot be configured. If the locked field contains the value 0, this field also contains the value 0.

sgmts      Number of private shared memory segments currently allocated by the partition. This value is controlled by Unify DataServer and cannot be configured.

Shared Memory Segment Fragmentation Status Information


The Segment Fragmentation Status section contains the following information:

Size       Unit of shared memory allocation, that is, the chunk size.

Used       Decimal number of chunks of shared memory allocated for the corresponding chunk size.

UsedSize   Total shared memory storage allocated for this chunk size. The value displayed does not exactly correspond to the chunk size, because this field represents the storage requested, not the chunk size used. For example, if the chunk size specified by Size is 32768, and the number of chunks specified by Used is 1, but the allocated size specified by UsedSize is 17480 bytes, a 32,768-byte chunk was used to allocate 17,480 bytes of shared memory.

%          Total percentage of shared memory allocated for the corresponding chunk size.

Frag       Decimal number of chunks of unallocated shared memory for the corresponding chunk size.

FragSize   Total shared memory storage unallocated for this chunk size. Note that the value displayed does not exactly correspond to the chunk size, because this field represents the actual storage unallocated, not the chunk size used. For example, if the number of chunks specified by Frag is 1, but the unallocated size specified by FragSize is 552 bytes, a 1,024-byte chunk was used to allocate 552 bytes of shared memory that has since been deallocated.

%          Total percentage of memory unallocated for the corresponding chunk size. For example, if the number of bytes specified by FragSize is 552, the 552 bytes represents 1% of the total shared memory segment size.

Shared Memory Map Information


When you omit all command line options, the shmmap utility displays diagnostic information about the contents of each shared memory segment. This display is the shared memory map; each line represents one unit of storage allocation.

address    Logical hexadecimal address described by each line of the output.

size       Number of decimal bytes of storage allocation. The value following the + indicates the number of decimal bytes of overhead associated with each unit of storage. The sum of the two numbers indicates the total number of decimal bytes allocated.

ptr        Logical hexadecimal address of the next storage allocation unit. This value should be identical to the address value on the next line of information.

-*-            If this column contains an asterisk (*), the storage allocation is actively assigned to a partition. If the column does not contain an asterisk (*), this unit of storage allocation is unused and may be allocated for future storage requests. Adjacent storage allocation units can be unused; shared memory will be merged when the actual storage request is made.

hex contents   The first fifteen hexadecimal bytes of the storage allocation unit, if the storage allocation unit is active (assigned to a partition). The format used is similar to that of the od -x utility. If the command line option -l is specified, the entire contents of the storage allocation unit are displayed in hexadecimal. Note that the -l format displays every byte in the shared memory segment; be prepared for a lengthy output. On certain output lines, messages such as CacheMgr partition display. Such messages indicate that this particular storage allocation unit is the actual partition header storage.

Example

The first example requests that shmmap display segment information (-s), partition information (-p), and fragmentation information (-f).
shmmap -s -p -f

The shmmap utility displays the following information:


Shared Memory Segment Information
id         shmid   key      hexkey      address     size    ksize  users
ShmemMgr:  31746   16550    0x000040a6  0xe0000000  253952    248      5
CacheMgr:  91139   PRIVATE              0xe003e000  123248    120      4
id         shmid   key      hexkey      address     size    ksize  users

Shared Memory Partition Information
id         version  address     size          locked  depth  sgmts
ShmemMgr:       15  0xe0000000  38768 (15%)    12961      1      1
CleanMgr:        3  0xe000366c    248 ( 0%)        0      0      0
CacheMgr:        9  0xe0001c80    504 ( 0%)        0      0      1
Lock Mgr:       15  0xe0001a58   5132 ( 2%)        0      0      0
TransMgr:       13  0xe000193c   7292 ( 2%)        0      0      0
File Mgr:       22  0xe000018c  24968 ( 9%)        0      0      0
DB Mgr:          1  0xe0000070    480 ( 0%)        0      0      0
id         version  address     size          locked  depth  sgmts

Shared Memory Fragmentation Information
Size     Used  UsedSize   %   Frag  FragSize   %
4           2         8   0      0         0   0
8           0         0   0      0         0   0
16         21       300   0      6        72   0
32         36      1008   0      3        80   0
64         75      3536   1      2       100   0
128        13      1208   0      2       160   0
256        22      3948   1      0         0   0
512         0         0   0      0         0   0
1024        1       524   0      0         0   0
2048        0         0   0      0         0   0
4096        3      6156   2      1      2052   0
8192        4     21984   8      0         0   0
16384       0         0   0      0         0   0
32768       0         0   0      0         0   0
65536       0         0   0      0         0   0
131072      0         0   0      0         0   0
262144      0         0   0      1    212724  83
total:    177     38672  12     15    215188  83

Header Information
The following example shows the types of information contained in the header:
Shared Memory Segment 4
key: 6904 (0x1af8)  address: 0x00250000  Locking: memory  Mode: 0666

Shared Memory Segment 20
key: 6905 (0x1af9)  address: 0x002f0000  Locking: memory  Mode: 0666

This example has two header blocks, one for each shared memory segment defined in the configuration file. The first block listed is always the base shared memory segment; secondary shared memory segment information follows the base segment information.

Segment Information
In the following example, the -Osegment command line argument is used to display additional information about the shared memory segments allocated for a database:
shmmap -Osegment

shmmap responds with this report:


Shared Memory Segment Information
id         shmid  key      hexkey    address     size    locking  mode    users
ShmemMgr:     20  6905     0x001af9  0x002f0000  262144  mem      000666      0
ShmemMgr:      4  6904     0x001af8  0x00250000  262144  mem      000666      0
CacheMgr:      5  PRIVATE            0x002a0000  270624  mem      000666      0
id         shmid  key      hexkey    address     size    locking  mode    users


Partition Information
This example uses the shmmap -Opartition command line option to display information about a database's active partitions:
shmmap -Opartition

shmmap responds with this report:


Shared Memory Segment 20
key: 6905 (0x1af9)  address: 0x002f0000  Locking: memory  Mode: 0666

Shared Memory Partition Information
id         version  address     size          locked  depth  sgmts
ShmemMgr:       18  0x002f0000  26500 (10%)        0      0      1
Lock Mgr:       15  0x002f006c   8908 ( 3%)        0      0      0
id         version  address     size          locked  depth  sgmts

Shared Memory Segment 4
key: 6904 (0x1af8)  address: 0x00250000  Locking: memory  Mode: 0666

Shared Memory Partition Information
id         version  address     size          locked  depth  sgmts
ShmemMgr:       18  0x00250000  81164 (30%)        0      0      1
CleanMgr:        3  0x00253654    268 ( 0%)        0      0      0
TransMgr:       16  0x00251ad8  19456 ( 7%)        0      0      0
DB Mgr:          1  0x00251918    668 ( 0%)        0      0      0
CacheMgr:        9  0x002517f4    504 ( 0%)        0      0      1
File Mgr:       23  0x0025006c  42648 (16%)        0      0      0
id         version  address     size          locked  depth  sgmts


Segment Fragmentation Status Information


This example uses the -Ofragment command line option to display information about the fragmentation status for each shared memory segment allocated for a database:
shmmap -Ofragment

shmmap responds with this report:


Shared Memory Segment 20
key: 6905 (0x1af9)  address: 0x002f0000  Locking: memory  Mode: 0666

Shared Memory Fragmentation Information
Size     Used  UsedSize   %   Frag  FragSize   %
4           2         8   0      0         0   0
8           0         0   0      0         0   0
16          1        16   0      0         0   0
32         47      1316   0      0         0   0
64         80      3392   1      0         0   0
128         1       100   0      0         0   0
256        16      3584   1      0         0   0
512         0         0   0      0         0   0
1024        1       516   0      0         0   0
2048        0         0   0      0         0   0
4096        0         0   0      0         0   0
8192        0         0   0      0         0   0
16384       0         0   0      0         0   0
32768       1     17480   6      0         0   0
65536       0         0   0      0         0   0
131072      0         0   0      0         0   0
262144      0         0   0      1    235648  89
524288      0         0   0      0         0   0
total:    149     26412   8      1    235648  89

Shared Memory Segment 4
key: 6904 (0x1af8)  address: 0x00250000  Locking: memory  Mode: 0666

Shared Memory Fragmentation Information
Size     Used  UsedSize   %   Frag  FragSize   %
4           2         8   0      0         0   0
8           0         0   0      0         0   0
16         27       388   0     15       180   0
32         15       372   0      4        96   0
64         38      1728   0      7       348   0
128        36      3036   1     16      1212   0
256        20      2896   1      0         0   0
512         1       512   0      1       476   0
1024        0         0   0      1       552   0
2048        0         0   0      3      5232   1
4096        8     16416   6      2      4104   1
8192        8     43936  16      0         0   0
16384       0         0   0      0         0   0
32768       1     17480   6      0         0   0
65536       0         0   0      0         0   0
131072      0         0   0      0         0   0
262144      0         0   0      1    163088  62
524288      0         0   0      0         0   0
total:    156     86772  30     50    175288  64


Shared Memory Map Information


In the following example, all command line options to shmmap have been omitted.
shmmap

shmmap responds by displaying the shared memory map information:


Shared Memory Map Information
address     size     ptr        *  hex contents
0x00250054  0+4      0x00250058 *
0x00250058  12+4     0x00250068 *  / u s r / t m p 00 00 00 00
0x00250068  164+4    0x00250110 *  (File Mgr partition)
0x00250110  12+4     0x00250120 *  00 00 00 00 00 00 00 00 00 00 00 00
0x00250120  12+4     0x00250130 *  14 01 % 00 00 00 00 00 00 00 00 00
0x00250130  12+4     0x00250140 *  $ 01 % 00 00 00 00 00 00 00 00 00
0x00250140  12+4     0x00250150 *  d 01 % 00 05 00 00 00 05 00 00 00
0x00250150  12+4     0x00250160 *  t 01 % 00 01 00 00 00 08 00 00 00
0x00250160  12+4     0x00250170 *  84 01 % 00 03 00 00 00 01 00 00 00
0x00250170  12+4     0x00250180 *  00 00 00 00 0c 00 00 00 05 00 00 00
0x00250180  12+4     0x00250190 *  4 01 % 00 08 00 00 00 02 00 00 00
0x00250190  12+4     0x002501a0 *  00 00 00 00 0b 00 00 00 01 00 00 00
0x002501a0  12+4     0x002501b0 *  94 01 % 00 0a 00 00 00 01 00 00 00
0x002501b0  8+4      0x002501bc *  14 00 00 00 0f 00 00 00
0x002501bc  140+4    0x0025024c *  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0025024c  5488+4   0x002517c0 *  < % 00 1c E % 00 b0 5 % 00 00 00 00
0x002517c0  44+4     0x002517f0 *  00 00 00 00 b0 5 % 00 00 00 00 00 00 00 00
0x002517f0  36+4     0x00251818 *  (CacheMgr partition)
0x00251818  104+4    0x00251884 *  c1 00 00 00 10 00 * 00 00 00 00 00 00 00 00
0x00251884  20+4     0x0025189c *  / s 2 / t s / d b / f i l e .
0x0025189c  28+4     0x002518bc *  85 + 02 00 00 01 00 00 V 07 00 00 | 00 00
0x002518bc  24+4     0x002518d8 *  00 00 00 00 00 00 * 00 f4 17 % 00 00 00 00
0x002518d8  32+4     0x002518fc
address     size     ptr        *  hex contents

See Also

Managing Shared Memory in Unify DataServer: Managing a Database


SHMOFFSET, SHMMARGIN, SHMADDR, SHMKIND, SHMMAX, and SHMMIN

configuration variable descriptions.


shutdb
Database shutdown

Syntax

shutdb [-ddbname] [-Onowait] [-Owait=number_of_seconds] [-Ousr_wait=number_of_seconds] [-Odmn_wait=number_of_seconds ] [-Oconfirm] [-Ointeractive[=yes]] [-Oreset] [-Oforce] [-Onosync] [-Oemergency] [-Onoshmclean]

Arguments

-ddbname
    Specifies the fully-qualified database name of the database to shut down. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Onowait
    Indicates that shutdb is to return to the shell without waiting until all the daemons have been shut down. If omitted, shutdb instructs the daemons to shut down and waits for them to terminate. If the shutdb utility cannot terminate a daemon after the time specified by -Owait, it gives up trying and goes on to the next daemon.

-Owait=number_of_seconds
    Indicates that shutdb is to wait the specified number of seconds for a daemon to terminate. Any positive integer can be specified. This option also specifies the time to wait for user processes to terminate before moving to the next step of the user process termination procedure.

-Ousr_wait=number_of_seconds
    Indicates that shutdb is to wait the specified number of seconds for any user processes to terminate. Any positive integer can be specified. If omitted, the value specified for -Owait is used; if that option is not specified, then 60 seconds is used. If this option is specified, the -Owait option is ignored.

-Odmn_wait=number_of_seconds
    Indicates that shutdb is to wait the specified number of seconds for a daemon to terminate. Any positive integer can be specified. The default is 20 seconds.

    If this option is specified, the -Owait option is ignored.

-Oconfirm
    Indicates that shutdb is to request that the user confirm that the database being shut down is the correct one.

-Ointeractive
    Indicates that shutdb is to attempt to interactively correct problems that it finds by asking the user for yes or no responses. If you use the optional =yes form of the option, shutdb assumes that the answer to all questions is yes and performs the work. (shutdb still asks the question.)

-Oreset
    Resets the database state to down if a previous shutdb failed or was killed while shutting down a database, leaving the database in an inconsistent state.

-Oforce
    Indicates that shutdb is to try to terminate all active user processes in an orderly fashion.

-Onosync
    Indicates that shutdb does not perform a database synchronization as part of the shutdown procedure. This option also implies -Onoshmclean unless recovery is required. If database recovery is needed, this option is ignored.

-Oemergency
    Indicates that shutdb can use the SIGKILL signal to terminate an active user process or daemon if necessary. This option also implies -Oforce.

-Onoshmclean
    Indicates that shutdb will not remove shared memory when the shutdown process is complete. If database recovery is needed, this option is ignored.

Warning: -Onosync and -Onoshmclean are for diagnostic purposes only and should be used only if directed by Unify Customer Support.

Description

The shutdb utility informs the database daemons that it is time to shut down. After the daemons are shut down, shutdb synchronizes the database and removes shared memory by calling the shmclean utility (unless -Onoshmclean is specified). All user processes that are accessing the database must be terminated before the database can be shut down. To determine which processes are accessing a database, use the cldmn -Ostatus command.


You can specify that shutdb force the termination of user processes accessing the database by specifying the -Oforce or -Oemergency options. To be able to signal all user processes, execute shutdb as the root user. These options cause shutdb to send a signal to all user processes that are accessing the database. The signal is determined by the SHUTDBSIG configuration variable. The user processes should handle the signal, complete their transaction, and exit. Normally, the daemon shutdown procedure may take several minutes to complete; do not kill the daemons if they do not immediately exit. The shutdb utility terminates the log daemon, which shuts down the transaction journal. The journal must be saved, for possible use by redb, if the database is not immediately backed up. If the journal is a non-tape device, failure to save it can result in data loss if a subsequent crash occurs. Occasionally, the database shutdown does not complete and the database state remains "database is being shutdown"; use the -Oreset option to restart the shutdown process. If a daemon cannot be terminated, shutdb fails. The exit status is the number of daemons that could not be killed; this number can be used by scripts, for example, to report informative messages, as in the sketch below.
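
The following minimal sketch (the script name is illustrative; the database path repeats the example below) shows one way a shell script might act on that exit status:

    #!/bin/sh
    # stop_db.sh -- shut down a database and report any daemons left running
    shutdb -d/v1/dbrus/file.db -Owait=60
    status=$?

    if [ $status -ne 0 ]; then
        # shutdb's exit status is the number of daemons it could not terminate
        echo "shutdb: $status daemon(s) could not be terminated" >&2
        exit $status
    fi
    echo "Database shut down cleanly."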

Example

The following example uses shutdb to shut down a database named dbrus that is operating normally.
shutdb -d/v1/dbrus/file.db

In the second example, the -Owait option is used to specify that shutdb is to wait 15 seconds for the daemons to terminate when shutting down the dbrus database.
shutdb -d/v1/dbrus/file.db -Owait=15

In this example, shutdb will force the shutdown of all user processes accessing the database:
cldmn -Ostatus
su root
xxxxx
shutdb -Oemergency

See Also

shmclean utility
The chapter Starting and Stopping the Database in Unify DataServer: Managing a Database


SQL
SQL processor

Syntax

SQL [-bcqlrx] [-d dbname] [-m ops] [-s schema] [-convert] [script_file]

Options

-b           If a script file is specified, converts the results of executing the script file into binary format. If omitted, the results are in ASCII format.

-c           If a script file is specified, parses the statements in the script file and displays any error messages. The statements in the script file are not executed.

-q           Suppresses informational messages (such as Recognized query!). Error and warning messages are not suppressed.

-l           Places a shared object definition lock on all objects in the database.

-r           If a script file is specified, removes the script file after execution of the script completes.

-x           Specifies that the ! command is disabled. The ! command allows you to enter operating system commands during an interactive SQL/A session.

-d dbname    Specifies the fully-qualified database name of the database to be started. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-m ops       Specifies that transactions are implicitly committed after ops number of operations; ops is an integer.

-s schema    Specifies the schema to become the current schema; this option overrides the schema set by the ALTER DEFAULT SCHEMA statement.

-convert     Specifies conversion-specific activity when converting from ACCELL/IDS. See UNIFY 2000: Converting Release 1 Applications to Release 2.

script_file  The script file to be executed or parsed. The script file is a group of one or more interactive SQL/A statements.

Description

The SQL command begins an interactive SQL/A session or executes a script file. If an interactive session begins, Unify DataServer opens the default database. The default schema is PUBLIC. If the default database does not exist, no database is opened. You can create a database during an interactive SQL/A session.
SQL is executed from the operating system shell.

To end an interactive SQL/A session, use the END command.

Related Configuration Variables


Make sure that the DBPATH, UNIFY and PATH configuration variables are set correctly before executing the SQL command.

Example

To start an interactive SQL/A session using default values for all options, enter the SQL command:
# SQL
Unify DataServer SQL/A 2.1.0.0.0
Unify Corporation. Copyright 1991.
Opening default database
sql>

If the default database is not created, the message Database unavailable is returned. You can create a database after you start an SQL/A session.
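
You can also run SQL non-interactively. As a simple sketch (the script file name is illustrative), the following command executes a file of SQL/A statements with informational messages suppressed:

    SQL -q load_company.sql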

See Also

END command
DBPATH, UNIFY, and PATH configuration variables
The chapter Entering and Executing Statements in Unify DataServer: Writing Interactive SQL/A Queries
UNIFY 2000: Converting Release 1 Applications to Release 2

sqla.ld
Embedded SQL/A linker

Syntax

sqla.ld [-d] application_name .o ... [.a ...] [-Olocal_only] [-Oremote_only] [-Olocal_remote] [load_options]

Arguments

-d

Specifies to load the SQL/A Debugger libraries. Use this option if you want to debug your application using the SQL/A Debugger.

application_name
    The name of the SQL/A application executable file being created. This is the name you will use to execute the application.

.o
    The object file that resulted from compiling .c files with ucc.

.a
    Any archive or library files to load. If the archive files are not in the current directory, include the full or relative path names.

-Olocal_only, -Oremote_only, -Olocal_remote
    Used with remote access databases; see the description below.

load_options
    Options passed to the UNIX system C Loader, ld. If you are passing additional options to sqla.ld to be passed on to ld, see your host operating system C loader manual for more information about placement of ld options. Also see the manual for your UNIX or operating system C loader for more information about the use of reserved symbols and the loading of libraries.

Description

The sqla.ld command links and loads specified .o object and .a archive (library) files, creating an executable file (appl). The .o files are created by the ucc compiler. To create an archive file, refer to the ar command in your operating system manual.

Required Configuration Variables


The sqla.ld command uses the values in the DBPATH, UNIFY, and PATH configuration variables. Be sure that these variables are set correctly before running sqla.ld.

Remote Access
Using sqla.ld, you can link Embedded SQL/A applications to access:
- local databases only
- remote databases only
- local or remote databases

Local-Only Access
Specifying the option -Olocal_only tells sqla.ld to link the application to access local databases only. This means the DBHOST configuration variable must be undefined or set to the value "." (a period).

For example, to link an Embedded SQL/A program named payroll.o as payroll and make the program a local-only application, you would use the following command:
sqla.ld -Olocal_only payroll payroll.o

Remote-Only Access

Specifying the option -Oremote_only tells sqla.ld to link the application to access remote databases only. This means DBHOST must be set to a value of the form host_name. Refer to the section titled Accessing a Remote Database for details on how such an application would be run. For example, to link the payroll.o program as payroll and make the program a remote-only application, you would use this command:
sqla.ld -Oremote_only payroll payroll.o

Specifying Local or Remote Access

Specifying the option -Olocal_remote tells sqla.ld to link the application to access local and remote databases. This means DBHOST must be set to either "." or a value of the form host_name. For example, to link the payroll.o program as payroll and make the program a local or remote application, use the following command line:
sqla.ld -Olocal_remote payroll payroll.o

Specifying No Options

If you do not include -Olocal_only, -Oremote_only, or -Olocal_remote on the command line, the ULDACCESS configuration variable is used to determine how the application should be linked:
- If ULDACCESS is set to local_only, the application is linked to access only local databases.
- If ULDACCESS is set to remote_only, the application is linked to access only remote databases.
- If ULDACCESS is set to local_remote, then the application is linked to access local and remote databases.


If the access mode is not specified on the command line and the ULDACCESS configuration variable is not set, sqla.ld by default links the application as local-only. The application will be able to access local databases only. You can also use the -Olocal_only, -Oremote_only, and -Olocal_remote options with the uld utility.

Security

No privileges required.

Example

To link and load an object file named payroll.o, use the sqla.ld utility; the executable file is named payroll:
# sqla.ld payroll payroll.o

To specify multiple object files, separate the file names with spaces:
# sqla.ld WIP orders.o ship.o accrec.o accpay.o wip.a

To include the debug libraries in the loaded version of the application, use the -d option:
# sqla.ld -d orders.run orders.o

See Also

EPP command, ucc command
Unify/Net Guide


startdb
Database startup

Syntax

startdb [-ddbname] [-Ojournal] [-Oconfirm] [-Ointeractive[=yes]] [-Ostate=single | max_#_of_users | multi] [-Oquiet]

Arguments

-d dbname
    Specifies the fully-qualified database name of the database to be started. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Ojournal
    Indicates that startdb is to issue a warning if the journal file is not zero length (for regular files only).

-Oconfirm
    Indicates that startdb is to request that the user confirm that the database being started up is the correct one.

-Ointeractive
    Indicates that startdb is to attempt to interactively correct problems that it finds by asking the user for yes or no responses. If you use the optional =yes form of the option, startdb assumes that the answer to all questions is yes and performs the work. (startdb still asks the question.)

-Ostate=single
    Sets the database to a single-user database, where only the database creator can access the database, although more than one process is allowed. -Ostate=single is different from -Ostate=1.

-Ostate=max_#_of_users
    Limits the database to the specified number of users.

-Ostate=multi
    Sets the database to a multi-user database, with no limit to the number of users.

-Oquiet
    Indicates that startdb is to work silently and only set exit status codes. This option is useful when executing startdb from shell scripts, as in the sketch following this list. The exit codes that can be set by startdb in quiet mode include the following codes:
        99   The database does not exist.
         0   The database startup was successful.
         1   There was a startup error.
        98   The database was already started up.
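
A minimal wrapper script (the script name is illustrative; the database path repeats the example below) that acts on these exit codes might look like this:

    #!/bin/sh
    # start_db.sh -- start the database quietly and act on startdb's exit code
    startdb -Oquiet -d/doc/home/examples/file.db
    case $? in
        0)  echo "Database started." ;;
        98) echo "Database was already started." ;;
        99) echo "Database does not exist." >&2 ; exit 99 ;;
        *)  echo "Database startup error." >&2 ; exit 1 ;;
    esac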

Description

The startdb utility starts an existing database by prompting you through the startup procedure. In its default mode, the startdb utility detects all of the problems that would stop the database from being opened and reports them as errors. Examples of startup problems include cases where the errlog file is missing or the journal is not zero length. With the -Ointeractive option, the startdb utility detects the problems that would stop a database from being opened and corrects them interactively.

For the journal to operate correctly, your database journal must satisfy these conditions:
- The journal file (dbname.jn) must exist.
- The log daemon must be able to write to the journal file.
- The journal file must also be zero-length if it is a regular UNIX disk file.

The following files must exist and have access permissions that allow you to access them from database processes (such as the log daemon):
    dbname.cf   dbname.dbs   dbname.dbv   dbname.dis   dbname.jn
    dbname.lg   dbname.msg   dbname.pl    dbname.sch   dbname.err (linked to errlog)

Also, you must be able to access the directory indicated by DBPATH. The startdb utility starts the database daemons as background processes; it does not open the database.

Security

The user who executes the startdb utility owns the daemon processes.

Example

The following example uses the startdb utility for normal database startup of a database named /doc/home/examples/file.db. The utility asks if the current settings for SHMKEY and MAXCACHE are acceptable:
startdb -Ointeractive -d/doc/home/examples/file.db
startdb: Starting up database /doc/home/examples/file.db.
Is the default shared memory key 460 acceptable to use? yes
Is the default MAXCACHE value 58 acceptable to use? yes
startdb: Warning: Transaction archiving has been disabled;
         Configuration variable LOGARCHIVE value: FALSE
startdb: The daemons will be assigned to user lcc.
startdb: Database /doc/home/examples/file.db start up complete.

Prompts

In the following example, startdb is also executed with the -Ointeractive option. Because the required journal file and errlog file are missing, startdb displays the following progress messages and interactive prompts, to which the user responds yes.
startdb: Starting up database /doc/home/examples/file.db.
Is the default shared memory key 460 acceptable to use? yes
Is the default MAXCACHE value 58 acceptable to use? yes
startdb: Database file /doc/home/examples/file.jn does not exist.       (Journal file missing)
Create the database file /doc/home/examples/file.jn now? yes
startdb: new database file /doc/home/examples/file.jn successfully created.
startdb: Database file /doc/home/examples/errlog does not exist.        (Errlog file missing)
Create the database file /doc/home/examples/errlog now? yes
startdb: new database file /doc/home/examples/errlog successfully created
...

See Also

SHMKEY configuration variable


syncdb
Database synchronization

Syntax

syncdb [dbname]

Arguments

dbname

Specifies the fully-qualified database name of the database to be synchronized. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

Description

The syncdb utility synchronizes the database. When the database is synchronized, committed transactions are written to the database files. The database is implicitly synchronized at syncpoints. You can use this utility to force the synchronization. Because DDL operations are not logged in the transaction log, you cannot roll back a transaction that contains DDL statements should a system error occur. You can use this utility to ensure that DDL operations are saved. No transaction log is maintained for DDL operations, so they cannot be recovered. Any DDL operations performed after the last database syncpoint will not be in the database in the event of a crash. Therefore, it is recommended that you force a database synchronization after each DDL operation or group of DDL operations. Also back up the database after DDL. You can also synchronize a database by using the fmdmn utility.

Example

The following example forces a database synchronization for the current database:
syncdb
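
For instance, after executing a script of DDL statements (the script file name is illustrative), a synchronization could be forced immediately so the new definitions survive a subsequent crash:

    SQL -q create_tables.sql
    syncdb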

See Also

fmdmn utility
Tuning Syncpoints in Unify DataServer: Managing a Database


tblstats
Table statistic collection

Syntax

tblstats [-d dbname] [-s schema_name] [-S schema_ID] [-t table_name] [-T table_ID]

Arguments

-d dbname

    Specifies the fully-qualified database name of the database that contains the tables. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-s schema_name
    Specifies the name of the schema that contains the tables.

-S schema_ID
    Specifies the identifier of the schema that contains the tables.

-t table_name
    Specifies the name of the database table for which to display statistics.

-T table_ID
    Specifies the identifier of the database table for which to display statistics.

The evaluation of the tblstats options is governed by the following rules:

If this option is included:    Then tblstats displays statistics for:
-t or -T                       The specified table
-s or -S                       All tables in the specified schema
None of the above              All tables in the database

Description

The tblstats utility displays table statistics. If you specify no options when you call tblstats, tblstats displays statistics about all the tables in the database. These statistics include information for the SYS and DBUTIL schemas. Every database contains a SYS and DBUTIL schema. The SYS schema contains the Unify DataServer data dictionary tables. The DBUTIL schema contains tables used for internal processing and can be ignored.


The tblstats report contains the following information:

Database                   The complete directory search path and file name of the database that contains the tables.
Table Name                 The name of a database table.
Row Count                  The number of data rows in the table.
Expected Number of Rows    The number of expected data rows that were specified when the table was added to the database.
Segment Size               The size in bytes of a segment of the table.
Record Length              The size in bytes of the row.
Number of Rows/Segment     The number of data rows contained in each segment of the table.
Logical Row Use Density    The percentage of the actual number of rows to the maximum number of rows computed for all segments in the table.
Physical Row Use Density   The percentage of the size of all rows to the size of all the segments in the table.
Volume Preference List     The names of the volumes in which Unify DataServer can store the table's rows.
Options                    Information about the type of table and associated volumes, for example, whether rows should be scattered evenly across only the volumes in the preference list or scattered across all volumes.

Example

This is an example of a tblstats report generated from the following command:


tblstats


Because no options are specified, tblstats displays statistics for every table in the database.
Table Statistics Report
=======================
Date: Tue Jan 12 11:38:48 1993
Data Base: /doc/home/examples/file.db
=============================================================================
Table Name: DBUTIL.UTLATH                                       Table ID: 43
...
Table Name: SQL_books.COMPANY                                   Table ID: 90
    Row Count: 10
    Expected Number of Rows: 0
    Segment Size: 8192 bytes
    Record Length: 157 bytes
    Number of Rows/Segment: 52
    Logical Row Use Density: 19.23 %
    Physical Row Use Density: 19.17 %
    Volume Preference List: vol_1 vol_2
    Options: Directkeyed

Table Name: SQL_books.NVNTRY                                    Table ID: 91
    Row Count: 49
    Expected Number of Rows: 0
    Segment Size: 4096 bytes
    Record Length: 60 bytes
    Number of Rows/Segment: 68
    Logical Row Use Density: 72.06 %
    Physical Row Use Density: 71.78 %
    Volume Preference List: <NONE>
    Option: No Option

See Also

btstats, htstats, lnkstats, and volstats utilities


ucc
RHLI or Embedded SQL/A compilation

Syntax

ucc [ option ... ] [-Ocompatible] [-Iinclude_directory] [-Onoinclude] file ...

Arguments

option
    Any valid options for upp, cc, or uld.

-Ocompatible
    Specifies to use the Unify C preprocessor from the compatibility archives. This option is used for compatibility with previous software versions. If you specify this option, you must also specify -c (an option from your system cc compiler).

-Iinclude_directory
    Looks in the specified include directory for header files referenced in the C source file, where include_directory is the directory name. For example, if the header files are in the $UNIFY/../include directory, set the option to -I$UNIFY/../include. Also, see the -Onoinclude option.

-Onoinclude
    Tells ucc to omit the -I$UNIFY/../include option from the upp command line. That is, the -Onoinclude option prohibits the addition of the -I option. -Onoinclude is used because adding the -I$UNIFY/../include option to the command line can affect the way users expect header files to be resolved.

file
    Specifies a C source program and has a .c file name suffix. Files are compiled, and each object program is left on the file whose name is that of the source with .o substituted for .c. The .o file is normally deleted, however, if a single C program is compiled and loaded one time. Files are compiled in the order given. File names ending in .c are preprocessed and compiled as C source programs. File names ending in .s are assembled as assembly source programs.

Description

The ucc utility preprocesses and compiles RHLI and embedded SQL/A program source files. The embedded SQL/A program source files must have been preprocessed by the EPP utility. The ucc utility builds internal tables of information about the source files. The UNIX cc utility does not produce the same information and cannot be used as a replacement for ucc. The ucc utility has, however, been designed to be used as a direct replacement for the UNIX cc utility; that is, wherever the cc utility is currently used, it can be replaced with the ucc utility with no loss of functionality or performance.

The ucc utility calls the upp preprocessor and eventually the cc system C compiler. The upp preprocessor performs the following tasks:

- performs compile-time name binding of database objects, if required. For each source file, upp generates an intermediate temporary data file named file.p, which contains the source code with the expanded object names.
- generates a temporary data file named file.u for each source file that requires compile-time name binding. The .u file contains information identifying the tables and columns referenced in the source file.
- processes and expands any #include statements in the .c files.

The cc compiler compiles the program. The output of the ucc utility is normally an object program file. This file is used as input to the Unify DataServer C loader uld or sqla.ld. To create an object program file, specify the -c option and the .c source file names as arguments. The -c option forces the compiler to suppress loading and create an object file. If the -c option is not specified, uld must be used to load the executable. If any of the source files reference database object names, you must load using uld; attempts to load with cc will fail.

ucc then renames the .p files using the format Ufile.c. ucc then invokes the UNIX or operating system C compiler, cc, passing in as arguments the Ufile.c files, any .u files, and any compiler options named in the ucc command line. At this point, the program is compiled. ucc then renames the resulting object file (Ufile.o) using the standard format file.o.

When using the -Ocompatible option, use the following format:

ucc -Ocompatible -c file


This creates two files: an object file named file.o that is used as an input file to uld, and Cfile.c, which is the preprocessed source file sent to cc. The Cfile.c file can be used for debugging but is not required for uld. To avoid unwanted versions of .c files, do not use metacharacters to specify the file name (such as *.c).

If desired, ucc can be configured by using the configuration variable UCCNAME to invoke a specific UNIX utility to perform the actual work. UCCNAME specifies the name of the UNIX compiler called by ucc; the default is cc, which is normally the standard UNIX compiler. The UCCNAME configuration variable can also be used by the preprocessor, upp, which is normally called by ucc. In this case, the specified UNIX compiler is used to preprocess any C macros or #include files before the source is preprocessed by the upp utility. The UNIX compiler name specified in UCCNAME can be a full path name, but cannot contain embedded spaces.

The exit status for this command is one of the following:

0    The program(s) compiled successfully.
1    The compilation failed.
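For example, a typical compile-and-link sequence might look like the following; the file name app.c is illustrative only and stands for any RHLI or preprocessed embedded SQL/A source file:

ucc -c app.c
uld app.o

The first command produces app.o (and an app.u file if compile-time name binding is required); the second links the object file against the Unify DataServer RHLI libraries, producing a.out unless another output file is specified.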

Warning Use the Unify DataServer utilities ucc and uld to compile any RHLI source files and link any Unify DataServer executables, including custom executables such as AMGR and RPT. If these utilities are not used, the resulting executable may not operate efficiently and will result in the entire database being locked against DDL changes. Warning Dollar sign symbols ($) in assembler code can cause problems for ucc. If possible, avoid including assembler files in the source files to be compiled.

See Also

uld, EPP, and upp utilities
cc in your operating system manual


ucrypt
Password encryption

Syntax

ucrypt

Description

The ucrypt utility encrypts a password with a user name. The ucrypt utility interactively prompts for the password, without echoing the characters in the password. ucrypt then prints the encrypted password on stdout. You can initialize the DBUSER configuration variable by using this utility. For example, if the user name on the server machine is stored in the name shell variable, then DBUSER can be initialized as:
DBUSER=$name/`ucrypt`

The quote symbol used in the DBUSER initialization syntax is the backquote (`). The ucrypt utility is not the same as the UNIX crypt utility.
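As a minimal illustration, assuming a Bourne-style shell and that the remote user name is frank (both are assumptions, not requirements of the utility), the variable could be set interactively as:

name=frank
DBUSER=$name/`ucrypt`
export DBUSER

ucrypt prompts for the password, and the backquotes substitute its encrypted output into the value.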

See Also

ucrypwd function in Unify DataServer: RHLI Reference


udbqls
List database queries

Syntax

udbqls [-ddbname] [-x] [-a] [-tX] [-h]

Arguments

-ddbname
    Query an alternate database.

-x
    Produce XML output.

-a
    Generate information for all queries, including completed queries.

-tX
    Use X as the separator for ASCII output (default: TAB).

-h
    Skip the header.

Description

The udbqls utility lists all running database queries and the elapsed time in minutes. When a query has completed, it is prefixed with an asterisk. For example, the output from udbqls -a would be:

PID     Program   Time     Query
15551   SQL       40322    select * from bigtable into binary dump
15558   SQL*      23242    select * from smalltable into report1

If the -x argument is specified, the XML file format is validated through the following udbqls.dtd:

<!ELEMENT udbqls (query*)>
<!ELEMENT query EMPTY>
<!ATTLIST query pid #REQUIRED>
<!ATTLIST query text #REQUIRED>
<!ATTLIST query time #REQUIRED>
<!ATTLIST query program #REQUIRED>
<!ATTLIST query complete (yes|no) no>

An example XML file would appear as:


<udbqls>
  <query pid="15551" text="select * from bigtable into binary dump" program="SQL" time="40322"/>
</udbqls>
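As a further illustration (the comma separator is an arbitrary choice, not a default), comma-separated output for all queries with no header line could be requested as:

udbqls -a -t, -h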


See Also

uperf


ukill
Process termination

Syntax

ukill [-ddbname] [-signal] [db_pid] [-l]

Arguments

-ddbname
    Specifies the fully-qualified name of the database for which the process is to be killed. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-signal
    Interrupt signal name or number. For example, these are commonly used signals:
    15 or TERM    Terminate the specified process.
    1 or HUP      Hang up the specified process.
    9 or KILL     Kill the specified process.

db_pid
    Process ID of a Unify DataServer database process; refer to the ps operating system command listing.

-l
    List signal names. Use this argument alone to determine the signals to use.

You must type at least one of the command line arguments shown.

Description

Gracefully terminates Unify DataServer database processes or gathers Cache Manager daemon statistics. When terminating processes, ukill is similar to the kill operating system command with one exception: ukill translates the SIGKILL signal into a SIGTERM signal. ukill sends the SIGTERM (terminate, 15) signal to the specified processes. If a signal name or number is specified as a command-line argument (using a - prefix), that signal is sent instead.


The terminate signal will kill processes that do not catch the signal; ukill -9 is identical to the terminate signal. By convention, if process number 0 is specified, all members in the process group (for example, processes resulting from the current login) are signaled; however, this works only if you use the Bourne shell, sh(1), not the C-shell, csh(1). Terminated processes must belong to the current user unless the current user is a super user.

ukill can also be used to gather cache statistics by specifying the -1 option and the Cache Manager daemon (cmdmn) process ID (db_pid).
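For example, the following sequence (the process ID 15551 is purely illustrative) first lists the available signal names and then hangs up a specific database process:

ukill -l
ukill -HUP 15551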

Related Configuration Variables


ukill writes the cache statistics into a diagnostic log file in the directory specified by the DMNTMP configuration variable. The diagnostic log is named dmnlogpid, where pid is the process ID of a cmdmn. If DMNTMP is not specified, the log directory defaults to /tmp.

See Also

uperf utility
Your operating system documentation


uld
RHLI application loader

Syntax

uld [-ddbname] [-Iufile_list] [-E] [-Olibraries] [-Omaxcols=num] [-Omaxtbls=num] [-Oaccess_mode] [-Oldopt=load_option] [load_options] file...

Arguments

-d dbname
    Specifies the fully-qualified database name of the database. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-Iufile_list
    Specifies a path name to a file that contains the absolute path names of the .u files generated by upp (or ucc). If omitted, any .u files must be located in the same directory as their corresponding .o files. The -I option must be used in these cases:
    - The .u files are not located in the same directory as their corresponding .o files.
    - The object files you are loading have database references and have been archived to an archive file. You must specify (by using the file to which the -I option points) where the corresponding .u file is located for each .o in the archive file.

-E
    Specifies that the intermediate Ufile_h.c data files are to be placed in the current directory. The data files can be useful in determining whether name binding is being performed correctly during compilation and whether changes have occurred to any database objects between the time that the object files are loaded and the time that the executable is run.

-Omaxcols=num
    Allocates space in the preprocessor's descriptor table for columns, where num is the number of column entries to allocate space for. An entry is generated for each column referenced in the source file (specified by file). If the -Omaxcols option is not specified, uld uses the default value specified by the MAXCOLS configuration variable.

-Omaxtbls=num
    Allocates space in the preprocessor's descriptor table for tables, where num is the number of table entries to allocate space for. An entry is generated for each table referenced in the source file (specified by file). If the -Omaxtbls option is not specified, uld uses the default value specified by the MAXTBLS configuration variable.

-Oaccess_mode
    Specifies the database access mode for remote access databases. The access_mode keyword must be one of these:

    local_only
        Links the files to access local databases only. In this case, the DBHOST configuration variable must be undefined or set to the value ".".
    remote_only
        Links the files to access remote databases only. In this case the DBHOST configuration variable must be set to a value of the form host_name.
    local_remote
        Links the files to access local and remote databases. In this case, the DBHOST configuration variable must be set to either "." or a value of the form host_name.

    If you do not include -Olocal_only, -Oremote_only, or -Olocal_remote on the command line, the ULDACCESS configuration variable is used to determine how the application should be linked:
    - If ULDACCESS is set to local_only, the files are linked to access only local databases.
    - If ULDACCESS is set to remote_only, the files are linked to access only remote databases.
    - If ULDACCESS is set to local_remote, the files are linked to access local and remote databases.
    If the access mode is not specified on the command line and the ULDACCESS configuration variable is not set, uld by default links the files as local only.

-Olibraries
    Substitutes the names of the Unify DataServer RHLI libraries.

-Oldopt=load_option
    Specifies that the specified load options are to be ignored by uld and passed directly to the UNIX system C loader, ld.

file
    File to be loaded. This can include RHLI program object file names, archive file names, or .u files. Files are loaded in the order given. The archive file name is U2000?.a, where ? is an alphabetic letter. These archives typically reside in the $UNIFY directory. When loading a program using the CHLI compatibility archives, the archive file complib.a must be included on the command line before the RHLI archive files. The complib.a file also typically resides in the $UNIFY directory.

Description

The uld utility verifies that changes have not occurred to the referenced database objects since the object files were compiled, then links the files by using the operating system link loader, ld. The uld utility is required to link RHLI executables because it builds internal tables of information about the executable; the UNIX ld utility does not produce the same information and cannot be used directly. The uld utility has been designed to be used as a direct replacement for the UNIX ld utility. That is, wherever the ld utility is currently used, it can be replaced with the uld utility with no loss of functionality or performance.

The input to uld is the .u files created by upp (or ucc) and the .o files created by cc. The .u files are created when any references to database objects are found in the C program source file that is being compiled. The .u files are then used by uld to verify that the referenced database objects have not been redefined since the last time the source files were compiled.

ld combines the object files, resolves external references, searches libraries, and produces an executable file. Unless an output file is specified, ld produces a file named a.out. If you are passing additional options to uld to be passed on to ld, see your host operating system C loader manual for more information about placement of ld options. Also see the manual for your UNIX or operating system C loader for more information about the use of reserved symbols and the loading of libraries.


If archive files are specified in the uld command line and these files rely on compile-time name binding, then the -I option can be specified. This option requires that the user create a file that lists the absolute path names of the .u files that correspond to the archive files specified and provide the name of this file with the option. The .u files are created by ucc (or upp) during compilation if the source file performed compile-time name binding to database objects. uld uses this information to verify that the referenced objects have not been updated between compile time and load time.

If changes have been made to the referenced tables and columns since compilation, uld fails and generates an error code stating which object files reference the invalid identifiers. If no changes have been made to the referenced tables and columns since compilation, uld generates an intermediate source code file named Ufile_h.c which contains information that is used when the executable is run and the database is opened. The intermediate source file Ufile_h.c is compiled by uld and the resulting object file is then passed on to the UNIX or host operating system C loader ld. ld then produces an object module that can be executed (if no errors occurred during loading).

Once the files have been loaded and an executable has been produced, information from the Ufile_h.c files is used to perform checks again at runtime. At runtime, this information is used to verify that where name binding was performed at compile-time, the referenced objects have not been updated between compile-time and runtime. In order to bypass load and run-time object validation, you must remove all .u files from the current directory before loading the object files. In addition, while running uld, you should not specify the -I option.

If desired, uld can be configured by using the configuration variable ULDNAME to invoke a specific UNIX utility to perform the actual work. ULDNAME specifies the name of the UNIX linker called by uld; the default is cc, which is normally used to call the standard UNIX linker. The UNIX linker name specified in ULDNAME can be a full path name, but cannot contain embedded spaces. ULDNAME values cannot include compiler options or universe prefixes such as ucb cc.
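As an illustration only (the path /usr/bin/cc is an assumption about where a suitable compiler driver might live on a given system, and whether the variable belongs in the environment or in your configuration file follows your site's normal practice for Unify configuration variables), ULDNAME could be set from a Bourne-style shell before running uld:

ULDNAME=/usr/bin/cc
export ULDNAME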

For example, the following shows a sample command line used to call uld with the -Olibraries option, followed by the resulting command line that is passed to the linker:

uld tblmap19.o -lx -Olibraries -ly -LABC -lz

tblmap19.o -lx
/ASQL/lib/U2000a.a /ASQL/lib/U2000b.a
/ASQL/lib/U2000c.a /ASQL/lib/U2000d.a
/ASQL/lib/U2000e.a /ASQL/lib/U2000f.a
/ASQL/lib/U2000g.a /ASQL/lib/U2000h.a
/ASQL/lib/U2000i.a /ASQL/lib/U2000j.a
/ASQL/lib/U2000x.a /ASQL/lib/U2000y.a
/ASQL/lib/U2000z.a -lPW -lm
/ASQL/lib/U2000a.a /ASQL/lib/U2000b.a
/ASQL/lib/U2000c.a /ASQL/lib/U2000d.a
/ASQL/lib/U2000e.a /ASQL/lib/U2000f.a
/ASQL/lib/U2000g.a /ASQL/lib/U2000h.a
/ASQL/lib/U2000i.a /ASQL/lib/U2000j.a
/ASQL/lib/U2000x.a /ASQL/lib/U2000y.a
/ASQL/lib/U2000z.a -lPW -lm
-ly -LABC -lz

Otherwise, if you do not use the -Olibraries option, the following command-line sequence is passed to the linker:

1. ID mapping object files (generated by uld)
2. user object files and unknown command-line options
3. the Unify DataServer Compatibility Archives library, if the -Ocompatible command-line option has been specified
4. the Unify DataServer RHLI libraries
5. user libraries specified by using the -l command-line option

When using the same command line as shown in the previous example, but without the -Olibraries option, the results are slightly different:


uld tblmap19.o -lx -ly -LABC -lz

tblmap19.o -LABC
/ASQL/lib/U2000a.a /ASQL/lib/U2000b.a
/ASQL/lib/U2000c.a /ASQL/lib/U2000d.a
/ASQL/lib/U2000e.a /ASQL/lib/U2000f.a
/ASQL/lib/U2000g.a /ASQL/lib/U2000h.a
/ASQL/lib/U2000i.a /ASQL/lib/U2000j.a
/ASQL/lib/U2000x.a /ASQL/lib/U2000y.a
/ASQL/lib/U2000z.a -lPW -lm
/ASQL/lib/U2000a.a /ASQL/lib/U2000b.a
/ASQL/lib/U2000c.a /ASQL/lib/U2000d.a
/ASQL/lib/U2000e.a /ASQL/lib/U2000f.a
/ASQL/lib/U2000g.a /ASQL/lib/U2000h.a
/ASQL/lib/U2000i.a /ASQL/lib/U2000j.a
/ASQL/lib/U2000x.a /ASQL/lib/U2000y.a
/ASQL/lib/U2000z.a -lPW -lm
-lx -ly -lz

For example, if a -l option is positionally dependent on another preceding command-line option, -Olibraries can be used to preserve the command-line sequence.

Warning  Use the Unify DataServer utilities ucc and uld to compile any RHLI source files and link any Unify DataServer executables, including custom executables such as AMGR and RPT. If these utilities are not used, the resulting executable may not operate efficiently and will result in the entire database being locked against DDL changes.

Example

If the .u files for your application are located in a different location than the corresponding .o files, you must specify the location of the .u files when loading with uld by using the -I option. For example:

1. Create a file that contains a list of absolute path names for the .u files that were created by upp or ucc. For example, the file /tmp/ufiles contains the following:

   /usr/proj/bin/acctg.u
   /usr/acct/frank/payroll.u
   /usr/acct/frank/payables.u
   /usr/acct/frank/receivables.u

2. Load the application .u, object, and archive files by using the C Program Loader, uld. For example:

   # uld -I/tmp/ufiles acctg.o acctg.a

   The contents of archive file acctg.a are as follows:

   # ar tv acctg.a
   /usr/acct/frank/payroll.o
   /usr/acct/frank/payables.o
   /usr/acct/frank/receivables.o

On some platforms, the system loader uses the -d option to indicate which loader mode to operate in, dynamic or static, while uld interprets -d as specifying the database name to use. For example, the following command results in an error message:

uld -dn prog ar1.a ar2.a
Error: uld: unable to open database; No such file, directory, or program (-22)

To avoid the option conflict, use the following command to pass the -dn option directly to ld:
uld -Oldopt=-dn prog ar1.a ar2.a

See Also

ucc and upp utilities
The ld description in your operating system documentation


ulint
RHLI application verifier

Syntax
ulint [-Ocompatible] [lint_options] file ... [-lrhli]

Arguments

-Ocompatible
    This option indicates that all of the files are compatibility CHLI source files, emulating UNIFY 4.0 functionality. This option will validate any CHLI functions in the source file for correctness.

lint_options
    These are the UNIX lint options, as documented in the UNIX reference manual.

file
    The file to be verified. File names ending with .c are interpreted as RHLI C source file names. Files with other suffixes are handled according to the rules of the UNIX lint utility.

-lrhli
    This option will validate any RHLI functions in the source file for correctness. This option requires more processing time.

You can use any number of ulint options, in any order, intermixed with the file arguments.

Description

Attempts to detect RHLI C program file features that are likely to be bugs, non-portable, or wasteful. ulint also checks type usage more strictly than the compilers. ulint is identical to the UNIX lint utility, except that ulint works with RHLI source files containing ucc compiler directives that UNIX lint cannot process.
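For illustration only (the file name payroll.c is hypothetical), a source file could be checked, including verification of its RHLI calls, with:

ulint payroll.c -lrhli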

See Also

The lint description in your operating system documentation.



unifybug

Syntax

unifybug

Description

The unifybug utility lets you submit bug reports to Unify Customer Support. The bug report covers all applicable areas of the Unify DataServer product, such as software problems, hardware interactions, device configurations, and documentation problems, as well as enhancement requests. After supplying a valid Customer Support Access ID, your submitted bug will be logged with Customer Support.

The unifybug utility prompts you for the information necessary to complete the bug report and appends to the bug report any useful information about your user environment. The utility then lets you edit the report before it is actually sent to Unify Customer Support. You can also cancel the report before it is sent.

The bug report is sent to Unify Customer Support by using electronic mail. The mail system must be installed on your user site to enable the unifybug utility to work correctly. For mail, unifybug uses the following default directory search path:

unifybug

If your mail system provides an aliasing mechanism, adding an entry for unifybug is sufficient to send the bug report. Otherwise, the MAIL_PATH variable in the install/settings file (in the release directory) must contain the full directory search path and file name, as shown in this example:
MAIL_PATH=ucdavis!csusac!unify!unifybug

You must establish the correct mail directory search path and file name to Unify's Customer Support machine. If you need assistance, Customer Support will be happy to help you.

Example

The following example shows a sample unifybug session that reports a documentation error in this manual.

=== ACCELL/SQL DataServer Problem Incidence Tracking System bug report ===

Follow the instructions; press RETURN after entering each response.

Please enter your Unify Customer Support Access ID (optional)
(press RETURN if unknown)? xxxxx

One line summary of bug? Example on p. 165 does not work

Problem Severity:
    Critical (prevents total use of system)
    High (prevents use of affected portion)
    Medium (workarounds difficult or unknown)
    Low (workarounds known, but a nuisance)
    Enhancement (a request for a new capability or feature)
    For Your Information
(initial upper/lowercase letter is sufficient)? Low

Problem Category:
    Documentation (ACCELL/SQL Manuals, message files, etc)
    Hardware (ACCELL/SQL interaction with vendor hardware, etc)
    Utilities (ACCELL/SQL interaction with operating system utilities; ie lpr)
    Software (ACCELL/SQL interfaces, libraries, etc)
    Tools (ACCELL/SQL utilities, diagnostics etc)
    Other (or unidentified)
(initial upper/lowercase letter is sufficient)?
... please enter one of the above selections d

Document Title (Release Note number)? Unify DataServer: Configuration Variable and Utility Reference
Part number (on back cover)? 7878
Page Number? 165
Rev Number on Page? none

Enter a description ... (terminate with CTRL-D)
I tried the example listed for creatdb. I entered it exactly as shown and it did not work.

Report complete; what do you want to do with the report?
    Display bug report
    Edit report
    Send report
    Abort (and save report)
    Quit (do not save report)
(initial upper/lowercase letter is sufficient)? s

This will send the report to Unify Corporation; do you really want to do this?
(please enter yes or no) yes


uperf
Performance monitoring

Syntax

uperf [-C] [-ddbname] [-E] [-e] [-F log_file] [-L log_file] [-l free_log_thresh] [-M] [-m shared_mem_thresh] [-O] [-Q] [-s delay] [-T] [-U] [-x]

Arguments

-C
    Do not display cache statistics.

-d dbname
    Specifies the fully-qualified database name of the database to be monitored. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-E
    Do not read the dbname.err file.

-e
    Do not skip existing error log entries. All entries in the current dbname.err file will be printed after start up.

-F log_file
    Specifies the name of the file to write funnel locking information.

-l free_log_thresh
    Change the number of free log blocks threshold. When the number of blocks falls below this level, the information is highlighted and the terminal bell is rung, unless disabled with the -Q flag.

-m shared_mem_thresh Reserved for future use. -O -Q


Utilities Reference

Display active process IDs instead of total number of operations. Do not ring the terminal bell.
311

-s delay
    Specifies a different number of seconds to sleep than the default value of 2.

-T
    Do not display transactions per second (TPS) information.

-U
    Do not display TPS per user (TPS/U) information.

-x
    Display each query as it executes, including its elapsed time. If a query has completed execution, the elapsed time is prefixed with an asterisk. Queries in prepared statements are also displayed. This information can help you identify queries which run slowly and therefore require optimization.

Interactive screen commands for uperf are:

C         Toggle the display of cache statistics.

c         Toggle between the display of cache statistic totals since the database was started and the display of cache statistics since the last screen update.

E         Toggle error log checking.

e         Skip to the end of the error log. This argument is useful when the -e command-line argument is specified and you want to skip to the end, or if the number of error log entries is excessive and uperf cannot catch up.

h or ?    Force a database syncpoint. The syncpoint is accomplished by running a separate File Manager daemon (fmdmn) process to perform the synchronization.

l         Change the number of free log blocks threshold. When the number of blocks falls below this level, the information is highlighted and the terminal bell is rung, unless the bell is disabled with the -Q command-line argument.

m         Reserved for future use.

U         Toggle the display of the transactions per second per user (TPS/U) information.

M         Toggle the display of shared memory statistics.

O         Toggle between displaying total number of operations and active process IDs.

Q         Toggle the state of the quiet flag.

q         Quit. Most signals (SIGHUP, SIGINT, SIGQUIT, SIGTERM) also cause uperf to exit gracefully.

^R        Refresh the screen display.

s         Prompts for a new number of seconds to sleep between display updates.

T         Toggle the display of the transactions per second (TPS) information.

Description

The uperf utility is an interactive performance monitoring utility that displays the current state of the database. The displayed items are described below. uperf is designed to be run continuously on an unused terminal. Since uperf does not register itself with the database, unexpected results can occur if the database is shut down or the shared memory is removed while uperf is running.

uperf does not read the TPS information during interactive input of some variables. Therefore, the transactions per second (TPS) and TPS per process (TPS/P) information can become out of date quickly when altering variables. It is best to specify the variables on the command line. The screen manager for uperf uses curses, a UNIX-specific library.

The output from uperf is described below.

Physical Logging Section

Last sync
    The date and time of the last database sync is displayed. It is updated whenever a database sync completes.

# users
    The number of users that have the database open. Since uperf and the cmdmn do not open the database, they are not included in this number.


# active ops
    The number of active operations. These operations are usually short lived, so larger numbers indicate more database activity. During part of the database sync, these operations are suspended; if so, this number is zero.

total # ops
    The number of operations since the last sync.

PID
    The process ID. This is displayed when there are any active operations, instead of the total number of operations (total # ops).

state
    The current physical log state. The states include:
    SYNC-RUNNING          A syncpoint is in progress; user processes are not suspended.
    UPDATERS-SUSPENDED    A syncpoint is in progress; user processes are suspended.
    BACKUP-RUNNING        A backup is being performed.
    FAILURE               The database needs to be recovered.
    NORMAL                Normal processing.

Transaction Logging Section

free blocks
    The number of log blocks still available in the transaction log. The maximum amount of this value is controlled by the LOGBLK configuration variable. Running low on log space may result in additional file system syncs. The percentage available is also shown. If the number of blocks is below the current log threshold, this information is shown in reverse video, and the terminal bell is rung, unless inhibited by the -Q flag.

# active tx
    The number of transactions that are currently active in the system. Most processes will have a user transaction and a system transaction, so this number will generally be two times the number of processes.

Transactions per second
    These three numbers represent the average number of committed transactions over the last minute, five minutes, and fifteen minutes respectively. Since statistics are gathered by uperf itself, enough elapsed time is needed before these figures are correct. A letter e following a figure indicates the number has been estimated because the utility has not been run long enough.

TPS per user
    These three numbers represent the average number of committed transactions over the last minute, five minutes, and fifteen minutes respectively, per process.


Shared Memory Section

shmkey
    The segment shared memory key, in decimal. This number matches the associated SHMKEY configuration variable.

inuse
    The bytes of memory which are currently allocated in the shared memory segment. This normally includes the shared memory reserve, assuming it has not yet been released.

free
    The bytes of memory which are currently available in the shared memory segment.

% free
    Percentage free. Note that percentages do not include the reserve shared memory.

% used
    Percentage in use.

Cache Section

Logical Reads
    The total number of database reads requested by database processes.

Logical Writes
    The total number of database writes requested by database processes.

Physical Reads
    The total number of database reads that did not have a page currently in the database cache and were fetched from the file system.

Physical Writes
    The total number of pages that have been written to the file system.

Page Reclaims
    The number of pages which were on the free list when found in the cache.

Page Hits
    The number of reads and writes that referenced a page which was already present in the cache.

Page Misses
    The number of reads and writes that referenced a page that was not present in the cache.

Freelist Size
    The number of pages that are currently available.

User Replacements
    The number of times that user processes have had to perform page replacement.


Funnel Lock Statistics

SHMID
    The shared memory ID.

# Success
    The number of successful funnel locks granted.

Contention
    The number of funnel locks that could not be granted.

Contention ratio
    The percentage of contention to success.
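As an illustration of combining the command-line options described above (the five-second interval is an arbitrary choice, and the default database is assumed), the terminal bell can be silenced and the refresh rate slowed with:

uperf -s 5 -Q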

Example

The following illustrates a sample snapshot of the uperf dynamic screen report:
05/29/99                    /s3/pits/db/pits.db                    13:38:14

Physical Logging                         Tx Logging
last sync: 05/29 13:38:13                free blocks: 4953 (100%)
# users: 14                              # active tx: 33
# active ops: 2    tot # ops: 36
state: NORMAL

Shared Memory
segkey  1    inuse  329920    free  718556    %free  68    %used  32

Tx Per Second:    5.15    5.10e    5.10e
TPS per user:     0.37    0.35e    0.35e

Cache Statistics
Logical Reads    132    Physical Reads    10    Page Reclaims        0
Logical Writes     8    Physical Writes    0    Freelist Size       47
Page Hits        130    Page Misses       10    User Replacements    0

Time      Program  PID   Caller         Offender  Status  Errnum
13:38:05  shutdb   4439  database daem            0       0  process 4439 initiating shutdown
13:38:11  shutdb   4439  fmdmn          vsusync   4439    0  database sync occurred
13:38:13  lgdmn    372   lgdmn          /s3/pits/db/pi  0  0  automatic synchronization (shutdown) for database /s3/pits/db/pits.db

See Also

Unify DataServer: Managing a Database


upp
RHLI and Embedded SQL/A Preprocessor

Syntax

upp [-G] [-ddbname] [-Omaxcols=num] [-Omaxtbls=num] [ -s schema -S schema_ID ] [option ...] file ...

Arguments

-G

Turns on debugging mode. All output from the preprocessor is sent to stdout as well as to the .p file. Specifies the fully-qualified name of the database for which files are to be processed. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-d dbname

-Omaxcols=num
    Allocates space in the preprocessor's descriptor table for columns, where num is the number of column entries to allocate space for. An entry is generated for each column referenced in the source file (specified by file). If the -Omaxcols option is not specified, upp uses the default value specified by the configuration variable MAXCOLS.

-Omaxtbls=num
    Allocates space in the preprocessor's descriptor table for tables, where num is the number of table entries to allocate space for. An entry is generated for each table referenced in the source file (specified by file). If the -Omaxtbls option is not specified, upp uses the default value specified by the configuration variable MAXTBLS.

-s schema_name
    Schema name. Indicates that privileges associated with the specified schema should be used when performing name binding.

-S schema_ID
    Authorization ID of a schema. Indicates that privileges associated with the specified Authorization ID of a schema should be used when performing name binding.

options
    Specifies that the specified load options are to be passed directly to the UNIX system C Loader, ld.

file
    Specifies the source program file name. Files are preprocessed in the order given. A .p file is generated for each source file.

Description

Typically, the Unify DataServer preprocessor, upp, is implicitly invoked by the Unify DataServer C compiler ucc and performs compile-time name binding on the source files in the command line. However, to aid in debugging syntax errors, you can call upp directly so that you can examine the intermediate files that are produced. You pass in the source file names (file.c) as arguments.

For each source file, upp generates an intermediate temporary data file named file.p, which has been preprocessed by the system C preprocessor, and which contains the source code with the expanded object names. The preprocessor that is used is determined by the UPPNAME configuration variable. The preprocessor also generates a temporary data file named file.u for each source file that requires compile-time name binding. The .u file contains information identifying the tables and columns referenced in the source file.

The .p files can be examined directly to verify that the database object names are being expanded correctly. You must rename the resulting .p file(s) by using the format file.c. Then you must compile the files by calling cc, passing the .c files and any .u files as arguments. When upp is invoked by ucc, ucc renames the .p files so that they can be compiled by the UNIX or operating system C compiler, cc. ucc then invokes cc, passing in as arguments the .c files and any .u files. The loader uld must be used to load object files generated using either ucc or the combination of upp and cc.

The exit status is 0 if the program(s) were successfully processed or 1 if processing failed.
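For example, to examine the preprocessor output directly (the file name payroll.c is illustrative only), debugging mode can be turned on so that the expanded source is echoed to stdout as well as written to payroll.p:

upp -G payroll.c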


See Also

ucc and uld utilities
cc in your operating system documentation


volstats
Volume statistic collection

Syntax

volstats [-d dbname] [-v volume_name] [-V volume_ID]

Arguments

-d dbname

Specifies the fully-qualified database name of the database that contains the volumes. If any portion of this argument is omitted, configuration variables help determine the default database. The fully-qualified name format is described on page 145.

-v volume_name
    Specifies the name of the volume for which to display statistics.

-V volume_ID
    Specifies the identifier of the volume for which to display statistics.

If you do not specify the -v or -V option when you call volstats, volstats produces statistics about all the volumes in the database.

Description

The volstats utility displays statistics about database volumes. The volstats report contains the following information:

Volume Name
    A mnemonic name for the volume.

Volume ID
    A unique number that identifies the volume. The volume ID starts from the root volume: volume 1.

Volume File Name
    The complete path name of the database file where the volume resides.

Option
    A message that indicates the volume processing options, for example, whether the volume is a device or a file (and the type of file).

Used Active Space
    The number of bytes of active space used by the volume.

Available Active Space
    The number of bytes of active space not used by the volume.

Total Active Space
    Used Active Space + Available Active Space.

Allocated Inactive Space
    The number of bytes of inactive space allocated for the volume.

Unallocated Inactive Space
    The number of bytes of inactive space that is not allocated for the volume.

Maximum Volume Size
    The largest number of bytes the volume can allocate.

Volume Offset
    The offset in the volume file.

Volume Page Size
    The number of bytes per volume I/O page.

The following diagram illustrates volume use.

[Diagram: volume use, showing unallocated inactive space, allocated inactive space, total active space (used active and available active), the maximum volume size, and the actual file size.]

For example, in a volume that has 2048 bytes used, perhaps only 1024 bytes actually contain data. The volume may have gaps that contain no data, as shown in the following diagram:
[Diagram: volume space usage, showing data blocks interleaved with unused gaps.]

You can compress volumes and reclaim unused space by rebuilding all the database tables.

Security

You must have DBA authority to execute the volstats utility.
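For instance, to report on a single volume by name (the volume name volume_1 is illustrative), you could run:

volstats -v volume_1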

Example

This is an example of a volstats report:


Volume Statistics Report
========================
Date: Thu Jan 14 11:18:07 1993
Data Base: /doc/home/examples/file.db
================================================================
Volume Name: volume_1                   Volume ID: 1
Volume File Name: /doc/home/examples/file.db
Option: Regular file

Used Active Space:            1212416 bytes
Available Active Space:       0 bytes
Total Active Space:           1212416 bytes
Allocated Inactive Space:     73728 bytes
Unallocated Inactive Space:   Unlimited
Maximum Volume Size:          Unlimited
Volume Offset:                0 bytes
Volume Page Size:             2048 bytes

================================================================ ...

See Also

btstats, htstats, lnkstats, tblstats


