
Open Text Archive and Storage Services

Administration Guide
The guide describes the administration, monitoring and
maintenance of Open Text Archive and Storage Services and
introduces guidelines for troubleshooting.

AR090701-ACN-EN-6

Open Text Archive and Storage Services


Administration Guide
AR090701-ACN-EN-6
Rev.: 2011-Dec-20
This documentation has been created for software version 9.7.1.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Open Text Corporation
275 Frank Tompa Drive, Waterloo, Ontario, Canada, N2L 0A1
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Email: support@opentext.com
FTP: ftp://ftp.opentext.com
For more information, visit http://www.opentext.com

Copyright by Open Text Corporation, Open Text Inc.


Open Text Corporation is the owner of the trademarks Open Text, OpenText, The Content Experts, OpenText ECM Suite,
OpenText eDOCS, eDOCS, OpenText FirstClass, FirstClass, OpenText Exceed, OpenText HostExplorer, OpenText Exceed
OnDemand, OpenText Exceed 3D, OpenText Exceed Freedom, OpenText Exceed PowerSuite, OpenText Exceed XDK,
OpenText NFS Solo, OpenText NFS Client, OpenText NFS Server, OpenText NFS Gateway, OpenText Everywhere, OpenText
Real Time, OpenText Eloquent Media Server, OpenText Integrated Document Management, OpenText IDM, OpenText
DocuLink, Livelink, Livelink ECM, Artesia, RedDot, RightFax, RKYV, DOMEA, Alchemy, Vignette, Vizible, Nstein,
LegalKEY, Picdar, Hummingbird, IXOS, Alis Gist-in-Time, Eurocortex, Gauss, Captaris, Spicer, Genio, Vista Plus, Burntsand,
New Generation Consulting, Momentum Systems, DOKuStar, and RecoStar among others. This list is not exhaustive.
All other products or company names are used for identification purposes only, and are trademarks of their respective owners. All rights reserved.

Table of Contents

List of tables ................................................................................................ 13
List of Figures ............................................................................................... 15

PRE      Introduction                                                             17
i        About This Document ....................................................................... 17
ii       Further Information .......................................................................... 18
iii      Conventions ..................................................................................... 20

Part 1   Overview                                                                 23

1        Archive and Storage Services .......................................................... 25
1.1      Basic Features of Archive and Storage Services ............................. 25
1.2      Flexibility for Different Business Processes ..................................... 25
1.3      The Main Components of Archive and Storage Services ................. 26
1.4      Important Directories on the Archive Server .................................... 27

2        Basic Principles of Archives ............................................................. 29
2.1      Documents, Data and Logical Archives ........................................... 29
2.2      Content Capture and Storage .......................................................... 29
2.3      Content Retrieval ............................................................................. 30
2.4      Logical Archives ............................................................................... 31
2.4.1    Disk Buffers ..................................................................................... 33
2.4.2    Storage Devices .............................................................................. 33
2.4.3    Storage Scenarios ........................................................................... 34
2.4.4    Pools and Pool Types ...................................................................... 35
2.4.5    Caches ............................................................................................ 37
2.5      Jobs ................................................................................................. 37

3        Administration Client and the Main Objects of Archive and
         Storage Services .............................................................................. 39
3.1      Administration Client ........................................................................ 39
3.2      Main Objects of Archive and Storage Services ................................ 39
3.2.1    Infrastructure ................................................................................... 40
3.2.2    Archives ........................................................................................... 40
3.2.3    Environment .................................................................................... 41
3.2.4    System ............................................................................................ 41

Part 2   Configuration                                                            43

4        Setting Up the Infrastructure ............................................................ 45
4.1      Configuring Disk Volumes ................................................................ 45
4.1.1    Overview ......................................................................................... 45
4.1.2    Creating and Modifying Disk Volumes ............................................. 46
4.2      Configuring Buffers .......................................................................... 47
4.2.1    Creating and Modifying a Disk Buffer .............................................. 47
4.2.2    Attaching a Disk Volume to a Disk Buffer ........................................ 49
4.2.3    Detaching a Volume from a Disk Buffer ........................................... 49
4.2.4    Configuring the Purge Buffer Job .................................................... 50
4.2.5    Checking and Modifying Attached Disk Volumes ............................. 50
4.2.6    Synchronizing Servers ..................................................................... 51
4.2.7    Configuring Replicated Buffers ........................................................ 52
4.3      Configuring Caches ......................................................................... 52
4.3.1    Overview ......................................................................................... 52
4.3.2    Creating and Deleting Caches ......................................................... 54
4.3.3    Adding Hard Disk Volumes to Caches ............................................. 54
4.3.4    Deleting Assigned Hard Disk Volumes ............................................ 55
4.3.5    Defining Priorities of Cache Volumes .............................................. 55
4.4      Installing and Configuring Storage Devices ..................................... 56
4.5      Configuring Hard Disk Based Storage Devices (Single File VI) ........ 57
4.6      Configuring Storage Devices with Optical Media (STORM) ............. 58
4.6.1    Attaching and Detaching Devices .................................................... 58
4.6.2    Inserting a Single Volume ................................................................ 58
4.6.3    Inserting Several Media at Once ..................................................... 59
4.6.3.1  Offline Import .................................................................................. 59
4.6.3.2  Testing Jukebox Slots ..................................................................... 60
4.6.4    Initializing Storage Volumes ............................................................ 60
4.6.4.1  Automatic Initialization and Assignment ......................................... 61
4.6.4.2  Manual Initialization of Original Volumes ........................................ 61
4.6.4.3  Manual Initialization of Backup Volumes ........................................ 61
4.6.4.4  Add Volume to Document Service .................................................. 62
4.7      Checking Unavailable Volumes ....................................................... 62

5        Configuring Archives and Pools ....................................................... 63
5.1      Logical Archives ............................................................................... 63
5.1.1    Data Compression ........................................................................... 64
5.1.2    Single Instance ................................................................................ 65
5.1.3    Retention ......................................................................................... 65
5.2      Creating and Configuring Logical Archives ...................................... 67
5.2.1    Creating a Logical Archive ............................................................... 68
5.2.2    Configuring the Archive Security Settings ....................................... 68
5.2.3    Configuring the Archive Settings ..................................................... 70
5.2.3.1  Configuring the Server Priorities ..................................................... 71
5.2.4    Configuring the Archive Retention Settings ..................................... 72
5.3      Creating and Modifying Pools .......................................................... 74
5.3.1    Creating and Modifying a HDSK (Write Through) Pool .................... 74
5.3.2    Creating and Modifying Pools with a Buffer ..................................... 75
5.3.2.1  Write At-once Pool (ISO) Settings ................................................... 76
5.3.2.2  Write Incremental (IXW) Pool Settings ............................................ 78
5.3.2.3  Single File (VI, FS) Pool Settings .................................................... 80
5.3.3    Marking the Pool as Default ............................................................ 81
5.4      Creating and Modifying Storage Tiers ............................................. 81

6        Configuring Jobs and Checking Job Protocol .................................. 83
6.1      Important Jobs and Commands ....................................................... 83
6.2      Starting and Stopping the Scheduler ............................................... 85
6.3      Starting and Stopping Jobs .............................................................. 86
6.4      Enabling and Disabling Jobs ............................................................ 86
6.5      Checking Settings of Jobs ................................................................ 86
6.6      Creating and Modifying Jobs ............................................................ 87
6.7      Setting the Start Mode and Scheduling of Jobs ............................... 87
6.8      Checking the Execution of Jobs ....................................................... 88

7        Configuring Security Settings .......................................................... 91
7.1      Overview .......................................................................................... 91
7.2      SecKeys / Signed URLs ................................................................... 92
7.2.1    Configuring SecKeys on the Archive Server .................................... 94
7.2.1.1  Activating SecKeys ......................................................................... 94
7.2.1.2  Enabling a Certificate ...................................................................... 95
7.2.1.3  Granting Privileges for a Certificate ................................................ 95
7.2.2    Importing and Checking Certificates for Authentication ................... 96
7.2.2.1  Importing a Global Certificate for All Archives ................................. 96
7.2.2.2  Importing a Certificate for a Single Archive ..................................... 97
7.2.2.3  Checking Certificates of an Archive ................................................ 97
7.2.3    Using SecKeys from SAP ................................................................ 98
7.2.4    Using SecKeys from Other Leading Applications and Components ... 98
7.3      Secure HTTP Communication with SSL .......................................... 100
7.3.1    SSL Connection to Document Service ............................................ 100
7.3.2    SSL Connection Using Tomcat Web Server .................................... 101
7.4      Encrypted Document Storage .......................................................... 101
7.4.1    Creating a System Key for Document Encryption ........................... 101
7.4.2    Activating Encryption for a Logical Archive ..................................... 102
7.5      Importing and Checking Encryption Certificates .............................. 102
7.5.1    Importing Encryption Certificates .................................................... 102
7.5.2    Checking the Encryption Certificates .............................................. 103
7.6      Exporting and Importing the Key Store ............................................ 103
7.7      Analyzing Security Settings ............................................................. 105
7.8      Checksums ...................................................................................... 106
7.9      Timestamps ..................................................................................... 107
7.9.1    Importing a Certificate for Timestamp Verification ........................... 110
7.9.1.1  Checking Certificates for Timestamp Verification ........................... 110
7.9.2    Configuring ArchiSig Timestamps ................................................... 111
7.9.3    Migrating Existing Document Timestamps ...................................... 112
7.9.4    Renewing Timestamps of Hash Trees ............................................. 112
7.9.5    Renewing Hash Trees ..................................................................... 113
7.10     Timestamp Server ........................................................................... 114
7.10.1   Overview ........................................................................................ 114
7.10.2   Configuring Timestamp Server ....................................................... 114
7.10.2.1 Configuring Basic Settings with Timestamp Server Administration ... 114
7.10.2.2 Configuring Special Settings with Administration Client ................. 118
7.10.3   Configuring Certificates and Signature Keys .................................. 121
7.10.3.1 Generating a New Signature Key ................................................... 121
7.10.3.2 Generating a New Request ............................................................ 123
7.10.3.3 Removing Certificates .................................................................... 125
7.10.3.4 Adding New Certificates ................................................................. 125
7.11     Timestamp Server Administration ................................................... 126
7.11.1   Checking the Status and Restarting Timestamp Server ................. 127
7.11.2   Transmit Parameters ...................................................................... 128
7.11.3   Open Logfile ................................................................................... 128
7.11.4   Checking and Adjusting the Time ................................................... 129
7.11.5   Checking the Current Signature Key and Certificates Configuration ... 130
7.11.6   Checking the Location .................................................................... 131

8        Configuring Users, Groups and Policies .......................................... 133
8.1      Password Security and Settings ...................................................... 133
8.2      Concept ........................................................................................... 134
8.3      Configuring Users and Their Rights ................................................. 135
8.4      Checking, Creating and Modifying Policies ...................................... 135
8.4.1    Available Rights to Create Policies .................................................. 136
8.4.2    Checking Policies ............................................................................ 136
8.4.3    Creating and Modifying Policies ...................................................... 137
8.5      Checking, Creating and Modifying Users ......................................... 137
8.5.1    Checking Users ............................................................................... 137
8.5.2    Creating and Modifying Users ......................................................... 138
8.6      Checking, Creating and Modifying User Groups .............................. 139
8.6.1    Checking User Groups .................................................................... 139
8.6.2    Creating and Modifying User Groups .............................................. 139
8.6.3    Adding Users and Policies to a User Group .................................... 140
8.7      Checking a User's Rights ................................................................. 140

9        Connecting to SAP Servers .............................................................. 143
9.1      Creating and Modifying SAP Systems ............................................. 143
9.2      Creating and Modifying SAP Gateways ........................................... 145
9.3      Assigning a SAP System to a Logical Archive ................................. 146

10       Configuring Scan Stations ............................................................... 149
10.1     Scenarios and Archive Modes ......................................................... 149
10.2     Adding and Modifying Archive Modes ............................................. 151
10.3     Archive Mode Settings ..................................................................... 152
10.4     Adding Additional Scan Hosts ......................................................... 154
10.5     Adding a New Scan Host and Assigning Archive Modes ................. 154
10.6     Adding Additional Archive Modes .................................................... 155
10.7     Changing the Default Archive Mode ................................................ 156
10.8     Removing Assigned Archive Modes ................................................ 156

11       Adding and Modifying Known Servers ............................................. 157
11.1     Adding Known Servers .................................................................... 157
11.2     Checking and Modifying Known Servers ......................................... 158
11.3     Synchronizing Servers ..................................................................... 158

12       Configuring Remote Standby Scenarios .......................................... 161
12.1     Configuring Original Archive Server and Remote Standby Server ... 162
12.1.1   Configuring the Original Archive Server .......................................... 162
12.1.2   Configuring the Remote Standby Server ........................................ 162
12.2     Backups on a Remote Standby Server ........................................... 165
12.2.1   ISO Volumes ................................................................................... 165
12.2.2   IXW Volumes .................................................................................. 166
12.3     Restoring of IXW or ISO Volumes ................................................... 166
12.3.1   Restoring an Original IXW or ISO Volume ...................................... 166
12.3.2   Restoring a Replicate of an IXW or ISO Volume ............................. 169

13       Configuring Archive Cache Services ............................................... 173
13.1     Restrictions Using Archive Cache Services ..................................... 174
13.2     Configuring a Cache Server in the Environment .............................. 177
13.2.1   Adding a Cache Server to the Environment .................................... 177
13.2.2   Modifying a Cache Server ............................................................... 178
13.2.3   Deleting a Cache Server ................................................................. 178
13.3     Configuring Access Via a Cache Server .......................................... 179
13.3.1   Subnet Assignment of a Cache Server ........................................... 179
13.3.2   Configuring Archive Access Via a Cache Server ............................ 180
13.3.3   Adding and Modifying Subnet Definitions of a Cache Server .......... 181
13.3.4   Deleting an Assigned Cache Server ............................................... 182

Part 3   Maintenance                                                             183

14       Handling Storage Volumes .............................................................. 185
14.1     Finalizing Storage Volumes ............................................................. 185
14.1.1   Automatic Finalization of IXW Volumes .......................................... 185
14.1.2   Manually Finalizing IXW Volumes ................................................... 186
14.1.3   Manually Finalizing IXW Pools ........................................................ 186
14.1.4   Checking the Finalization Status ..................................................... 187
14.1.5   Setting the Finalization Status Manually ......................................... 188
14.2     When the Retention Period Has Expired ......................................... 188
14.2.1   Checking for Empty Volumes and Deleting Them Manually ............ 190
14.2.2   Deleting Empty Volumes Automatically .......................................... 191
14.3     Exporting Volumes .......................................................................... 192
14.4     Importing Volumes .......................................................................... 193
14.4.1   Importing ISO Volumes ................................................................... 194
14.4.2   Importing Finalized and Non-finalized IXW Volumes ...................... 194
14.4.3   Lost&Found for IXW Volumes ......................................................... 196
14.4.4   Importing Hard Disk Volumes ......................................................... 196
14.4.5   Importing GS Volumes for Single File (VI) Pool .............................. 197
14.5     Consistency Checks for Storage Volumes and Documents ............. 198
14.5.1   Checking Database Against Volume ............................................... 198
14.5.2   Checking Volume Against Database ............................................... 199
14.5.3   Checking a Document ..................................................................... 200
14.5.4   Counting Documents and Components in a Volume ...................... 201
14.5.5   Checking a Volume ......................................................................... 201
14.5.6   Comparing Backup and Original IXW Volume ................................ 202
14.6     Backup for Storage Systems ........................................................... 203

15       Finalizing and Backing Up of Optical Media .................................... 205
15.1     Managing Written Optical Media ..................................................... 205
15.1.1   Newly Written ISO Media ................................................................ 205
15.1.2   Removing Optical Media from Jukebox .......................................... 206
15.2     Backup and Recovery of Optical Media ........................................... 206
15.2.1   Optical ISO Media ........................................................................... 207
15.2.1.1 Backup of ISO Volumes ................................................................. 207
15.2.1.2 Recovering of ISO Volumes ........................................................... 208
15.2.2   IXW Volumes .................................................................................. 209
15.2.2.1 Backup of IXW Volumes ................................................................. 209
15.2.2.2 Restoring of IXW Volumes ............................................................. 211

16       Backups and Recovery .................................................................... 213
16.1     Backup of the Database .................................................................. 214
16.1.1   Backing Up an Oracle Database ..................................................... 216
16.1.2   Backing Up MS SQL Server Databases ......................................... 216
16.2     Backup and Restoring of the Storage Manager Configuration ......... 216
16.3     Backup and Recovery of a Cache Server ........................................ 216
16.3.1   Backup of Cache Server Data ........................................................ 216
16.3.2   Recovery of Cache Server Data ..................................................... 218

17       Utilities ............................................................................................. 221
17.1     Starting Utilities ............................................................................... 222
17.2     Checking Utilities Protocols ............................................................. 222

Part 4   Migration                                                               225

18       About Migration ............................................................................... 227
18.1     Features of Volume Migration ......................................................... 227
18.2     Restrictions ..................................................................................... 228
18.3     Migration to HDSK .......................................................................... 229

19       Setting Parameters of Volume Migration ......................................... 231
19.1     Setting Configuration Parameters of Volume Migration ................... 231
19.2     Setting Logging Parameters of Volume Migration ........................... 233

20       Preparing the Migration ................................................................... 235
20.1     Preparing for Local Migration .......................................................... 235
20.2     Preparing for Remote Migration ...................................................... 235
20.3     Preparing for Local Fast Migration of ISO Images ........................... 237
20.4     Preparing for Remote Fast Migration of ISO Images ....................... 237

21       Creating a Migration Job .................................................................. 239
21.1     Creating a Local Migration Job ........................................................ 239
21.2     Creating a Remote Migration Job .................................................... 242
21.3     Creating a Local Fast Migration Job for ISO Volumes ..................... 244
21.4     Creating a Remote Fast Migration Job for ISO Volumes ................. 245

22       Monitoring the Migration Progress ................................................... 249
22.1     Starting Monitoring .......................................................................... 249
22.2     States of Migration Jobs .................................................................. 250

23       Manipulating Migration Jobs ............................................................ 253
23.1     Pausing a Migration Job .................................................................. 253
23.2     Continuing a Migration Job .............................................................. 253
23.3     Canceling a Migration Job ............................................................... 254
23.4     Renewing a Migration Job ............................................................... 254

24       Volume Migration Utilities ................................................................ 257
24.1     Deleting a Migration Job .................................................................. 257
24.2     Finishing a Migration Job Manually ................................................. 257
24.3     Modifying Attributes of a Migration Job ........................................... 258
24.4     Changing the Target Pool of Write Jobs .......................................... 258
24.5     Determining Unmigrated Components ............................................ 259
24.6     Switching Component Types of Two Pools ..................................... 259
24.7     Adjusting the Sequence Number for New Volumes ......................... 260
24.8     Statistic About Components on Certain Volumes ............................ 260
24.9     Collecting Diagnostic Information .................................................... 260

Part 5   Monitoring                                                              261

25       Everyday Monitoring of the Archive System .................................... 263

26       Monitoring with Notifications ............................................................ 265
26.1     Creating and Modifying Event Filters .............................................. 265
26.1.1   Conditions for Events Filters ........................................................... 266
26.1.2   Available Event Filters .................................................................... 268
26.2     Creating and Modifying Notifications ............................................... 269
26.2.1   Notification Settings ........................................................................ 270
26.2.2   Using Variables in Notifications ...................................................... 272
26.3     Checking Alerts ............................................................................... 273

27       Using Monitor Web Client ................................................................ 275
27.1     First Steps and Overview ................................................................ 275
27.1.1   Starting Monitor Web Client ............................................................ 275
27.1.2   Monitor Web Client Window ............................................................ 276
27.1.3   Setting the Refresh Interval ............................................................ 278
27.1.4   Adding and Removing Hosts .......................................................... 278
27.1.5   Configuring the Icon Type ............................................................... 279
27.1.6   Customizing Monitor Web Client ..................................................... 279
27.2     Component Status Display .............................................................. 280
27.2.1   DP Space ........................................................................................ 280
27.2.2   Storage Manager ............................................................................ 280
27.2.3   DocService (Document Service) ..................................................... 281
27.2.4   DS Pools ......................................................................................... 281
27.2.5   DS DP Tools, DS DP Queues, DS DP Error Queues ...................... 282
27.2.6   Log Diskspace ................................................................................ 282
27.2.7   DP Tools, DP Queues, DP Error Queues ........................................ 283
27.2.8   Timestamp Service ......................................................................... 285

28       Auditing, Accounting and Statistics ................................................. 287
28.1     Auditing ........................................................................................... 287
28.1.1   Configuring Auditing ....................................................................... 287
28.1.2   Accessing Auditing Information ....................................................... 287
28.2     Accounting ...................................................................................... 290
28.2.1   Settings for Accounting ................................................................... 290
28.2.2   Evaluating Accounting Data ............................................................ 290
28.3     Storage Manager Statistics ............................................................. 293

Part 6   Troubleshooting                                                         295

29       Basics .............................................................................................. 297
29.1     Avoiding Problems .......................................................................... 297
29.2     Viewing Installed Archive Server Patches ....................................... 297
29.3     Correcting Wrong Installation Settings ............................................ 298
29.4     Monitoring and Administration Tools ............................................... 299
29.5     Deleting Log Files ............................................................................ 299

30       Starting and Stopping of Archive and Storage Services .................. 301
30.1     Starting and Stopping Under Windows ............................................ 301
30.2     Starting and Stopping Under UNIX ................................................. 302
30.3     Starting and Stopping Single Services with spawncmd ................... 303
30.4     Setting the Operation Mode of Archive and Storage Services ......... 304

31       Analyzing Problems ......................................................................... 307
31.1     Spawner Log File ............................................................................ 307
31.2     Analyzing Processes with spawncmd .............................................. 307
31.3     Working with Log Files .................................................................... 309
31.3.1   About Log Files ............................................................................... 309
31.3.2   Setting Log Levels .......................................................................... 310
31.3.3   Log Settings for Archive and Storage Services Components
         (Except STORM) .............................................................................. 310
31.3.4   Log Levels and Log Files for the STORM ....................................... 312

GLS      Glossary                                                                313

IDX      Index                                                                   321

List of tables

Cache configuration (page 53)

Types of storage devices (page 56)

Default values of pool-independent archive settings (page 67)

Archive settings dependent on storage method (page 68)

Preconfigured jobs (page 83)

Pool-related jobs (page 84)

Other jobs (page 85)

Administrative WebServices (page 136)

Restrictions using Archive Cache Services (page 175)

Overview of utilities (page 221)

Fields in accounting files (page 291)

Job numbers and names of requests (page 291)


List of Figures
Figure 1-1: Main components of Archive and Storage Services on page 26
Figure 2-1: Content capture and storage on page 30
Figure 2-2: Content retrieval on page 31
Figure 2-3: Logical archives on page 32
Figure 2-4: Pool types and storage systems on page 36
Figure 3-1: Main objects of Archive and Storage Services on page 40
Figure 4-1: Filling the local cache on page 53
Figure 12-1: Remote Standby scenario on page 161
Figure 13-1: Archive Cache Services scenario on page 174
Figure 13-2: Example of subnet assignment of cache servers on page 179
Figure 16-1: Backup-relevant areas on page 213


Preface

Introduction
Open Text Archive and Storage Services (Archive and Storage Services for short)
provides a full set of services for content and documents. Archive and Storage
Services can be used either as an integral part of the Enterprise Library Services or
as stand-alone services in various scenarios. A server on which Archive and Storage
Services run is called an archive server.

i About This Document


Structure

This manual describes all tasks that are relevant after Archive and Storage Services
has been installed on a server:

Overview on page 23
Read this part to get an introduction to Archive and Storage Services, its
architecture, the storage systems and basic concepts like logical archives and
pools. You will also find a short introduction to the Administration Client and its
main objects.

Configuration on page 43
This part describes the preparation of the system and the configuration of
Archive and Storage Services performed on an archive server: logical archives,
pools, jobs, security settings, connections to SAP and scan stations.

Maintenance on page 183
Here you find all tasks needed to keep the system running: how to prepare and
handle storage media, backups and recovery.

Migration on page 225
Here you find all information needed to migrate content from one storage
platform to another.

Monitoring on page 261
Read here how to monitor the system, how to simplify monitoring by
configuring notifications, how to obtain auditing, accounting and statistics data,
and how to use the Monitor Web Client monitoring utility.

Troubleshooting on page 295
This part provides support if problems occur and hints on how to avoid them. It
explains where to find the log files and how to find the cause of a problem. If
fatal problems occur, contact Open Text Customer Support.


Audience and knowledge

This document is written for administrators of Archive and Storage Services and for
the project managers responsible for the introduction of archiving. All readers share
an interest in administration tasks and have to ensure the trouble-free operation of
Archive and Storage Services. These are the issues dealt with in this manual. The
following knowledge is required to take full advantage of this document:

Familiarity with the relevant operating system, Windows or UNIX.

A general understanding of TCP/IP networks, the HTTP protocol, network and data
security, and the databases (ORACLE or MS SQL Server).

Additional knowledge of NFS file systems would be helpful.

Besides this technical background, a general understanding of the following
business issues is important:

the number and type of documents to be archived electronically each day or each
month

how often archived documents will be retrieved

whether retrieval requests are predictable or independent

for what period of time documents will be accessed frequently

the length of time for which documents must be archived

which archived documents are highly sensitive and might have to be updated
(personal files, for example).

On the basis of this information you can decide which scenario you are going to use
for archiving and how many logical archives you need to configure. You can
determine the size of disk buffers and caches in order to guarantee fast access to
archived data.

ii Further Information
This manual
This manual is available in PDF format and can be downloaded from the Open Text
Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031). You can
print the PDF file if you prefer to read longer text on paper.

Online help
For all administration clients (Administration Client, Monitor Web Client,
Document Pipeline Info and configuration properties), online help files are
available. You can open the online help via the help menu, the help button, or F1.

Other manuals
In addition to this Administration Guide, use part 2 "Configuration Reference:
Archive and Storage Services, Document Pipeline, Monitor Server and Monitor Web
Client" in Open Text Administration Help - Runtime and Core Services (ELCS100100-HAGM)
for a reference of all configuration properties.
To learn about Document Pipelines and their usage in document import scenarios,
see OpenText Document Pipelines - Overview and Import Interfaces (AR-CDP).


Open Text Online is a single point of access for the product information provided by
Open Text. Depending on your role, you have access to different scopes of
information (see below for details).
You can access Open Text Online via the Internet at http://online.opentext.com/ or
the support sites at http://support.opentext.com/.
The following information and support sources can be accessed through Open Text
Online:
Knowledge Center
Open Text's corporate extranet and primary site for technical support. It is the
official source for:

Open Text products and modules.

Documentation for all Open Text products.

Open Text Developer Network (OTDN): developer documentation and
programming samples for Open Text products.

Patches for Open Text products.

The following role-specific information is available:

Partners
Information on the Open Text Partner Program.
Programs and support for registered partners.

Business Users
Tips, help files, and further information from Open Text staff and other users
in one of the Open Text online communities.

Administrators/developers
Downloads and patches.
Documentation.
Product information.
Discussions.
Product previews.

Feedback on documentation
If you have any comments, questions, or suggestions to improve our
documentation, contact us by e-mail at documentation@opentext.com.


iii Conventions
Read the following conventions before you use this documentation.
Typography

In general, this documentation uses the following typographical conventions:


New terms
This format is used to introduce new terms, emphasize particular terms,
concepts, long product names, and to refer to other documentation.
User interface
This format is used for elements of the graphical user interface (GUI), such as
buttons, names of icons, menu items, names of dialog boxes, and fields.
Filename
command
sample data

This format is used for file names, paths, URLs, and commands in the command
line. It is also used for example data, text to be entered in text boxes, and other
literals.
Note: If a guide provides command line examples, these examples may
contain special or hidden characters in the PDF version of the guide (for
technical reasons). To copy commands to your application or command
line, use the HTML version of the guide.
Key names
Key names appear in ALL CAPS, for example:
Press CTRL+V.
<Variable name>
The brackets < > are used to denote a variable or placeholder. Enter the correct
value for your situation, for example: Replace <server_name> with the name of
the relevant server, for example serv01.
Hyperlink
Weblink (e.g. http://www.opentext.com)
These formats are used for hyperlinks. In all document formats, these are active
references to other locations in the documentation (hyperlink) and on the Internet (Weblink), providing further information on the same subject or a related
subject. Click the link to move to the respective target page. (Note: The hyperlink
above points to itself and will therefore produce no result).
Cross-references

The documentation uses different types of cross-references:


Internal cross-references
Clicking on the colored part of a cross-reference takes you directly to the target
of the reference. This applies to cross-references in the index and in the table of
contents.


External cross-references
External cross-references are references to other manuals. For technical reasons,
these external cross-references often do not refer to specific chapters but to an
entire manual. If a document is available in HTML format, however, external
references can be active links that lead you directly to the corresponding section in
the other manual.¹
Tip: Tips offer information that make your work more efficient or show
alternative ways of performing a task.
Note: Notes provide information that help you avoid problems.
Important
If this important information is ignored, major problems may be
encountered.

Caution
Cautions contain very important information that, if ignored, may cause
irreversible problems. Read this information carefully and follow all
instructions!

¹ This applies if target and source documents are shipped together, e.g. on a product or documentation CD-ROM.


Part 1
Overview

Chapter 1

Archive and Storage Services


1.1 Basic Features of Archive and Storage Services
Archive and Storage Services provides a complete set of services for content and
documents. These services incorporate:

Store and retrieve content

Content lifecycle

Storage virtualization

Caching and cache servers

Single instance archiving

Long-term preservation and readability

secKeys and timestamps

Compression and encryption

Retention handling

Backup and replication

Disaster recovery

High availability

1.2 Flexibility for Different Business Processes


Depending on the business process, the content type and the storage devices,
Archive and Storage Services provides different techniques to store and access
documents. This guarantees optimal data and storage resource management. Large
or distributed Enterprise Library Services implementations may consist of several
archive servers performing Archive and Storage Services. To support disaster
recovery, servers can be replicated. Additional archive cache servers, which perform
Archive Cache Services, can speed up the access to the archived documents. Archive
Cache Services are used in distributed environments with low network bandwidth
(optional).


1.3 The Main Components of Archive and Storage Services
The following figure shows the main components of Archive and Storage Services
and its environment.

Figure 1-1: Main components of Archive and Storage Services


Applications
Documents or content are delivered by applications or services to an archive server
via Archive Services or Archive Link. Retrieval requests are also sent by applications
to get documents back from the archive server.
Archive and Storage Services
Archive and Storage Services incorporates the following components for storing,
managing and retrieving documents and data:


Document Service (DS): handles the storage and retrieval of documents and
components.

Storage Manager (STORM): manages and controls the storage devices.

Administration Server: provides the interface to the Administration Client,
which helps the administrator to create and maintain the environment of archive
servers, including logical archives, storage devices, pools, etc.


Administration Tools
To administer, configure and monitor the components mentioned above, you can
use the following tools:

Open Text Administration Client is the tool to create logical archives and to perform
most of the administrative work like user management and monitoring. See also
Important Directories on the Archive Server on page 27.

Monitor Web Client is used to monitor information regarding the status of
relevant processes, the file system, the size of the database and available
resources. This information is gathered by the Monitor Server from Archive and
Storage Services. See also Using Monitor Web Client on page 275.

Timestamp Administration is used to configure Timestamp Server. See
Timestamp Server Administration on page 126.

Document Pipeline Info is used to monitor the processes in the Document
Pipeline.

Storage Devices
Various types of storage devices offered by leading storage vendors can be used by
Archive and Storage Services for long-time archiving. See Storage Devices on
page 33.

1.4 Important Directories on the Archive Server


During the installation, several directories are created and the default settings can be
modified. Within this manual, the following variables are used for these directories.
You should replace these variables with the values that are specified on your
system.
<OT install AS>
Directory used for Archive and Storage Services program files.
Windows default: C:\Program Files\Open Text\Archive Server x.x.x\
UNIX default: /opt/opentext/ArchiveServerSoftware_x_x_x/
<OT config AS>
Directory used for Archive and Storage Services configuration files.
Windows default: C:\Documents and Settings\All Users\Application Data\Open Text\Archive Server x.x.x\config\

UNIX default: /opt/opentext/ArchiveServerConfig_x_x_x/


<OT logging>
Directory used for Archive and Storage Services log files.
Windows default: C:\Documents and Settings\All Users\Application Data\Open Text\var\LogDir\

UNIX default: /var/adm/opentext/log/


<OT var>
Directory used for Archive and Storage Services variables.
Windows default: C:\Documents and Settings\All Users\Application Data\Open Text\var\

UNIX default: /var/adm/opentext/


<OT install SPAWNER>
Directory used for SPAWNER program files.
Windows default: %COMMON FILES%\Open Text\Spawner\bin
UNIX default: /opt/opentext/spawner/
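Example: For a quick check of a given host, the default locations listed above can be collected in a small script. The following Python sketch is only an illustration and not part of the product; it lists the documented default directories and reports which of them exist. Replace the paths, including the x.x.x version part, with the values chosen during your installation.

    import os
    import platform

    # Documentation defaults only; adjust to your installation, including the
    # actual version number instead of "x.x.x".
    COMMON_FILES = os.environ.get("CommonProgramFiles", r"C:\Program Files\Common Files")

    DEFAULT_DIRS = {
        "Windows": {
            "<OT install AS>": r"C:\Program Files\Open Text\Archive Server x.x.x",
            "<OT config AS>": r"C:\Documents and Settings\All Users\Application Data"
                              r"\Open Text\Archive Server x.x.x\config",
            "<OT logging>": r"C:\Documents and Settings\All Users\Application Data"
                            r"\Open Text\var\LogDir",
            "<OT var>": r"C:\Documents and Settings\All Users\Application Data\Open Text\var",
            "<OT install SPAWNER>": COMMON_FILES + r"\Open Text\Spawner\bin",
        },
        "UNIX": {
            "<OT install AS>": "/opt/opentext/ArchiveServerSoftware_x_x_x",
            "<OT config AS>": "/opt/opentext/ArchiveServerConfig_x_x_x",
            "<OT logging>": "/var/adm/opentext/log",
            "<OT var>": "/var/adm/opentext",
            "<OT install SPAWNER>": "/opt/opentext/spawner",
        },
    }

    def report(platform_key):
        """Print each directory variable, its default path, and whether it exists."""
        for variable, path in DEFAULT_DIRS[platform_key].items():
            state = "exists" if os.path.isdir(path) else "not found"
            print("{0:<22} {1}  [{2}]".format(variable, path, state))

    if __name__ == "__main__":
        report("Windows" if platform.system() == "Windows" else "UNIX")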


Chapter 2

Basic Principles of Archives


2.1 Documents, Data and Logical Archives
Documents and data to be archived may consist of a number of components.
Examples are documents (main component) with notes and annotations or an email
document, which consists of an information header, the message body and possible
attachments. Within this guide, content is used to label all components belonging
together. Normally, all content components are stored together on the same type of
medium. However, it is also possible to separate the components and store them on
different media. For example, you can store the documents on an optical medium
and the notes on a hard disk. Documents are identified by a unique ID. The leading application
uses this ID for content retrieval. Archive and Storage Services delivers all
components belonging to this ID to the leading application.
Archive and Storage Services only stores the content of documents. The metadata
describing the business context of the documents is stored in the Enterprise Library
metadata repository or in the leading application. The link between the metadata and the
content is the unique ID mentioned above.
Archive and Storage Services represents a large virtual storage system, which can be
used by various applications. All documents that belong to a business process can
be grouped together by the concept of a logical archive. In general, a logical archive
is a collection of documents that have similar properties.
On a single archive server where Archive and Storage Services are running, a
multitude of logical archives can be created. Often, archive is used as a short form
of logical archive.
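To make the relationship between the unique ID, the components, and the logical archive more tangible, the following Python sketch models it in a few lines. It is purely conceptual; the class and function names are invented for this example and do not correspond to any Archive and Storage Services interface.

    from dataclasses import dataclass, field

    @dataclass
    class Content:
        """All components that belong together, addressed by one unique document ID."""
        archive_id: str        # logical archive the content belongs to
        doc_id: str            # unique ID the leading application uses for retrieval
        components: dict = field(default_factory=dict)  # e.g. "data", "notes", "attachment"

    # Tiny in-memory stand-in for the archive. The leading application keeps the
    # business metadata and stores only the (archive_id, doc_id) reference.
    _store = {}

    def archive(content):
        _store[(content.archive_id, content.doc_id)] = content

    def retrieve(archive_id, doc_id):
        """Return all components that belong to the given document ID."""
        return _store[(archive_id, doc_id)]

    invoice = Content("A1", "doc-0815", {"data": b"%PDF-", "notes": b"approved"})
    archive(invoice)
    print(sorted(retrieve("A1", "doc-0815").components))   # ['data', 'notes']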

2.2 Content Capture and Storage


The following description shows a usual way to capture and store content.
Depending on your requirements, variations of this description are possible.


Figure 2-1: Content capture and storage


1. The application sends the content to a logical archive created on an archive
   server.

2. Content is stored temporarily in the disk buffer.

3. Content is copied to the associated storage platform for long-time archiving. The
   time scheduling is configured in the Write job. If a cache is used, the content is
   copied simultaneously to the cache. This can also be done by the scheduled
   Purge Buffer job.

4. If configured, the content is also copied to the back-up storage device.

5. When at least one copy of the document has successfully been written to the
   long-term storage, the document can be deleted from the disk buffer.

2.3 Content Retrieval


The following description shows a usual way to retrieve content. Depending on
your requirements, variations of this description are possible.


Figure 2-2: Content retrieval


1. Content is requested by a client. For this, the client sends the unique document
   ID and archive ID to Archive and Storage Services.

2. Archive and Storage Services checks whether the content consists of more
   components and where the components are stored.

3. If the content is still stored in the buffer or in the cache, it is delivered directly to
   the client.

4. If the content is already archived on the storage device, Archive and Storage
   Services sends a request to the storage device, gets the content and forwards it
   to the application. Content is returned in chunks, so the client does not have to
   wait until the complete file is read. That is important for large files or if the client
   only reads parts of a file (see the sketch following these steps).
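The chunked delivery mentioned in step 4 can be pictured with a few lines of Python. The sketch below does not use any Archive and Storage Services interface; it only demonstrates the principle that a reader can consume a large component piece by piece and stop early instead of waiting for the complete file.

    CHUNK_SIZE = 64 * 1024  # 64 KB per chunk; an arbitrary value chosen for this example

    def read_in_chunks(path, chunk_size=CHUNK_SIZE):
        """Yield the file content piece by piece instead of loading it completely."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(chunk_size)
                if not chunk:
                    break
                yield chunk

    def first_bytes(path, n):
        """Read only the beginning of a large file, e.g. to display a preview."""
        data = b""
        for chunk in read_in_chunks(path):
            data += chunk
            if len(data) >= n:
                break          # stop early; the rest of the file is never read
        return data[:n]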

2.4 Logical Archives


Archive and Storage Services stores data in a well-organized way. The logical
organization unit is the logical archive. You can organize documents in different
logical archives according to the following criteria:

Metadata belonging to the content

Leading application

Document lifecycle or the retention period

Archiving and cache strategy

Storage system and media types


Security requirements for documents

Customer relations (for ASPs)

The logical archive itself does not determine where and how the content is archived.
The archive settings define the general aspects of data handling during archiving,
retrieval, and at the end of the document lifecycle.
Important settings are:

compression

single instance archiving

caching

restrictions to ensure document security (signatures, certificates, SSL, encryption, timestamps)

compliance mode

retention settings

Below you find an overview of the main components of logical archives.

Figure 2-3: Logical archives


To create a logical archive you have to configure:


Pool(s) to specify the storage platform and to assign the buffer(s) to the
designated storage platform(s), see also Pools and Pool Types on page 35.


Buffer(s) and disk volumes to store incoming content temporarily, see also Disk
Buffers on page 33.

Storage devices and storage volumes for long-time archiving of content, see also
Installing and Configuring Storage Devices on page 56.

Cache to accelerate content retrieval. Only necessary if slow storage devices are
used, see also Caches on page 37.

Retention period for content, see also Retention on page 65.

Compression and encryption settings, see also Data Compression on page 64 and Encrypted Document Storage on page 101.

Security settings and certificates, see also Configuring the Archive Security
Settings on page 68.

A cache server if used, see also Configuring Archive Cache Services on page 173.

2.4.1 Disk Buffers


The buffer (or disk buffer) is a hard disk volume where the content is physically
collected until the Write job writes it to the final storage. In ISO pools, the
documents are collected until the amount of data is sufficient to write an ISO image.
The Write job regularly checks the amount of data and writes the image if there is
sufficient data in the buffer. In other pools, the Write job writes all data that has
arrived in the buffer since the last run of the job. Sufficient free disk space must
be available in the buffer in order to accommodate new incoming documents. The
documents that have already been written to the storage media must therefore be
deleted from the disk buffer at regular intervals. This can only be done if a copy of
the document has successfully been stored on the long-term storage. This is usually
done by the Purge Buffer job.
Documents can be retrieved quickly as soon as they are in the disk buffer. The disk
buffer works as a read cache in this case. Retrieval time may increase after the content
has been written to the final storage platform and purged from the buffer.
See also:

Configuring Buffers on page 47

Configuring Disk Volumes on page 45

2.4.2 Storage Devices


Various types of storage devices offered by leading storage vendors can be used by
Archive and Storage Services for long-time archiving:

CAS: Content Addressed Storage

NAS: Network Attached Storage

HSM: Hierarchical Storage Management


SAN: Storage Area Network

Opticals:

DVD: Digital Versatile Disk

UDO: Ultra Density Optical

WORM: Write Once Read Many

Archive and Storage Services primarily supports storage devices that offer WORM
functionality, retention handling, or HSM functionality. Depending on their type,
the storage devices are connected via STORM, VI (vendor interface) or API
(application programming interface).
See also:

Installing and Configuring Storage Devices on page 56

Pools and Pool Types on page 35

Creating and Modifying Pools on page 74

2.4.3 Storage Scenarios


Regarding the archiving of and access to individual documents over their lifecycle, we
differentiate between single file storage and container file storage. Single file
storage means that documents are archived individually on the storage platform.
Container file storage means that the documents are bundled in containers like
ISO images or blobs.
Below you find criteria for single file storage and ISO images.
Single file storage

Small or medium amount of data

Large files in COLD scenarios

Document requires individual treatment

Lifecycle of document not known or depends on metadata

Individual deletion of documents at the end of the lifecycle required

More administration effort

Time-consuming migration

ISO images


Large amount of content

More than one million documents or more than 4 GB data per day

Very small files


Same document type

Same lifecycle

Bulk deletion at the end of the lifecycle

Less administration effort

Simple backup or migration

Partial read access to documents


See also:

Installing and Configuring Storage Devices on page 56

Pools and Pool Types on page 35

Creating and Modifying Pools on page 74

2.4.4 Pools and Pool Types


At least one pool belongs to each logical archive. A pool points to a certain type of
physical storage devices that are written in the same way. Components are assigned
to the pool as component types (known as application types in former archive server
versions). A special component type is Migration, which is used for document
migration within the archive.
The same storage platform can be used in different archives with different pool
types. The following pool types are currently available:
ISO pool, Write at once
In an ISO pool, a number of documents is written to the physical storage media
at once as an ISO image. Each ISO image forms one ISO volume. An optical storage
medium can contain one or two ISO volumes, depending on the type of media
(single or double sided). The storage volumes are either hard disks providing the
WORM feature (HD-WO) or optical volumes (DVD and UDO or WORM in
jukeboxes). These systems are managed as virtual or physical jukeboxes in the
Administration Client. ISO pools require a disk buffer.
IXW pool, Write incremental
In an IXW pool, documents are written incrementally to storage media. Supported
storage media are optical media, UDOs and WORMs placed in jukeboxes. Each
side of a medium represents a volume. The IXW file system information
manages the physical location of the documents on the volume. When an IXW
volume has been filled with documents, it can be finalized. Then the archived
documents are managed by the ISO file system of STORM, and the index
information is deleted from the IXW file system information. Finalized IXW
volumes behave like ISO volumes, but differ from ISO images in that only
an ISO header exists on the volume; for example, Bulk Migration is not supported for
finalized IXW volumes.
Documents are written as single files to the volume. They cannot be deleted from
finalized volumes, which are read-only volumes. Only logical deletion from
non-finalized volumes is possible, as physical deletion of data is not possible from
optical WORMs. IXW volumes require a disk buffer.
FS pool, Single file
The FS pool (FS = File System interface) points to mounted hard disk volumes of
an HSM, NAS or SAN system over the network. FS pools support single file
storage. They require a disk buffer.
VI pool, Single file
The VI pool (VI = Vendor interface) is connected to the storage system via the
API of the storage vendor. VI pools support single file storage. They require a
disk buffer. This storage scenario is sometimes also referred to as GS
(Generalized Store) scenario.
HDSK pool, Write through
In an HDSK (HDSK = hard disk) pool, documents are stored directly to the
storage, which can be a local file system directory or a local SAN system. HDSK
pools support single file storage. It is the only pool type that works without a
buffer. No WORM functionality is available.
Note: As HDSK pools do not use a buffer, they are not intended for use in
productive archive systems. Use them only for test purposes.
The following figure illustrates the dependencies between pool types and storage
systems.

Figure 2-4: Pool types and storage systems


See also:


Creating and Modifying Pools on page 74


Installing and Configuring Storage Devices on page 56

2.4.5 Caches
Caches are used to speed up the read access to documents. Archive and Storage
Services can use several caches: the disk buffer, the local cache volumes, and a cache
server performing Archive Cache Services. The local cache resides on the archive
server and can be configured. The local cache is recommended to accelerate retrieval
actions, especially with optical storage devices. A cache server performing Archive
Cache Services is intended to reduce data transfer and to speed up access in a WAN. It is
installed on its own host in a separate subnet.
See also:

Configuring Caches on page 52

Configuring Disk Volumes on page 45

Configuring Archive Cache Services on page 173

2.5 Jobs
Jobs are recurrent tasks, which are automatically started according to a time
schedule or when certain conditions are met. This allows, for example, temporarily
stored content to be transferred automatically from the disk buffer to the
storage device. See also Configuring Jobs and Checking Job Protocol on page 83.


Chapter 3 Administration Client and the Main Objects of Archive and Storage Services
3.1 Administration Client
Administration Client is used to configure Archive and Storage Services and to
perform most of your administrative work:

administering users and rights

creating logical archives and pools

administering devices and volumes

defining disk buffers

planning and monitoring jobs

configuring server connections (to other archive servers, to cache servers, to SAP
servers, etc.)

inserting volumes

defining the settings for archive modes

configuring events and notifications

The structure of this documentation corresponds to the structure of the program. If
you need to find information quickly concerning a particular window, press F1 to
open the associated context-sensitive online help.

3.2 Main Objects of Archive and Storage Services


In this section, you find an overview and a short description of the main objects of
Archive and Storage Services. Cross-references lead to detailed descriptions
of the different objects.


Figure 3-1: Main objects of Archive and Storage Services

3.2.1 Infrastructure
Within this object, you configure the required infrastructure objects to enable their
use with logical archives.
Buffers
Documents are collected in disk buffers before they are finally written to the
storage medium. To create disk buffers, see Configuring Buffers on page 47.
To get more information about buffer types, see Disk Buffers on page 33.
Caches
Caches are used to accelerate the read access to documents. To create caches, see
Configuring Caches on page 52.
Devices
Storage devices are used for long-time archiving. To configure storage devices,
see Installing and Configuring Storage Devices on page 56.
Disk Volumes
Disk volumes are used for buffers and pools. To configure disk volumes, see
Configuring Disk Volumes on page 45.

3.2.2 Archives
Within this object, you create logical archives and pools, define replicated
archives for remote standby scenarios, and view external archives of known
servers.


Original Archives
Logical archives of the selected server. To create and modify archives, see
Configuring Archives and Pools on page 63.
Replicated Archives
Shows replicated archives, see Logical Archives on page 63.
External Archives
Shows external archives of known servers, see Logical Archives on page 63.

3.2.3 Environment
Within this object, you configure the environment of an archive server. For example,
cache servers must first be configured in the environment before they can be assigned to
a logical archive.
Cache Servers
Cache servers can be used to accelerate content retrieval in a slow WAN. See
Configuring Archive Cache Services on page 173
Known Servers
Known servers are used for replicating archives in remote standby scenarios. See
Adding and Modifying Known Servers on page 157.
SAP Servers
The configuration of SAP gateways and systems to connect SAP servers to
Archive and Storage Services. See Connecting to SAP Servers on page 143.
Scan Stations
The configuration of scan stations and archive modes to connect scan stations to
Archive and Storage Services. See Configuring Scan Stations on page 149.

3.2.4 System
Within this object, you configure global settings for the archive server. You also find
all jobs and a collection of useful utilities.
Alerts
Displays alerts of the Admin Client Alert type. See Checking Alerts on
page 273. To receive alerts in the Administration Client, configure the events and
notifications appropriately. See Monitoring with Notifications on page 265.
Events and Notifications
Events and notifications can be configured to get information on predefined
server events. See Monitoring with Notifications on page 265.
Jobs
Jobs are recurrent tasks which are automatically started according to a time
schedule or when certain conditions are met, e.g. to write content from the buffer
to the storage platform. A protocol allows the administrator to watch the
successful execution of jobs. See Configuring Jobs and Checking Job Protocol
on page 83.


Key Store
The certification store is used to administer encryption certificates, security keys
and timestamps. See Importing and Checking Encryption Certificates on
page 102.
Policies
Policies are a combination of rights which can be assigned to user groups. See
Checking, Creating and Modifying Policies on page 135.
Storage Tiers
Storage tiers designate different types of storage. See Creating and Modifying
Storage Tiers on page 81.
Users and Groups
Administration of users and groups. See Checking, Creating and Modifying
Users on page 137 and Checking, Creating and Modifying User Groups on
page 139.
Utilities
Utilities are tools which are started interactively by the administrator, see
Utilities on page 221.


Part 2
Configuration

Chapter 4 Setting Up the Infrastructure


Before you can start configuring the archive system, in particular the logical
archives, their pools and jobs, you have to prepare the infrastructure on which the
system is based.
Proceed as follows:
1.

Create and configure disk volumes at the operating system level to use them as
buffers, caches or storage devices.

2.

Configure the storage device for long-time archiving and set up the connection
to the archive server.

3.

In the Administration Client:

Add prepared disk volumes for various uses as buffers or local storage
devices (HDSK).

Create disk buffers and attach hard disk volumes.

Create caches and specify volume paths.

Check whether the storage device is usable.

4.1 Configuring Disk Volumes


4.1.1 Overview
Hard disk volumes are used for disk buffers, for local caches and as local storage
devices. First, you create these volumes at the operating system level. The number
and size depend on many factors and are usually defined together with Open Text
experts or partners when the installation is prepared. Important factors are:

Leading application and scenario

Number and size of documents to be archived and accessed, per time unit

Frequency of read access

If the volume is used as disk buffer:


Pool and media type, in particular if ISO images are written.
The buffer must be large enough to accommodate the entire storage capacity of
the ISO image, and in addition, the amount of data that has to be stored in the
buffer between two Write jobs.


If the volume is used as cache:


If documents are retrieved after archiving, e.g. in Early Archiving scenarios, they
should stay on the hard disk for a while. The cache volume must be large
enough to store documents for the required time. You can configure and
schedule the Purge_Buffer job to copy documents automatically to the cache
(see Configuring Caches on page 52).

If the volume is used as storage device:


Hard disk volumes can be used for NAS (Network Attached Storage) systems
and as local storage device (HDSK pool). Using HDSK pools is only
recommended for test purposes. Ensure that the volume is large enough to store
your test documents.

4.1.2 Creating and Modifying Disk Volumes


The hard disks must be partitioned at the operating system level first. These disk
volumes can be added to the Administration Client to be used by Archive and
Storage Services. This process is called creating. After creating, the disk volumes can
be used as buffer, pool or local storage device of a logical archive.
1.

Create the volumes at the operating system level.

2.

Start Administration Client.

3.

Select Disk Volumes in the Infrastructure object of the console tree.

4.

Click New Volume in the action pane. The New Disk Volume window opens.

5.

Enter the settings:


Volume name
Unique name of the volume.
Mount path
Mount path of the volume in the file system. The mount path is a drive
under Windows and a volume directory under UNIX.
Click Browse to open the directory browser. Select the designated directory
and click OK to confirm.
If you enter the directory path manually, ensure that a backslash is inserted
in front of the directory name if you are using volume letters (e.g. e:\vol2).
Volume class
Select the storage medium or storage system to ensure correct handling of
documents and their retention.
Hard Disk
Hard disk volume that provides WORM functionality or that can be used
as disk buffer. Documents are written from the buffer to the volume
without additional attributes. Use this volume class for buffers.
Hard Disk based read-only system
Read-only local hard disk volume; documents are written from the buffer
to the volume and the read-only attribute is set.


Network Appliance Filer with Snaplock


Documents are written from the buffer to the corresponding storage
system with NetApp-specific setting of the retention period. This volume
class is usually used as storage device with pools, not as buffer.
SUN Sam FS and StorEdge 5310 NAS
Documents are written from the buffer to the corresponding storage
system with SUN specific setting of the retention period. This volume
class is usually used as storage device with pools, not as buffer.
6.

Click Finish.
Create as many hard disk volumes as you need.

Renaming disk volumes
To rename a disk volume, select it in the result pane and click Rename in the action pane.
Further steps:

Creating and Modifying a Disk Buffer on page 47

Creating and Modifying a HDSK (Write Through) Pool on page 74

Creating and Modifying Pools with a Buffer on page 75

Write Incremental (IXW) Pool Settings on page 78

4.2 Configuring Buffers


Disk buffers (short: buffers) are required for all pool types except for local HDSK
(Write through) pools. Documents are collected in the buffer before they are finally
written to the storage medium by the Write job.
Preconditions

The hard disks must be partitioned at the operating system level and then created in
Administration Client. See Creating and Modifying Disk Volumes on page 46.

4.2.1 Creating and Modifying a Disk Buffer


Proceed as follows:
1.

Select Buffers in the Infrastructure object in the console tree.

2.

Click New Original Disk Buffer in the action pane.

3.

Enter the settings:


Disk buffer name
Name of the disk buffer. The name cannot be modified later.
Purge job
Name of the Purge_Buffer job.


Min. free space


Minimum available storage space (%). The Purge_Buffer job deletes data
from the buffer until the required percentage of storage space is available.
This applies to every hard disk volume that is assigned to the buffer.
If it is not possible to delete sufficient documents from the disk buffer
because these have not yet been written to storage media, the Purge_Buffer
job is terminated without a message and the required minimum amount of
storage space is not available. You can check the free space in the disk
buffers using Monitor Web Client (see Using Monitor Web Client on
page 275).
Purge documents older than ... days
Specifies a time period after which documents are removed from the disk
buffer. The time period starts after the documents are written to a storage
medium.
Cache documents before purging
Ensures that documents are always quickly accessible on a fast hard disk (buffer
or cache).
See also Configuring Caches on page 52.
Note: If both conditions Purge documents older than ... days and Min. free
space are specified, the job runs in a way which satisfies both conditions to the
greatest possible extent. Documents that are older than n days are also deleted
even if the required storage space is available. Conversely, documents that are
more recent than n days are deleted until the required percentage of storage
space is free. A sketch illustrating this combined logic follows the procedure below.


4.

Click Next and read the information carefully.

5.

Click Finish to create the disk buffer.

6.

Attach a hard disk volume to the disk buffer.


See Attaching a Disk Volume to a Disk Buffer on page 49.

7.

Schedule the Purge_Buffer job. The command and the arguments are entered
automatically and can be modified later. See Setting the Start Mode and
Scheduling of Jobs on page 87.
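
The combined purge logic mentioned in the note above can be illustrated with a minimal Python sketch. The data structures (a document list with a written flag, write time and size, and a disk described by capacity and used bytes) are simplified assumptions for illustration only; this is not the actual job implementation.

# Minimal sketch of the combined purge conditions (Min. free space and
# Purge documents older than ... days), using hypothetical data structures.
import time

DAY = 86400  # seconds per day

def purge_buffer(documents, min_free_percent, max_age_days, disk):
    """documents: dicts with 'written', 'write_time' and 'size'.
    disk: dict with 'capacity' and 'used' in bytes."""
    now = time.time()
    # Only documents already written to the storage medium may be purged.
    candidates = [d for d in documents if d["written"]]
    candidates.sort(key=lambda d: d["write_time"])  # oldest first
    purged = []
    for doc in candidates:
        free = 100.0 * (disk["capacity"] - disk["used"]) / disk["capacity"]
        too_old = (now - doc["write_time"]) > max_age_days * DAY
        # Purge if the document exceeds the age limit, or if more free
        # space is still needed on the buffer volume.
        if too_old or free < min_free_percent:
            disk["used"] -= doc["size"]
            purged.append(doc)
    return purged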

Modifying a disk buffer
To modify a disk buffer, select it and click Properties in the action pane. Proceed in
the same way as when creating a disk buffer. The name of the disk buffer and the
Purge_Buffer job cannot be changed.
Deleting a disk buffer
To delete a disk buffer, select it and click Delete in the action pane. A disk buffer
can only be deleted if it is not assigned to a pool.


4.2.2 Attaching a Disk Volume to a Disk Buffer


A disk buffer needs at least one disk volume to be usable. Over time, the archive
system grows, and the initially configured volumes might become too small for a
buffer. To adjust the configuration, you can attach additional volumes to the disk
buffer.
Replicated volumes are attached to a replicated buffer on the remote standby server
in the same way.
Proceed as follows:
1.

Select Buffers in the Infrastructure object in the console tree.

2.

Select the designated disk buffer in the top area of the result pane.

3.

Click Attach Volume in the action pane. A window with all available volumes
opens.

4.

Select an existing volume. The volume must have been created previously, see
Creating and Modifying Disk Volumes on page 46.

5.

Click OK to attach the volume.

See also:

Creating and Modifying Disk Volumes on page 46

Creating and Modifying a Disk Buffer on page 47

4.2.3 Detaching a Volume from a Disk Buffer


If a re-configuration of disk buffers is required, sometimes it is necessary to detach a
volume from a disk buffer. This is the case when you want to reduce the size of the
disk buffer or move resources to another disk buffer because the amount of data to
be archived has increased considerably. When the volume has been detached, it can
be attached to another buffer. A volume does not receive any more data when it is
not attached to a buffer.
Note: If a buffer is attached to a pool, it must have at least one attached hard
disk volume. Thus, the last hard disk volume cannot be detached.
Proceed as follows:
1.

Select Buffers in the Infrastructure object in the console tree.

2.

Select the designated disk buffer in the top area of the result pane.

3.

Select the volume to be detached in the bottom area of the result pane.

4.

Click Detach Volume in the action pane.

5.

Confirm with OK to detach the volume.


4.2.4 Configuring the Purge Buffer Job


If documents are not immediately deleted from the disk buffer after being written to
a storage medium, they must be removed from the buffer at regular intervals. For
example, in IXW pools, the documents always remain in the buffer for security
reasons, or the disk buffer is used as a type of cache. Documents are removed from
the disk buffer using the Purge_Buffer job. This job is created when a disk buffer is
created.
Proceed as follows:
1.

Select Buffers in the Infrastructure object in the console tree.

2.

Select the designated disk buffer in the top area of the result pane.

3.

Click Edit Purge Job in the action pane.

4.

Enter the settings:


Job name
The job name is set during buffer creation and cannot be changed.
Command
The command is set to Purge_Buffer during buffer creation.
Arguments
The argument is set to the buffer's name during buffer creation.
Start mode
Configures whether the job starts at a certain time or after a previous job was
finished. See also Setting the Start Mode and Scheduling of Jobs on
page 87.

5.

Click Next.

6.

Enter the settings for the selected start mode.

7.

Click Finish.

See also:

Creating and Modifying Jobs on page 87.

Setting the Start Mode and Scheduling of Jobs on page 87

4.2.5 Checking and Modifying Attached Disk Volumes


This function can be used to check the status of a volume, e.g. whether it is online. For
maintenance, volumes can be set to write locked or locked to prevent access.
Proceed as follows:
1.


Select Buffers in the Infrastructure object in the console tree.


2.

Select the Original Disk Buffers tab or the Replicated Disk Buffers tab,
according to the type of buffer you want to check or modify.

3.

Select the designated disk buffer in the top area of the result pane.

4.

Select the volume you want to check in the bottom area of the result pane.

5.

Click Properties in the action pane. A window with volume information opens.
Volume name
The name of the volume
Type
Original or replicated
Capacity (MB)
Maximum capacity of the volume
Free (MB)
Free capacity of the volume
Last Backup or Last Replication
Date, when the last backup or the last replication was performed. Depends
on the type of the volume.
Host
Specifies the host on which the replicated volume resides if the disk buffer is
replicated

6.

Modify the volume status if necessary. To do this, select or clear the status. The
settings that can be modified depend on the volume type.
Full, Offline
These flags are set by Document Service and cannot be modified.
Write locked
No more data can be copied to the volume. Read access is possible; write
access is protected.
Locked
The volume is locked. Read or write access is not possible.
Modified
Automatically selected if the write component (WC) performs a write
access to an HDSK volume. If cleared manually, Modified is selected again
with the next write access.

7.

Click OK.

4.2.6 Synchronizing Servers


The Synchronize Servers function transfers settings from known servers to the local
server. This is useful if settings on a known server are changed (e.g. replicated
archives, pools or buffers).


Thus you can update:

Settings of replicated archives

Settings of replicated buffers

Encryption certificates

Timestamp certificates

System keys

Proceed as follows:
1.

Select Buffers in the Infrastructure object or select Archives in the console tree.

2.

Click Synchronize Servers in the action pane.

3.

Click OK to confirm. The synchronization is started.

4.2.7 Configuring Replicated Buffers


Buffers of replicated archives can also be replicated if necessary.
Proceed as follows:
1.

Select Known Servers in the Environment object in the console tree.

2.

Select the designated known server in the top area of the result pane.

3.

Select the Disk Buffer you want to replicate in the bottom area of the result pane.

4.

Click Replicate in the action pane.

5.

Enter a name for the replicated disk buffer, click Next.

6.

Click Finish.

4.3 Configuring Caches


4.3.1 Overview
Caches are used to speed up the read access to documents. The local cache resides
on the archive server and is recommended to accelerate retrieval actions especially
with optical storage devices. To use a local cache, it must be assigned to a logical
archive.
A cache must have at least one assigned hard disk volume. It is also possible to
assign more disk volumes to a cache and to configure their priority.
Note: Do not mix up the local cache and cache servers performing Archive
Cache Services. See also Configuring Archive Cache Services on page 173.


The local cache can be filled in different ways:

when a document is retrieved for reading,

while documents are written to the final storage medium (Write job),

when the buffer is purged (Purge_Buffer job).

Figure 4-1: Filling the local cache


Global cache
If no cache path is configured and assigned to a logical archive, the global cache is
used. The global cache is usually created during installation, but no volume is
assigned to it. To use the global cache, a volume must be assigned. See Adding Hard
Disk Volumes to Caches on page 54.
Depending on the time when you want to cache documents, you select the
appropriate configuration setting:
Table 4-1: Cache configuration

Enable caching for the logical archive:
    Caching option in the archive configuration, see Configuring the Archive Settings on page 70.

Caching when the document is written:
    If the Write job is performed, documents are also written to the cache.

Caching when the buffer is purged:
    Cache documents before purging option in the disk buffer properties. See Creating and Modifying a Disk Buffer on page 47.

See also:

Adding Hard Disk Volumes to Caches on page 54

Creating and Deleting Caches on page 54

Defining Priorities of Cache Volumes on page 55


4.3.2 Creating and Deleting Caches


If you want to assign a local cache to a logical archive, you create a cache and assign
one or more volumes to it.
Proceed as follows:
1.

Create the volumes for the caches on the operating system level.

2.

Start the Administration Client.

3.

Select Caches in the Infrastructure object in the console tree.

4.

Click New Cache in the action pane.

5.

Enter the Cache name and click Next.

6.

Enter the Location of the hard disk volume.

7.

Click Finish.
Note: If you want to change the priority of assigned hard disk volumes, see
Defining Priorities of Cache Volumes on page 55.

Deleting a cache

To delete a cache, select it and click Delete in the action pane. It is not possible to
delete a cache which is assigned to a logical archive. The global cache cannot be
deleted either.
See also:

Adding Hard Disk Volumes to Caches on page 54

Defining Priorities of Cache Volumes on page 55

4.3.3 Adding Hard Disk Volumes to Caches


A cache must have at least one assigned hard disk volume. The global cache is
usually created during installation but not the corresponding volume. You can
modify the initial configuration of the global cache by adding or deleting volumes.

Caution
Be aware that your cache content becomes invalid if you change the volume
priority.

Proceed as follows:


1.

Select Caches in the Infrastructure object in the console tree.

2.

Select the designated cache in the top area of the result pane. In the bottom area
of the result pane, the assigned hard disk volumes are listed.


3.

Click Add Cache Volume in the action pane.

4.

Click Browse to open the directory browser. Select the designated Location of
the hard disk volume and click OK to confirm.

5.

Click Finish to add the new cache volume.


Note: If you want to change the priority of hard disk volumes, see Defining
Priorities of Cache Volumes on page 55.

See also:

Configuring Caches on page 52

Defining Priorities of Cache Volumes on page 55

4.3.4 Deleting Assigned Hard Disk Volumes


Note: A cache must have at least one assigned hard disk volume. Thus, the last
assigned hard disk volume cannot be deleted.
Proceed as follows:
1.

Select Caches in the Infrastructure object in the console tree.

2.

Select the designated cache in the top area of the result pane. In the bottom area
of the result pane, the assigned hard disk volumes are listed.

3.

Select the hard disk volume you want to delete.

4.

Click Delete in the action pane.

5.

Click OK to confirm.
Note: If you want to change the priority of hard disk volumes, see Defining
Priorities of Cache Volumes on page 55.

See also:

Configuring Caches on page 52

Defining Priorities of Cache Volumes on page 55

4.3.5 Defining Priorities of Cache Volumes


If more than one hard disk volume is assigned to a cache, the priority of the
individual volumes can be defined.

Caution
Be aware that your cache content becomes invalid if you change the volume
priority.


Proceed as follows:
1.

Select Caches in the Infrastructure object in the console tree.

2.

Select the designated cache in the top area of the result pane. In the bottom area
of the result pane the assigned hard disk volumes are listed.

3.

Click Change Volume Priorities in the action pane. A window to change the
priorities of the volumes opens.

4.

Select a volume and click the designated arrow button to increase or decrease
the priority.

5.

Click Finish.

4.4 Installing and Configuring Storage Devices


To use storage devices with logical archives, they must first be installed at the operating
system level.
Consider the following guides for the installation of the different storage devices (see
Open Text Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)):

Supported media, jukeboxes and storage systems: Hardware Release Notes

STORM Configuration Guide

Installation guides for the storage platforms

The configuration of storage devices depends on the storage system and the storage
type. If you are not sure how to install your storage device, contact Open Text
Customer Support.
After installation, the storage devices are administered in Devices in the
Infrastructure object in the console tree. There are two main types of devices:

Optical storage devices managed by STORM.

Hard disk based storage devices (GS) connected via API.


Note: NAS and Local hard disk devices are administered in Disk Volumes in
the Infrastructure object in the console tree (see Configuring Disk Volumes
on page 45).

Table 4-2: Types of storage devices

Storage            Possible pool types         Administration
NAS                Write at-once (ISO)         Infrastructure > Devices
                   Single file (FS)            Infrastructure > Disk Volumes
CAS                Single file (VI)            Infrastructure > Devices
                   Write at-once (ISO)         Infrastructure > Devices
SAN                Single file (VI)            Infrastructure > Devices
                   Write at-once (ISO)         Infrastructure > Devices
Opticals           Write at-once (ISO)         Infrastructure > Devices
                   Write incremental (IXW)     Infrastructure > Devices
Local hard disk    Write through (HDSK)        Infrastructure > Disk Volumes

Important
Although you can configure most storage systems for container file storage
as well as for single file storage, the configuration is completely different.

4.5 Configuring Hard Disk Based Storage Devices (Single File VI)

After installing the storage device, it appears in Devices in the Infrastructure
object. To use the storage device, volumes must be created. These volumes can be
attached to pools (see Creating and Modifying Pools on page 74).
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the designated device in the top area of the result pane.

3.

Click New Volume in the action pane.

4.

Enter settings:
Volume name
Unique name of the volume.
Base directory
Base directory, which was defined on the storage system with system-specific
tools during installation.

5.

Click Finish to create the new volume.

4.6 Configuring Storage Devices with Optical Media (STORM)
After installing the storage device, it appears in Devices in the Infrastructure object.
To use the storage device, it must be attached. Volumes must be inserted and
initialized, if this is not done during installation. These volumes can be attached to
pools (see Creating and Modifying Pools on page 74).
Note: To determine the name of the STORM server, select Devices in the
Infrastructure object in the console tree. The name of the STORM server is
displayed in brackets behind the device name. E.g., WORM(STORM1).

4.6.1 Attaching and Detaching Devices


Detached and new devices are made available to the archive by means of attaching.
In the event of maintenance and repair work, devices have to be detached
beforehand, i.e. logged off from the archive. Only then can they be turned off.
Attaching devices
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the designated device in the top area of the result pane.

3.

Click Attach in the action pane.

It is now possible to access the device. The status is set to Attached.


Detaching devices
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the designated device in the top area of the result pane.

3.

Click Detach in the action pane.

This device can no longer be accessed and can be turned off. The status is set to
Detached.

4.6.2 Inserting a Single Volume


IXW and ISO media are inserted as a volume in the same way.
Tip: Label blank media if necessary before inserting them in the jukebox;
label backup media as well.


Proceed as follows:
1.

Insert the medium into the jukebox.

2.

Select Devices in the Infrastructure object in the console tree.

3.

Select the jukebox where you inserted the medium in the top area of the result
pane.

4.

Click Insert Volume in the action pane.


The new volume is listed in the bottom area of the result pane.
The status is -blank- .

4.6.3 Inserting Several Media at Once


Inserting a single optical medium with Insert may take some time because the
medium is tested. To insert several media at once, use one of these methods:

offline import

testing jukebox slots

4.6.3.1 Offline Import


Offline import means: you insert several media with Insert Volume Without Import
and test them later with the Import Untested Media utility.
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the jukebox where you inserted the media in the top area of the result
pane.

3.

Click Insert Volume Without Import in the action pane.


The new volumes are listed in the bottom area of the result pane.
The status is -notst- (not tested). The media are known to the Storage
Manager, but they cannot be used to store data.

4.

Click Import Untested Media in the action pane.

5.

Click Yes to start the import.


The utility tests and imports all volumes with the status -notst-. A protocol
window shows the progress and the result of the import. After that, the media
that have been successfully imported can be used to store data.
To check the protocol later on, see Checking Utilities Protocols on page 222.


4.6.3.2 Testing Jukebox Slots


If you have inserted or removed any media without using the commands Insert
Volume or Eject Volume, you must perform a slot test. This entails checking which
media are in the specified slots, and testing new media.
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree. All available
devices are listed in the top area of the result pane.

2.

Select the designated jukebox. The attached volumes are listed in the bottom
area of the result pane.

3.

Click Test Slots in the action pane.

4.

Enter the numbers of the slots to be tested.


Use the following entry syntax:

7        Specifies slot 7
3,6,40   Specifies slots 3, 6, and 40
3-7      Specifies slots 3 to 7 inclusive
2,20-45  Specifies slot 2 and slots 20 to 45 inclusive

5.

Click OK.
A protocol window shows the progress and the result of the slot test. To check
the protocol later on, see Checking Utilities Protocols on page 222.
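
The slot entry syntax shown above can be expanded into individual slot numbers as in the following minimal Python sketch. It is an illustration only and is not part of the Administration Client.

# Minimal sketch of parsing the slot entry syntax (illustrative only).
def parse_slots(spec):
    """Expand e.g. "2,20-45" into a sorted list of slot numbers."""
    slots = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            start, end = part.split("-", 1)
            slots.update(range(int(start), int(end) + 1))  # inclusive range
        else:
            slots.add(int(part))
    return sorted(slots)

print(parse_slots("7"))        # [7]
print(parse_slots("3,6,40"))   # [3, 6, 40]
print(parse_slots("3-7"))      # [3, 4, 5, 6, 7]
print(parse_slots("2,20-45"))  # [2, 20, 21, ..., 45]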

4.6.4 Initializing Storage Volumes


Every volume requires a name, and it must be assigned to a pool and known to the
Document Service database. Volumes that are written in ISO pools automatically get
a name and are assigned to a pool when the volume is written. The original and backup
volumes are assigned the same name. Identically named ISO volumes are
automatically assigned to the correct pool. In contrast, storage media that are used
in IXW pools have to be initialized and assigned to a pool. You can perform the
initialization automatically or manually.

Caution
Under Windows, writing signatures to media with the Windows Disk
Manager is not allowed. These signatures make the medium unreadable for
the archive.


4.6.4.1 Automatic Initialization and Assignment


When you set up and configure an IXW pool, you can define that the associated
media will be initialized automatically. In the pool configuration you specify a name
pattern for the media names. The initialized media are automatically assigned to the
corresponding pool.
Details:

Write Incremental (IXW) Pool Settings on page 78

Pools and Pool Types on page 35

4.6.4.2 Manual Initialization of Original Volumes


Volumes with the status -blank- have not yet been initialized. If you do not use
automatic initialization, you must initialize each volume manually and then assign
it to a pool.
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the jukebox where you inserted the media in the top area of the result
pane.

3.

Select a volume with the -blank- status in the bottom area of the result pane.

4.

Click Initialize Original in the action pane. The Init Volume window opens.

5.

Enter the Volume name.


The maximum length is 32 characters. You can only use letters (no umlauts),
digits and underscores. Give a unique name to every volume in the entire
network. This is a necessary precondition for the replication strategy in which
the replicates of archives and volumes must have the same name as the
corresponding originals. The following name structure is recommended:
<archive-name>_<pool-name>_<serial-number>_<side>.

6.

Click OK to initialize the volume.

7.

Assign the volume to the designated pool (see Creating and Modifying Pools
on page 74).
Note: WORM or UDO volumes, which are manually initialized, must be added
to the document service before they can be attached to a pool (see Add
Volume to Document Service on page 62).

4.6.4.3 Manual Initialization of Backup Volumes


IXW volumes with the status -blank- have not yet been initialized. If you do not
use automatic initialization, you must initialize each volume manually and then
assign it to a pool. If the volume should be a backup volume, it must be assigned to
the original volume.
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the jukebox where you inserted the media in the top area of the result
pane.

3.

Select a volume with the -blank- status in the bottom area of the result pane.

4.

Click Initialize Backup in the action pane. The Init Backup Volume window
opens.

5.

Select the original volume and click OK to initialize the backup volume.

4.6.4.4 Add Volume to Document Service


WORM or UDO volumes are automatically added to the document service after
initialization. Volumes must only be added manually if data is already stored on
them (e.g. disaster recovery).
Proceed as follows:
1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the jukebox where you inserted the media in the top area of the result
pane.

3.

Select a volume that does not have the -blank- status in the bottom area of the
result pane.

4.

Click Add Volume to Document Service in the action pane.

4.7 Checking Unavailable Volumes


If a document is requested that is stored on an offline medium, the requestor gets a
corresponding message. In addition, an entry is created in Devices (Unavailable
Volumes tab) in the Infrastructure object in the console tree. The administrator can
check how often this volume was requested. If needed, a removed volume can be
inserted again to enable access to the content on the volume (see Inserting a Single
Volume on page 58).
To check unavailable volumes, proceed as follows:


1.

Select Devices in the Infrastructure object in the console tree.

2.

Select the Unavailable Volumes tab in the result pane to list all unavailable
devices.


Chapter 5 Configuring Archives and Pools


Before you can work effectively with Archive and Storage Services, you have to
perform some configuration steps:

create and configure logical archives

create storage tiers

create and configure pools

schedule and configure jobs

configure security settings

configure the storage system

When you configure the archive system, you often have to name the configured
element. Make sure that all names follow the naming rule:
Naming rule for archive components
Archive component names must be unique throughout the entire archive
network. No umlauts or special characters may be used for the names of
archive components. This includes names of servers, archives, pools and
volumes. We recommend using only numerals and standard international
letters when assigning names to archive components. Archive and pool
names together may be a maximum of 31 characters in length since the
Document Service forms an internal pool name of the form <Archive
name>_<Pool name>, which may be a maximum of 32 characters in length.
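
A minimal Python sketch of this naming rule follows, assuming that "standard international letters" means the unaccented letters A-Z plus digits and underscores. It is an illustration, not the server's actual validation.

# Minimal sketch of the naming rule for archive components (illustrative only).
import re

NAME_CHARS = re.compile(r"^[A-Za-z0-9_]+$")  # no umlauts or special characters (assumed set)

def check_names(archive_name, pool_name):
    internal = archive_name + "_" + pool_name  # internal pool name formed by Document Service
    if not NAME_CHARS.match(archive_name) or not NAME_CHARS.match(pool_name):
        return "invalid characters (no umlauts or special characters)"
    if len(archive_name) + len(pool_name) > 31:  # internal name max. 32 incl. underscore
        return "archive and pool name together exceed 31 characters"
    return "ok ({})".format(internal)

print(check_names("A1", "ISO_POOL"))   # ok (A1_ISO_POOL)
print(check_names("Län1", "POOL"))     # invalid characters (no umlauts or special characters)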

5.1 Logical Archives


The logical archive is the logical unit for well-organized long-term data storage.
Within Administration Client, three groups of logical archive types are available:

Original Archives
Logical archives which are created on the currently administered (local) server.

Replicated Archives
Replications of original logical archives. These archives are located and
configured on known servers for remote standby scenarios. Thus, document
retrieval is possible although the access to the original archive is disconnected
(see Configuring Remote Standby Scenarios on page 161).


External Archives
Logical archives of known servers. These archives are located on known servers
and can be reached for retrieval (see Adding and Modifying Known Servers
on page 157).

For each original archive, you give a name and configure a number of settings:

Encryption, compression, blobs and single instance affect the archiving of a document.

Caching and cache servers affect the retrieval of documents.

Signatures, SSL and restrictions for document deletion define the conditions for
document access.

Timestamps and certificates for authentication ensure the security of documents.

Compliance mode, retention and deletion define the end of the document
lifecycle.

Some of these settings are pure archive settings. Other settings depend on the
storage method, which is defined in the pool type. The most relevant decision
criterion for their definition is single file archiving or container archiving.
Note on IXW pools
Volumes of IXW pools are regarded as container files. Although the documents
are written as single files to the medium, they cannot be deleted individually,
neither from finalized volumes (which are ISO volumes) nor from non-finalized volumes using the IXW file system information.
Of course, you can use retention also with container archiving. In this case, consider
the delete behavior that depends on the storage method and media (see When the
Retention Period Has Expired on page 188).

5.1.1 Data Compression


In order to save storage space, data compression is activated by default for all new
archives. You can deactivate compression for individual archives, see Configuring
the Archive Settings on page 70.
Formats to compress
All important formats including email and office formats are compressed by default.
You can check the list and add additional formats in Runtime and Core Services >
Configuration > Archive Server >
AS.DS.COMPONENT.COMPRESSION.COMPR_SETUP.COMPR_TYPES.row1
to .rowN.
Pools with buffer
For pools using a disk buffer, the Write job compresses the data in the disk buffer
and then copies the compressed data to the medium. After compressing a file, the
job deletes the corresponding uncompressed file.
If ISO images are written, the Write job checks whether sufficient compressed data
is available after compression as defined in Minimum amount of data to write. If so,
the ISO image is written. Otherwise, the compressed data is kept in the disk buffer
and the job is finished. The next time the Write job starts, the new data is
compressed and the amount of data is checked again.
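
A minimal Python sketch of this Write job behavior for ISO pools is shown below, under simplified assumptions: the disk buffer is modeled as a dictionary and zlib stands in for the server's compression. It is not the actual implementation.

# Minimal sketch of the Write job for ISO pools (illustrative only).
import zlib

def write_job_iso(disk_buffer, min_iso_amount):
    """disk_buffer maps component name -> (compressed_flag, data)."""
    # Compress files that are still uncompressed; the uncompressed copy is dropped.
    for name, (is_compressed, data) in list(disk_buffer.items()):
        if not is_compressed:
            disk_buffer[name] = (True, zlib.compress(data))
    total = sum(len(data) for _, data in disk_buffer.values())
    if total < min_iso_amount:
        return None  # not enough compressed data yet; keep it for the next run
    # Enough data: an ISO image would be written from the compressed files now.
    # (Removing them from the buffer is left to the Purge_Buffer job.)
    return {name: data for name, (_, data) in disk_buffer.items()}

# Usage with a 1 MB threshold: small amounts of data remain buffered.
buf = {"doc1": (False, b"invoice data " * 1000)}
image = write_job_iso(buf, min_iso_amount=1024 * 1024)
print(image is None)  # True: data stays in the buffer until the next run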
HDSK pool

When you create an HDSK pool, the Compress_<Archive name>_<Pool name> job is
created automatically for data compression. This job is activated by default.

5.1.2 Single Instance


You can configure a logical archive in a way that requests to archive the same
component do not result in a copy of the component on the archive server. The
component is archived only once and then referenced. This method is called Single
Instance Archiving (SIA) and it saves disk space. It is mainly used if a large number
of emails with identical attachments have to be archived.
By default, Single Instance Archiving is disabled. You can enable it, for example, for
email archives, see Configuring the Archive Settings on page 70.
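
The principle can be illustrated with a minimal Python sketch, where a content hash stands in for the server's component identification. The data structures are hypothetical and do not reflect the product's internal format.

# Minimal sketch of Single Instance Archiving (illustrative only).
import hashlib

class SingleInstanceStore:
    def __init__(self):
        self.components = {}   # content hash -> stored bytes
        self.references = {}   # (doc_id, component name) -> content hash

    def archive(self, doc_id, comp_name, data):
        digest = hashlib.sha256(data).hexdigest()
        if digest not in self.components:
            self.components[digest] = data             # archived only once
        self.references[(doc_id, comp_name)] = digest  # further requests only reference it

    def read(self, doc_id, comp_name):
        return self.components[self.references[(doc_id, comp_name)]]

store = SingleInstanceStore()
attachment = b"%PDF-1.4 ... same attachment in many emails ..."
store.archive("mail-1", "att", attachment)
store.archive("mail-2", "att", attachment)   # no second copy is stored
print(len(store.components))                 # 1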
Important

Open Text strongly recommends not using single instance in combination with
retention periods for archives containing pools for single file archiving (FS, VI, HDSK).

If you want to use SIA together with retention periods, consider Retention on page 65.

If necessary, you can exclude component types (known as application types in
former archive server versions) from Single Instance Archiving in Runtime and
Core Services > Configuration > Archive Server >
AS.DS.COMPONENT.SIA.SIATYPE_SETUP.SIA_TYPES.row1 to .rowN. MS
Exchange and Lotus Notes emails are excluded by default because, although the emails
themselves are unique, their attachments are archived with SIA.
SIA and ISO images
Be careful when using Single Instance Archiving and ISO images: Emails can consist
of several components, e.g. logo, footer, attachment, which are handled by Single
Instance Archiving. When using ISO images, these components can be distributed
over several images. When reading an email, several ISO images must be accessed
to read all the components in order to recompose the original email. Caching for
frequently used components and proper parameter settings will improve the read
performance.

5.1.3 Retention
Various regulations require the storage of documents for a defined retention period.
During this time, documents must not be modified or deleted. When the retention
period has expired, documents can be deleted, mainly for two reasons:

to free storage space and thus to save costs,

to get rid of documents that might create a liability for the company.


To facilitate compliance with regulations and meet the demands of companies,
Archive and Storage Services can handle retention of documents in cooperation
with the leading application and the storage subsystem. The leading application
manages the retention of documents, and Archive and Storage Services executes the
requests or passes them to the storage system. Thus, retention is handled top down
and assures that a document cannot be deleted or modified during its retention
period. Notes and annotations can be added; they are add-ons and do not change
the document itself.
All components that are defined as add-ons and that can be modified during the
retention period are listed in Runtime and Core Services > Configuration >
Archive Server > AS.DS.HTTP.ADDONS.ADDON_NAMES.row1 to .rowN.
The retention period - more precisely, the expiration date of the retention period - is
a property of a document and is stored in the database and additionally, if possible,
together with the document on the storage medium. The document gets the
retention period in one of the following ways:

the client of the leading application sends the retention period explicitly

the retention period is set for the logical archive within Archive and Storage
Services

if both are given, the leading application has priority

If supported by the storage subsystem, the retention period is propagated to it.


Changes of the retention period in the archive settings do not influence the archived
documents.
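
A minimal Python sketch of this precedence follows, using hypothetical parameter names; retention periods are given in days here purely for illustration.

# Minimal sketch of how the effective retention expiration is determined
# (hypothetical names; the real logic is part of the archive server).
from datetime import date, timedelta

def effective_expiration(archive_date, client_retention_days, archive_retention_days):
    """The leading application has priority; otherwise the archive setting applies."""
    days = client_retention_days if client_retention_days is not None else archive_retention_days
    if days is None:
        return None  # no retention period at all
    return archive_date + timedelta(days=days)

# The client sends 3650 days explicitly; the archive default of 1825 days is ignored.
print(effective_expiration(date(2011, 12, 20), 3650, 1825))   # 2021-12-17
# No client value: the logical archive's retention setting is used.
print(effective_expiration(date(2011, 12, 20), None, 1825))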
When the retention period has expired, Archive and Storage Services allows the
client to delete the document. The leading application must send the deletion
request. Two retention-independent settings can prevent deletion: document
deletion settings for the logical archive (see Document deletion on page 69) and the
maintenance level of Archive and Storage Services (see Setting the Operation Mode
of Archive and Storage Services on page 304). The deletion process has two aspects:


Delete the document logically, that means: Delete the information on the
document from the archive database so that retrieval is not possible any longer.
Only the information that the document was deleted is kept. This step is
executed as soon as the delete request arrives.

Delete the document physically from the storage media. The time of this action
depends on the storage method:

Documents that are stored as single files can be deleted immediately.

Documents that are stored in containers (ISO images, blobs, finalized and
non-finalized IXW volumes) can be deleted physically only when the
retention period of all documents in the container has expired and all
documents are deleted logically. The Delete_Empty_Volumes job checks for
such volumes and removes them, if the underlying storage system does not
prevent it.
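
A minimal Python sketch of this clean-up condition follows, with hypothetical data structures; it is not the Delete_Empty_Volumes job itself.

# Minimal sketch of the condition checked for container volumes
# (ISO images, blobs, IXW volumes) before physical removal.
from datetime import date

def volume_can_be_removed(documents, today):
    """True only if every document on the volume is logically deleted
    and its retention period has expired."""
    for doc in documents:
        if not doc["logically_deleted"]:
            return False
        expires = doc["retention_expires"]   # None means: no retention set
        if expires is not None and expires > today:
            return False
    return True

docs = [
    {"logically_deleted": True, "retention_expires": date(2010, 1, 1)},
    {"logically_deleted": True, "retention_expires": None},
]
print(volume_can_be_removed(docs, date(2011, 12, 20)))   # True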


Notes:

If you use retention for archives with Single Instance Archiving (SIA), make
sure that documents with identical attachments are archived within a short
timeframe and the documents in one archive have similar retention periods.
See also: Single Instance on page 65.

You cannot export volumes containing at least one document with non-expired retention, or import volumes that are logically empty.

As regulations may change in the course of time, you can adapt the
retention period of documents by means of a complete document
migration, see Migration on page 225.

See also:

Configuring the Archive Retention Settings on page 72

When the Retention Period Has Expired on page 188

5.2 Creating and Configuring Logical Archives


On each archive server, one or more logical archives can be created. To do this,
proceed with the following main steps:
1.

Creating a Logical Archive on page 68

2.

Configuring the Archive Security Settings on page 68

3.

Configuring the Archive Settings on page 70

4.

Configuring the Archive Retention Settings on page 72

5.

Creating and Modifying Storage Tiers on page 81

6.

Creating and Modifying Pools on page 74

Table 5-1: Default values of pool-independent archive settings

Setting        Default value
Caching        Off
Compression    On
Encryption     Off
Timestamps     Off


Table 5-2: Archive settings dependent on storage method

Setting          Default value   Value for single file archiving   Value for container archiving
                                 pool types: HDSK, Single file     pool types: ISO, IXW
                                 (FS), Single file (VI)
Blobs            Off             Off                               On (possible)
Single instance  Off             Off                               On (possible)
Retention        Off             On (possible)                     Off (recommended)

5.2.1 Creating a Logical Archive


First a logical archive must be created. After this, you can configure the different
settings of the archive.
Proceed as follows:
1.

Select Original Archives in the Archives object in the console tree.

2.

Click New Archive in the action pane. The window to create a new logical
archive opens.

3.

Enter archive name and description.


Archive name
Unique name of the new logical archive. Consider the Naming rule for
archive components on page 63.
In the case of SAP applications, the archive name consists of two
alphanumeric characters (only uppercase letters and digits).
Description
Brief, self-explanatory description of the new archive.

4.

Click Next and read the information carefully.

5.

Click Finish to create the new archive.

5.2.2 Configuring the Archive Security Settings


In the Security tab of the properties dialog, you specify the settings for SecKeys
and SSL. You also specify whether document deletion is allowed.
Proceed as follows:


1.

Select the logical archive in the Original Archives object of the console tree.

2.

Click Properties in the action pane. The property window of the archive opens.

3.

Select the Security tab. Check the settings and modify them, if needed.


   Authentication (SecKey) required to
      This setting is required if the archive system is configured to support signed URLs (SecKeys) and the archive is used by a leading application using URLs with SecKeys.
      The settings determine the access rights to documents in the selected archive which were archived without a document protection level, or if document protection is ignored. The document protection level is defined by the leading application and archived with the document. It defines for which operations on the document a valid SecKey is required.
      See also: Protection levels on page 93 and SecKeys / Signed URLs on page 92.
      Select the operations that you want to protect. Only users with a valid SecKey can perform the selected operations. If an operation is not selected, everybody can perform it.

   SSL
      Specifies whether SSL is used in the selected archive for authorized, encrypted HTTP communication between the Imaging Clients, archive servers, cache servers and Document Pipelines (see Secure HTTP Communication with SSL on page 100).
      Use: SSL must be used.
      Don't use: SSL is not used.
      May use: The use of SSL for the archive is allowed. The behavior depends on the clients' configuration parameter HTTP UseSSL (see also the Livelink Archive Windows Viewer and Livelink DesktopLink Configuration Parameters (CL-RCP) manual).
      Open Text Imaging Java Viewer does not support SSL.

   Document deletion
      Here you decide whether deletion requests from the leading application are performed for documents in the selected archive, and what information is given. You can also prohibit deletion of documents for all archives of the archive server. This central setting has priority over the archive setting.
      See also: Setting the Operation Mode of Archive and Storage Services on page 304.
      Deletion is allowed
         Documents are deleted on request if no maintenance mode is set and the retention period has expired.
      Deletion causes error
         Documents are not deleted on request, even if the retention period has expired. A message informs the administrator about deletion requests.
      Deletion is ignored
         Documents are not deleted on request, even if the retention period has expired. No information is given.


4. Click OK to confirm the settings.

5.2.3 Configuring the Archive Settings

In the Settings tab of the properties dialog, you specify how documents are handled in the archive.

Proceed as follows:

1. Select the logical archive in the Original Archives object of the console tree.

2. Click Properties in the action pane. The property window of the archive opens.

3. Select the Settings tab. Check the settings and modify them, if needed.
   Compression
      Activates data compression for the selected archive.
      See also: Data Compression on page 64
   Encryption
      Activates data encryption to prevent unauthorized persons from accessing archived documents.
      See also: Encrypted Document Storage on page 101.
   Blobs
      Activates the processing of blobs (binary large objects). Very small documents are gathered in a meta document (the blob) in the disk buffer and are written to the storage medium together. This method improves performance. If a document is stored in a blob, it can be destroyed only when all documents of this blob are deleted. Thus, blobs are not supported in single file storage scenarios and should not be used together with retention periods.
   Single instance
      Enables single instance archiving.
      See also: Single Instance on page 65.
   Delayed archiving
      Select this option if the documents should remain in the disk buffer until the leading application allows Archive and Storage Services to store them on final storage media.
      Example: The document arrives in the disk buffer without a retention period, and the leading application will provide the retention period shortly after. The document must not be written to the storage media before it gets the retention period. To ensure this processing, enable the Event based retention option in the Edit Retention dialog box, see Configuring the Archive Retention Settings on page 72.
   Cache enabled
      Activates the caching of documents to the DS cache at read access.


   Cache
      Pull-down menu to select the cache path. Before you can assign a cache path, you must create it (see Creating and Deleting Caches on page 54 and Configuring Caches on page 52).
   ArchiSig Timestamps with strict timestamp verification
      The ArchiSig timestamps are verified. If the timestamp is not valid or does not exist, the administrator is informed and the document is not delivered to the client.
   ArchiSig Timestamps with relaxed timestamp verification
      The ArchiSig timestamps are verified. If the timestamp is not valid, the administrator is informed but the document is delivered in spite of this.
   ArchiSig Timestamps with no timestamp verification
      No ArchiSig timestamp verification is performed.
   No use of timestamps (cannot be selected after timestamp activation)
      Deactivates the assignment of timestamps to documents entirely.

      Important
         To ensure consistent usage of timestamps, you cannot enable this setting after timestamp verification was enabled. See also: Timestamps on page 107.

4. Click OK to confirm the settings.

5.2.3.1 Configuring the Server Priorities

If you use several servers for an archive, you have to specify the sequence used to search for documents in the selected archive. The server at the top of this list is accessed first. If access is refused, the request is routed to the second server in the list. This enables you to specify that a server first searches in its own replicated archives before searching in the original archive on the original server, or vice versa.
Configuring the server priorities is necessary when using replicated or external archives; see Configuring the Remote Standby Server on page 162.

Proceed as follows:

1. Select the logical archive in the Original Archives, Replicated Archives, or External Archives object of the console tree.

2. Click Change Server Priorities in the action pane.

3. In the Change Server Priorities window, select the server(s) to add from the Related servers list on the left. Click the button to move the selected server(s) to the Set priorities list.

   Note: You can use up to three servers.

4. Use the arrows on the right to define the order of the servers: Select a server and click the up or down arrow to move the server up or down in the list, respectively. If you want to remove a server from the priorities list, select the server to remove and click the remove button.

5. Click Finish.

5.2.4 Configuring the Archive Retention Settings

In the Retention tab of the properties dialog, you specify the compliance behavior and document lifecycle requirements. When the retention period of a document has expired and deletion is not otherwise prohibited, Archive and Storage Services accepts and executes deletion requests from the leading application.

Proceed as follows:

1. Select the logical archive in the Original Archives object of the console tree.

2. Click Properties in the action pane. The property window of the archive opens.

3. Select the Retention tab. Check the settings and modify them, if needed.
   No retention
      Use this option if the leading application does not support retention, or if retention is not relevant for documents in the selected archive. Documents can be deleted at any time if no other settings prevent it.
   No retention read only
      Like No retention, but documents cannot be changed.
   Retention period of x days
      Enter the retention period in days. The retention period of the document is calculated by adding this number of days to the archiving date of the document. It is stored with the document.
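      Example (dates and values are illustrative): a document archived on 2011-01-15 with a retention period of 3650 days (about ten years) cannot be deleted before January 2021.
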
   Event based retention
      This method is used if a retention period is required but, at the time of archiving, it is unknown when the retention period will start. The leading application must send the retention information after the archiving request. When the retention information arrives, the retention period is calculated by adding the given period to the event date. Until the document gets its calculated retention period, it is secured with maximum (infinite) retention.
      You can use the option in two ways:
      Together with the Delayed archiving option
         The leading application sends the retention period separately from and shortly after the archiving request (for example, in Extended ECM for SAP Solutions). The documents should remain in the disk buffer until they get their retention period. They are written to final storage media together with the calculated retention period when the leading application requests it. To ensure this scenario, enable the Delayed archiving option in the Settings tab, see Configuring the Archive Settings on page 70. Regarding storage media and deletion of documents, the scenario does not differ from that with a given Retention period of x days.
      Without Delayed archiving
         The retention period is set a longer time after the archiving request, and the document should be stored on final storage media during this time. For example, in Germany, personnel files of employees must be stored for 5 years after the employee left the company. The files are immediately archived on storage media, and the retention period is set at the leaving date. This scenario is only supported for archives with an HDSK pool or a Single File (VI) pool (if supported by the storage system). In all other pools, the documents would be archived with infinite retention, and the retention period cannot be changed after archiving (only with migration). For the same reason, do not use blobs in this scenario.
   Infinite retention
      Documents in the archive can never be deleted. Use this setting for documents that must be stored for a very long time.
   Compliance
      Compliance mode enabled
         If this mode is enabled, documents cannot be deleted even by the DS administrator before the retention period has expired. Documents with an unknown retention period (event based retention before the event) cannot be deleted either. The auditing of the document lifecycle is activated (see Configuring Auditing on page 287).
         If this mode is disabled, an administrator can delete documents with internal commands even if the retention period has not expired.

         Important
            To ensure consistent usage of the compliance mode, you can only enable this setting but not disable it later.
   Purge
      Destroy (unrecoverable)
         This additional option is only relevant for archives with hard disk storage. If enabled, the system first overwrites the file content several times and then deletes the file.

4. Click OK to confirm the settings.

Important
   Documents with an expired retention period are only deleted if:
   •  document deletion is allowed, see Configuring the Archive Security Settings on page 68, and
   •  no maintenance mode is set, see Setting the Operation Mode of Archive and Storage Services on page 304.

See also:

•  Retention on page 65
•  When the Retention Period Has Expired on page 188

5.3 Creating and Modifying Pools

At least one pool belongs to each logical archive. A pool contains physical storage volumes for long-term storage. These volumes are written in the same way. The physical storage media are assigned to the pool either automatically or manually.

The procedure for creating and configuring a pool depends on the pool type. The main differences in the configuration are:

•  Usage of a disk buffer. All pool types, except the HDSK (write through) pools, require a buffer.
•  Settings of the Write job. The Write job writes the data from the buffer to the final storage media. For all pool types, except the HDSK pool, a Write job must be configured.

Note: Consider that the component types of a pool (known as application types in former archive server versions) are displayed for information, but cannot be changed (read only).

To determine the pool type that suits the scenario and the storage system in use, read the Hardware Release Notes (see Open Text Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).

For more information on pools and pool types, see Pools and Pool Types on page 35.

5.3.1 Creating and Modifying a HDSK (Write Through) Pool

The HDSK (write through) pool is the only pool that works without a buffer. Each document is directly written to the storage media, in this case a local hard disk volume or SAN system. Thus, no Write job needs to be configured. Before you can create a pool, create the logical archive, see Creating and Configuring Logical Archives on page 67.

Note: HDSK pools are not intended for use in productive archive systems, but for test purposes and special requirements. Do not use more than one HDSK pool.

Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.

2. Select the designated archive in the console tree.

3. Click New Pool in the action pane. The window to create a new pool opens.

4. Enter a unique, descriptive Pool name. Consider the naming conventions, see Naming rule for archive components on page 63.

5. Select Write through (HDSK) and click Next.

6. Select a Storage tier (see Creating and Modifying Storage Tiers on page 81). The name of the associated compression job is created automatically.

7. Click Finish to create the pool.

8. Select the pool in the top area of the result pane and click Attach Volume. A window with all available hard disk volumes opens (see Creating and Modifying Disk Volumes on page 46).

9. Select the designated disk volume and click OK to attach it.

Scheduling the compression job
   To schedule the associated compression job, select the pool and click Edit Compress Job in the action pane. Configure the scheduling as described in Configuring Jobs and Checking Job Protocol on page 83.

Modifying a HDSK pool
   To modify pool settings, select the pool and click Properties in the action pane. Only the assignment of the storage tier can be changed.

5.3.2 Creating and Modifying Pools with a Buffer

All pool types that use a disk buffer are created in the same way. The only differences are the settings of the Write job. This section describes the main steps to create pools. The special settings for the Write job are described in separate sections.

Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.

2. Select the designated archive in the console tree.

3. Click New Pool in the action pane. The window to create a new pool opens.

4. Enter a unique (per archive), descriptive Pool name. Consider the naming conventions, see Naming rule for archive components on page 63.

5. Select the designated pool type and click Next.


6. Enter additional settings according to the pool type:

   •  Write At-once Pool (ISO) Settings on page 76
   •  Write Incremental (IXW) Pool Settings on page 78
   •  Single File (VI, FS) Pool Settings on page 80

7. Click Finish to create the pool.

8. Select the pool in the top area of the result pane and click Attach Volume. A window with all available hard disk volumes opens (see Creating and Modifying Disk Volumes on page 46).

9. Select the designated disk volume and click OK to attach it.

10. Schedule the Write job, see Configuring Jobs and Checking Job Protocol on page 83.

Modifying a pool
   To modify pool settings, select the pool and click Properties in the action pane. Depending on the pool type, you can modify settings or assign another buffer.

   Important
      You can assign another buffer to the pool. If you do so, make sure that:
      •  all data from the old buffer is written to the storage media,
      •  the backups are completed,
      •  no new data can be written to the old buffer.
      Data that remains in the buffer will be lost after the buffer change.

5.3.2.1 Write At-once Pool (ISO) Settings

Below you find the settings for the configuration of write at-once pools.

Storage Selection
   Storage tier
      Select the designated storage tier (see Creating and Modifying Storage Tiers on page 81).

Buffering
   Used disk buffer
      Select the designated buffer (see Configuring Buffers on page 47).

Writing
   Write job
      The name of the associated Write job is created automatically. The name can only be changed during creation, but not modified later. To schedule the Write job, see Configuring Jobs and Checking Job Protocol on page 83.
   Original jukebox
      Select the original jukebox.
   Volume Name Pattern
      Defines the pattern for creating volume names. $(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the placeholder for the archive name, $(POOL) for the pool name and $(SEQ) for an automatic serial number. The prefix $(PREF) is defined in Runtime and Core Services > Configuration > Archive Server > AS.ADMS.JOBS.AUTOINIT.ADMS_PART_PREFIX. You can define any pattern, only the placeholder $(SEQ) is mandatory. You can also insert a fixed text. The initialization of the medium is started by the Write job.
      Click Test Pattern to view the name planned for the next volume based on this pattern.
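      Example (names are illustrative): with the default pattern, a prefix of A1, an archive named HR and a pool named POOL1, the next volume would be named something like A1_HR_POOL1_1; the exact formatting of the serial number may differ, so use Test Pattern to check the actual result.
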
   Allowed media type
      Here you specify the permitted media type. ISO pools support:
      DVD-R
         You can find the supported DVD-R types in the Hardware Release Notes (see Open Text Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).
      WORM
         You can find the supported WORM types in the Hardware Release Notes (see Open Text Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).
      HD-WO
         HD-WO is the media type supported with many storage systems. An HD-WO medium combines the characteristics of a hard disk and WORM: fast access to documents and secure document storage. Enter also the maximum size of an ISO image in MB, separated by a colon.
         For some storage systems, the maximum size is not required; refer to the documentation of your storage system (see Open Text Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).
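         Example (assuming the media type and the maximum size are combined in this way, with an illustrative size value): an entry such as HD-WO:4096 would limit ISO images on HD-WO media to 4096 MB.
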

   Number of volumes
      Number of ISO volumes to be written in the original jukebox. This number consists of the original and the backup copies in the same jukebox. For virtual jukeboxes (HD-WO media), the number of volumes must always be 1, as backups must not be written to the same medium in the same storage system.


   Minimum amount of data
      Minimum amount of data to be written, in MB. At least this amount must have been accumulated in the disk buffer before any data is written to storage media. The quantity of data that you select here depends on the media in use. For the HD-WO media type, the value must be less than the maximum size of the ISO image that you entered in the Allowed media type field.

Backup
   Backup enabled
      Enable this option if the volumes of a pool are to be backed up locally in a second jukebox of this archive server. During the backup operation, the Local_Backup job only considers the pools for which backup has been enabled.
      See also: Backup of ISO Volumes on page 207
      Exception
         For a local backup of optical ISO media, the Write job is already configured in such a way that multiple ISO media are written in the same jukebox. The Backup option is not required.
   Backup jukebox
      Select the backup jukebox. For virtual jukeboxes with HD-WO media, we strongly recommend configuring the original and backup jukeboxes on physically different storage systems.
   Number of backups
      Number of backup media that are written in the backup jukebox. For virtual jukeboxes (HD-WO media), the number of backups is restricted to 1.
   Number of drives
      Number of write drives that are available on the backup jukebox. The setting is only relevant for physical jukeboxes.

See also:

•  Creating and Modifying Pools with a Buffer on page 75
•  Pools and Pool Types on page 35

5.3.2.2 Write Incremental (IXW) Pool Settings

Below you find the settings for the configuration of write incremental pools.

Storage Selection
   Storage tier
      Select the designated storage tier (see Creating and Modifying Storage Tiers on page 81).

Buffering
   Used disk buffer
      Select the designated buffer (see Configuring Buffers on page 47).

Initializing
   Auto initialization
      Select this option if you want to initialize the IXW media in this pool automatically, see also Initializing Storage Volumes on page 60.
   Original jukebox
      Select the original jukebox.
   Volume Name Pattern
      Defines the pattern for creating volume names. $(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the placeholder for the archive name, $(POOL) for the pool name and $(SEQ) for an automatic serial number. The prefix $(PREF) is defined in Runtime and Core Services > Configuration > Archive Server > AS.ADMS.JOBS.AUTOINIT.ADMS_PART_PREFIX. You can define any pattern, only the placeholder $(SEQ) is mandatory. You can also insert a fixed text. The initialization of the medium is started by the Write job.
      Click Test Pattern to view the name planned for the next volume based on this pattern.
   Allowed media type
      The media type is always WORM, for both WORM and UDO media.

Writing
   Write job
      The name of the associated Write job is created automatically. The name can only be changed during creation, but not modified later. To schedule the Write job, see Configuring Jobs and Checking Job Protocol on page 83.
   Number of drives
      Number of write drives that are available on the original jukebox.
   Auto finalization
      Select this option if you want to finalize the IXW media in this pool automatically, see also Finalizing Storage Volumes on page 185.
   Filling level of volume: ... %
      Defines the filling level in percent at which the volume should be finalized. The Storage Manager automatically calculates and reserves the storage space required for the ISO file system. The filling level therefore refers to the space remaining on the volume.
   and last write process: ... days
      Defines the number of days since the last write access.
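      Example (values are illustrative): with a filling level of 95 % and 30 days since the last write process, a volume would be finalized once it is about 95 % full and has not been written to for 30 days.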


Backup
   Backup enabled
      Enable this option if the volumes of a pool are to be backed up locally in a second jukebox of this archive server. During the backup operation, the Local_Backup job only considers the pools for which backup has been enabled.
   Backup jukebox
      Select the backup jukebox.
   Number of backups
      Number of backup media that are written in the backup jukebox.
   Number of drives
      Number of write drives that are available on the backup jukebox. The setting is only relevant for physical jukeboxes.

See also:

•  Creating and Modifying Pools with a Buffer on page 75
•  Pools and Pool Types on page 35

5.3.2.3 Single File (VI, FS) Pool Settings

Below you find the settings for the configuration of single file pools.

Storage Selection
   Storage tier
      Select the designated storage tier (see Creating and Modifying Storage Tiers on page 81).

Buffering
   Used disk buffer
      Select the designated buffer (see Configuring Buffers on page 47).

Writing
   Write job
      The name of the associated Write job is created automatically. The name can only be changed during creation, but not modified later. To schedule the Write job, see Configuring Jobs and Checking Job Protocol on page 83.
   Documents written in parallel
      Number of documents that can be written at once.

See also:

•  Creating and Modifying Pools with a Buffer on page 75
•  Pools and Pool Types on page 35


5.3.3 Marking the Pool as Default

The default pool is only used if no application type or no storage tier is assigned to the content.

Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.

2. Select the designated archive in the console tree.

3. Select the pool which should be the default pool in the top area of the result pane.

4. Click Set as Default Pool in the action pane and click OK to confirm.

5.4 Creating and Modifying Storage Tiers

Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.

Example 5-1: Some storage tier examples

   Business-critical
      Description: Important to the enterprise, reasonable performance, good availability
   Accessible Online Data
      Description: Low access
   Nearline Data
      Description: Rare access, large volumes

Proceed as follows:

1. Select Storage Tiers in the System object. The present storage tiers are listed in the result pane.

2. Click New Storage Tier in the action pane.

3. Enter a name and a short description of the storage tier.

4. Click Finish.


Modifying storage tiers
   To modify a storage tier, select it and click Properties in the action pane. Proceed in the same way as when creating a storage tier.

See also:

•  Creating and Modifying Pools on page 74


Chapter 6
Configuring Jobs and Checking Job Protocol

A job is a recurrent task that is automatically started according to a time schedule or when certain conditions are met. Jobs related to an archive server are set up during installation of Archive and Storage Services. Pool and cache server jobs (Write, Purge_Buffer and Copy_Back) are configured when the pool is created or a cache server is attached to a logical archive. The successful execution of jobs can be checked in a protocol.

6.1 Important Jobs and Commands

The tables list all pre-configured jobs and commands for user-defined jobs.

Table 6-1: Preconfigured jobs

SYS_EXPIRE_ALERTS
   Command: Alert_Cleanup
   Description: Deletes notifications of the alert type that are older than a given number of hours. The default is 48 hours and can be changed in: Runtime and Core Services > Configuration > Archive Server > AS.ADMS.JOBS.ADMS_ALRT_EXPIRE.

SYS_CLEANUP_ADMAUDIT
   Command: Audit_Sweeper
   Description: Deletes administrative audit information that is older than a given number of days, see Auditing or SYS_CLEANUP_ADMAUDIT job on page 290. Do not activate this job if you use the auditing feature.

Local_Backup
   Command: backup
   Description: Writes the backup of a volume to a local backup jukebox, for all pools where the Backup option is enabled.

Compress_Storm_Statistics
   Command: compress_storm_statistics
   Description: Compresses the statistic files written by STORM, see Storage Manager Statistics on page 293.

Organize_Accounting_Data
   Command: organizeAccData
   Description: Archives or deletes old accounting data, see Accounting on page 290.

SYS_CLEANUP_PROTOCOL
   Command: Protocol_Sweeper
   Description: Deletes old job protocol entries, see also Checking the Execution of Jobs on page 88.

Delete_Empty_Volumes
   Command: delete_empty_volumes
   Description: Deletes volumes that contain only deleted documents whose retention period has expired in Document Service and STORM.

SYS_REFRESH_ARCHIVE
   Command: Refresh_Archive_Info
   Description: Synchronizes the configuration information of the known archive servers.

Save_Storm_Files
   Command: save_storm_files
   Description: Performs a backup of STORM configuration files and the IXW file system information, see Backup and Restoring of the Storage Manager Configuration on page 216.

Synchronize_Replicates
   Command: synchronize
   Description: Replicates the data in a remote standby scenario.

Purge_Expired
   Command: purge_expired
   Description: Deletes abandoned files from storage, which are listed in the ds_to_be_deleted table, by executing dsPurgeExp -r now. The files in this table are logically deleted but not yet physically deleted. Works only for GS and HDSK/HSM volumes.

Table 6-2: Pool-related jobs

Write_CD
   Writes data from the disk buffer to storage media as ISO images; belongs to ISO pools.

Write_WORM
   Writes data incrementally from the disk buffer to WORM and UDO media; belongs to IXW pools.

Write_GS
   Writes single files from the disk buffer to a storage system through the interface of the storage system (vendor interface); belongs to Single File (VI) pools.

Write_HDSK
   Writes single files from the disk buffer to the file system of an external storage system; belongs to Single File (FS) pools.

Purge_Buffer
   Deletes the contents of the disk buffer according to conditions, see Configuring Buffers on page 47.

backup_pool
   Performs the backup of all volumes of a pool.

Compress_HDSK
   Compresses the data in an HDSK pool.

Table 6-3: Other jobs

Copy_Back
   Transfers cached documents from the cache server to the archive server. The Copy_Back job is disabled by default and must only be enabled for archive servers with write back mode enabled. See Configuring Archive Cache Services on page 173. By default, documents not older than three days are transferred. A message appears if there are older documents remaining. The default setting can be modified by changing the job settings: add the argument -i <days> to set the interval. Typically, the job is scheduled to start in times of low network traffic.

Migrate_Volumes
   Controls the operation of the Migration service that performs media migration, see Migration on page 225.

compare_backup_worms
   Checks one or more backup IXW volumes. Enter the volume name(s) as argument. You can use the * wildcard. If no argument is set, all backup IXW volumes in all jukeboxes are compared.

hashtree
   Builds the hash trees for ArchiSig timestamps, see ArchiSig timestamps on page 107.

pagelist
   Creates the index information for SAP print lists (pagelist).

start<DPname>
   Starts the Document Pipelines for the import scenarios:
   •  import content (documents/data) with extraction of attributes from content (CO*),
   •  import content (documents/data) and attributes (EX*),
   •  import forms (FORM).
   See OpenText Document Pipelines - Overview and Import Interfaces (ARCDP) for more information.
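
For example (the value is illustrative), adding the argument -i 7 to the Copy_Back job transfers documents that are not older than seven days.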

6.2 Starting and Stopping the Scheduler

After installation, the scheduler is running by default. The jobs are started depending on their settings (see Setting the Start Mode and Scheduling of Jobs on page 87). If the scheduler is stopped, all started jobs are continued and finished, but no other jobs are started until the scheduler is started again. To start and stop certain jobs, see Starting and Stopping Jobs on page 86.

Proceed as follows:

1. Select Jobs in the System object in the console tree.

2. Depending on the current status of the scheduler, click Start Scheduler or Stop Scheduler in the action pane to change the status. The current status is displayed in the first line of the Jobs tab.

6.3 Starting and Stopping Jobs

Jobs can also be started and stopped manually if necessary.

Proceed as follows:

1. Select Jobs in the System object in the console tree.

2. Select the Jobs tab in the top area of the result pane. The jobs are listed.

3. Select the job you want to start or stop.

4. Depending on the current status of the job, click Start or Stop in the action pane to change the status of the job.

6.4 Enabling and Disabling Jobs

Jobs can be disabled to avoid their execution. Some jobs are disabled by default and must be enabled manually if necessary.

Proceed as follows:

1. Select Jobs in the System object in the console tree.

2. Select the Jobs tab in the top area of the result pane. The jobs are listed.

3. Select the job you want to enable or disable.

4. Click Enable or Disable in the action pane to change the status of the job.

6.5 Checking Settings of Jobs

Proceed as follows:

1. To check, create, modify and delete jobs, select Jobs in the System object in the console tree.

2. Select the Jobs tab in the top area of the result pane. The jobs are listed.

3. Select the job you want to check. The latest message of this job is listed in the bottom area of the result pane.

4. Click Edit to check details of the job. See also Creating and Modifying Jobs on page 87.


6.6 Creating and Modifying Jobs

Most of the jobs are created automatically. For example, pool-related jobs (Write, Purge_Buffer and Copy_Back) are configured when the pool is created. These jobs can be modified later if necessary. Jobs can also be created manually to start jobs automatically, e.g. the Alert_Cleanup job which is not archive or pool-related.

Proceed as follows:

1. Select Jobs in the System object in the console tree.

2. Select the Jobs tab in the top area of the result pane.

3. Click New Job in the action pane. The wizard to create a new job opens.

4. Enter a name for the new job. Select the command and enter the arguments depending on the job.

   Name
      Unique name of the job that describes its function so that you can distinguish between jobs having the same command. Do not use blanks and special characters. You cannot modify the name later.
   Command
      Select the job command to be executed. See also Important Jobs and Commands on page 83.
   Argument
      Entries can expand the selected command. The entries in the Arguments field are limited to 250 characters. See also Important Jobs and Commands on page 83.

5. Select the start mode of the job and click Next.

6. Depending on the start mode, define the scheduling settings or the previous job. See also Setting the Start Mode and Scheduling of Jobs on page 87.

7. Click Finish to complete.

Modifying jobs
   To modify a job, select it and click Edit in the action pane. Proceed in the same way as when creating a job.

6.7 Setting the Start Mode and Scheduling of Jobs

The start mode and the scheduling must be defined when you add or edit a job. A wizard supports you in defining the proper settings, see also Creating and Modifying Jobs on page 87.

A job can be started:

•  at a certain time,
•  when another job is finished,
•  when another job is finished with a certain return value,
•  at a certain time when another job has finished.

Start Mode
   Specification of the start mode. Check the mode to define specific settings.
   Scheduled
      If you use this start mode, you can define the start time of the job, specified by month, day, hour and minute. Thus, you can define daily, weekly and monthly jobs or define the repetition of jobs by setting a frequency (hours or minutes).
   After previous job finished
      If you use this start mode, you can specify the type of action that is to be performed before the job is started. You can select between successful starting of the Administration Server and other jobs.
      The return value indicates the result of a job run. If a job finishes successfully, it usually returns the value 0. To start a job only when the previous job finished successfully, enter 0 into the Return Value field.
      If you use the Time Frame option, you can specify a time period within which the execution of the job is allowed.
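      Example (an illustrative schedule, not a prescription): you could run a Write job Scheduled daily at 20:00, let the Local_Backup job start after the previous Write job finished with return value 0, and run the Purge_Buffer job Scheduled daily at 06:00, after the nightly backups have completed.
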
General recommendations for job scheduling

•  Distribute the jobs over the 24-hour day.
•  Jobs accessing the database on the same server must not collide, for example, the Write jobs, the Local_Backup job and the Purge_Buffer jobs.
•  Monitor the job messages and check the time period the jobs take. Adapt the job scheduling accordingly.

Scheduling for jobs using jukeboxes

•  Jobs accessing jukebox drives must not collide: different Write jobs, Local_Backup, Synchronize_Replicates (Remote Standby Server) and Save_Storm_Files.
•  Only one drive is used for Write jobs on WORM/UDO. Therefore, only one WORM/UDO can be written at a time. That means only one logical archive can be served at a time.
•  Backup jobs need two drives, one for the original and one for the backup media.

6.8 Checking the Execution of Jobs

Jobs are processes that are started automatically in accordance with a predefined schedule, e.g. jobs for writing storage media or for performing backups. Many of these jobs usually run at night when the load on Archive and Storage Services and the network is low. Every day, you must check whether the jobs have run correctly.

The entries in the job protocol are regularly deleted by the SYS_CLEANUP_PROTOCOL job that usually runs weekly. You can modify the maximum age and number of protocol entries in Runtime and Core Services > Configuration > Archive Server > AS.ADMS.JOBS.ADMS_PROTOCOL_MAX_AGE and AS.ADMS.JOBS.ADMS_PROTOCOL_MAX_SIZE.

Checking the last message of a job

Proceed as follows:

1. Select Jobs in the System object in the console tree.

2. Select the Jobs tab in the top area of the result pane.

3. Select the job you want to check. The latest message of the job is listed in the bottom area of the result pane.

Checking job protocols

Proceed as follows:

1. Select Jobs in the System object in the console tree.

2. Select the Protocol tab in the top area of the result pane. All protocol entries are listed. Protocol entries with a red icon were terminated with an error. Green icons identify jobs that have run successfully.

3. Select a protocol entry to see detailed messages in the bottom area of the result pane.

4. Solve the problem.

5. Restart the job.

6. Check whether the execution was successful.

The following table lists the properties of a protocol entry:

Time
   Date and time when the job was started.
Name
   User-specific name of the job.
ID
   Execution identification of the job instance. The number appears on job initialization and is repeated on job execution.
Status
   INFO indicates that the job was completed successfully. ERROR indicates that the job was terminated with an error.
Command
   System command and arguments executed by the job.
Message
   Message generated by Archive and Storage Services. It provides more detailed information about how the job was terminated in case of an error.

Clearing protocol list

Proceed as follows:

1. Select Jobs in the System object in the console tree.

2. Select the Protocol tab in the top area of the result pane. All protocol entries are listed.

3. Click Clear protocol list in the action pane. All protocol entries are deleted.


Chapter 7

Configuring Security Settings


7.1 Overview

Archive and Storage Services provides several methods to increase security for data transmission and data integrity:

•  SecKeys / Signed URLs, for verification of URL requests (see SecKeys / Signed URLs on page 92).
•  Secure HTTP communication with SSL (see Secure HTTP Communication with SSL on page 100).
•  Encrypted document storage (see Encrypted Document Storage on page 101).
•  Checksums to recognize and reveal unwanted modifications to the documents on their way through the archive (see Checksums on page 106).
•  Timestamps to ensure that documents were not modified unnoticed in the archive (see Timestamps on page 107 and Timestamp Server on page 114).

The administration of the encryption certificates, the system keys and the timestamps is done in the Key Store in the System object of the console tree.

More information

You can find more information on security topics in the Security folder in the Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/open/15491557).
Configuration settings concerning security topics are described in more detail in part 2 "Configuration Reference: Archive and Storage Services, Document Pipeline, Monitor Server and Monitor Web Client" in Open Text Administration Help - Runtime and Core Services (ELCS100100-H-AGM) under Document Service, Key Store backup/restore tool and Timestamp Server.


Important

Changing security settings
   Security settings are usually configured during installation and initial configuration. If you want to change this configuration later, we strongly recommend contacting Open Text Customer Support first.

Protecting from computer viruses
   To archive clean documents, you must protect the documents from viruses before archiving. Archive and Storage Services does not perform any checks for viruses. To ensure error-free operation of Archive and Storage Services, locations where documents are stored temporarily, like disk buffer volumes, cache volumes and Document Pipeline directories, must not be scanned by any anti-virus software while Archive and Storage Services is using them.

ArchiveLink and certificates

Signed ArchiveLink connections between external applications and Enterprise Library Services require that the subject CN of the certificate and the name of the application for Enterprise Library Services are identical.
This can be achieved in two ways:

•  You can define the name of the application and configure the certificate correspondingly (for example, if you set up a whole new system).
•  You can gather the application ID (name of the application) from the certificate; see the procedure below.

To obtain the application name from a certificate:

1. Start Administration Client.

2. In the console tree, expand Archiving and Storage and log on to the archive server.

3. Select the Archives > Original Archives > <archive to connect> node.

4. In the result pane, from the Certificates tab, select the imported certificate.

5. In the action pane, click View Certificate.

6. From the Subject entry, note or copy the value after CN=.
   Use this value as the application ID when creating the application type (Libraries > <Library Server> > Customization > Applications).

7.2 SecKeys / Signed URLs

SecKeys
   Archive and Storage Services supports verification of SecKeys for HTTP communication. A SecKey is an additional parameter in the URL of the archive access. It contains a digital signature and a signature time and date. The requesting system creates a signature for the relevant parameters in the URL and the expiration time and signs it with a private key. Archive and Storage Services verifies the signature with the public key and only accepts requests with a valid signature whose SecKey has not yet expired.
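   The following commands only illustrate this sign-with-private-key / verify-with-public-key principle; the file names are placeholders, and the actual SecKey encoding, hash algorithm and URL parameter layout are defined by Archive and Storage Services and the leading application, not by these commands.

      # Illustration of the principle only: the leading application signs the
      # relevant URL parameters plus the expiration time with its private key ...
      openssl dgst -sha1 -sign key.pem -out seckey.sig urlparams.txt
      # ... and the archive server verifies the signature with the public key
      # taken from the imported certificate.
      openssl x509 -in cert.pem -pubkey -noout > pub.pem
      openssl dgst -sha1 -verify pub.pem -signature seckey.sig urlparams.txt
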
Certificates
   The certificates with public keys are related to the logical archives. You can use different keys for different archives if you have more than one leading application or document types with different security requirements. You can also use one certificate for several or all archives.

Remote Standby
   In a Remote Standby environment, the Synchronize_Replicates job replicates the certificates for authentication. Only enabled certificates are copied. The certificate on the Remote Server is disabled after synchronization; enable it as described in the procedure Enabling a Certificate on page 95.

Protection levels
   Whether SecKeys are verified or not is defined at several levels.

   •  For Archive and Storage Services, i.e. all archives on the archive server:
      In Administration Client: Runtime and Core Services > Configuration > Archive and Storage Services > AS.DS.SECURITY.GLOBAL_HTTP.SEC_ARCHIVESECURITYMODE
      If NO_SEC is set, Archive and Storage Services does not verify the SecKeys; other settings are not relevant.
      See also: Activating SecKeys on page 94.
   •  For the archive:
      Security settings in the archive's properties window (see Configuring the Archive Security Settings on page 68).
   •  For the document:
      The leading application can archive the document together with the document protection level. It defines for which actions on the document (create, read, update, delete) a valid SecKey is required.

   By default, the document protection has higher priority than the archive protection. The administrator can reverse the priority by enabling it in Administration Client: Runtime and Core Services > Configuration > Archive and Storage Services > AS.DS.SECURITY.GLOBAL_HTTP.SEC_DEFAULTPROTECTIONMODE.

Caution
   Do not use the Ignore Document Protection setting on a working server! Take care to enable the Signature required to settings for the archives. Otherwise, protected documents can be used without a valid SecKey.

Main tasks
   The administrator must send or import the certificate with the public key to the archive server. This procedure depends on the requesting leading application or component. On the archive server, the administrator must configure the usage of SecKeys:

   •  Configuring SecKeys on the Archive Server on page 94
   •  Importing and Checking Certificates for Authentication on page 96
   •  Using SecKeys from SAP on page 98
   •  Using SecKeys from Other Leading Applications and Components on page 98

7.2.1 Configuring SecKeys on the Archive Server

To use SecKeys on the archive server, the following main steps are necessary:

•  Import a certificate for authentication (see Importing and Checking Certificates for Authentication on page 96), or send a certificate to the archive server from another application (see Using SecKeys from Other Leading Applications and Components on page 98), or send a certificate to the archive server from SAP (see Using SecKeys from SAP on page 98).
•  Activate the SecKeys (see Activating SecKeys on page 94).
•  Enable the certificate (see Enabling a Certificate on page 95).
•  Change the privileges of a certificate if needed (see Granting Privileges for a Certificate on page 95).
•  Configure the security settings for the logical archives (see Configuring the Archive Security Settings on page 68).

7.2.1.1 Activating SecKeys

Proceed as follows:

1. Select Configuration > Archive and Storage Services in the Runtime and Core Services object in the console tree.

2. Select AS.DS.SECURITY.GLOBAL_HTTP.SEC_ARCHIVESECURITYMODE and click Set in the action pane.

3. Select SEC_R3L2 (PKCS7 signatures) and click Finish.

See also: Protection levels on page 93.


7.2.1.2 Enabling a Certificate

Important
   In case you are using Archive Cache Server, consider that a re-initialization in secure environments can only work if the current certificates are available on the cache server. To avoid problems, the Update documents security setting must be deselected before certificates are enabled. See step 4.

Proceed as follows:

1. Select Key Store in the System object of the console tree.

2. Select the Encryption Certificates tab in the result pane.

3. Check the fingerprint and view the certificate you have imported or sent (see Checking the Encryption Certificates on page 103).

4. If a cache server is assigned to the logical archive:

   a. Select Original Archives in the Archives object of the console tree.
   b. Select the logical archive in the console tree.
   c. Click Properties in the action pane and select the Security tab.
   d. De-select Update documents temporarily.

5. Select the encryption certificate and click Enable in the action pane.

7.2.1.3 Granting Privileges for a Certificate

The privileges of a certificate can be modified for each logical archive. Thus, privileges for certificates can be restricted for special requirements. For example, a scan station may not be allowed to delete documents. In that case, the Delete documents privilege must not be set in the certificate that is used to communicate with the scan station.

Important
   Any change made to the settings has an impact on all archives where the certificate is used!

To change privileges

Proceed as follows:

1. Select Original Archives in the Archives object of the console tree.

2. Select the logical archive in the console tree.


3. Select the Certificates tab in the result pane. All imported certificates are listed.

4. Select the designated certificate and click Change Privileges in the action pane.

5. Select the privileges you want to assign to the certificate. The following privileges are available:

   •  Read documents
   •  Create documents
   •  Update documents
   •  Delete documents
   •  Pass by
      This privilege is only evaluated in Enterprise Library Services scenarios. Pass by must be set for the certificate of the
      Archive Storage Provider,
      Enterprise Library Proxy Web Services (if used),
      Rendition Web Services (if used).
      Pass by must not be set for all other kinds of client certificates, e.g. SAP.

6. Click OK to confirm changes.

7.2.2 Importing and Checking Certificates for Authentication

To use certificate authentication for logical archives, the designated certificates must be imported and enabled.

7.2.2.1 Importing a Global Certificate for All Archives

A global certificate can be imported and assigned to all logical archives (global) at once. Global certificates are valid for all logical archives, including archives that will be created later on. A global certificate can only be enabled or disabled generally.
Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.

2. Click Import Global Certificate for Authentication in the action pane. A window to specify the certificate opens.

3. Enter a new ID or select an existing ID if you want to replace an existing certificate.

4. Click Browse to open the file browser for the archive server file system and select the designated certificate. Click OK to continue.

5. Click OK to start the import. A protocol window shows the progress and the result of the import. To check the protocol later on, see Checking Utilities Protocols on page 222.

6. Select a logical archive in the console tree and select the Certificates tab in the result pane. All available certificates are listed.

7. Check the certificate (fingerprint). See Checking Certificates of an Archive on page 97.

8. Select the imported global certificate and click Enable in the action pane. The global certificate is activated for all archives.

7.2.2.2 Importing a Certificate for a Single Archive

A certificate can also be imported to a single logical archive.

Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.

2. Select the designated logical archive in the console tree and click Import Certificate for Authentication in the action pane. A window to specify the certificate opens.

3. Enter a new ID or select an existing ID if you want to replace an existing certificate.

4. Click Browse to open the file browser for the archive server file system and select the designated certificate. Click OK to continue.

5. Click OK to start the import. A protocol window shows the progress and the result of the import. To check the protocol later on, see Checking Utilities Protocols on page 222.

6. Select the Certificates tab in the result pane. All certificates of the logical archive are listed.

7. Check the certificate (fingerprint). See Checking Certificates of an Archive on page 97.

8. Select the imported certificate and click Enable in the action pane to activate the certificate.

7.2.2.3 Checking Certificates of an Archive

Before you enable a certificate, you should check it carefully.

Proceed as follows:

1. Select the designated archive in Original Archives in the Archives object in the console tree.

2. Select the Certificates tab in the result pane. All certificates of the logical archive are listed.

3. Select the designated certificate and click View Certificate in the action pane.

4. Check the general information and the certification path.

   General
      This tab provides detailed information to identify the certificate unambiguously: the certificate's issuer, the duration of validity, and the fingerprint.
   Certification Path
      Here you can follow the certificate's path from the root to the current certificate. A certificate can be created from another certificate. The path shows the complete derivation chain. You can also view the parent certificate information from here.

7.2.3 Using SecKeys from SAP

SecKeys can be used if the SAP Content Server HTTP Interface 4.5 (ArchiveLink 4.5) is used for communication between the SAP system and the archive server.

Before verification is possible, the SAP system must send the public key to the archive server as a certificate with the OAHT transaction. There, you enter the target archive server and the archives for which the certificate is valid.

To verify the authenticity of the transmitted certificate, the system administrators of the SAP system and the archive server compare the fingerprints of the sent and the received certificates. If the fingerprints match, the archive administrator enables the certificate (see Enabling a Certificate on page 95).

7.2.4 Using SecKeys from Other Leading Applications and Components

SecKeys can also be used to secure communication between Transactional Content Processing, Imaging: Enterprise Scan and Archive and Storage Services. Some client programs of Archive and Storage Services, for example Document Pipeline, also support SecKeys.

The certificate is sent to the archive server with the putCert command or imported with the Import Certificate for Authentication utility (see Importing and Checking Certificates for Authentication on page 96). You can use the certtool utility (command line) to create a certificate, or to generate a request to get a trusted certificate.

You find a description of the certtool utility in the Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031).

If you have to manage a large number of certificates, make sure that the AuthIDs and the distinguished names of the certificates are unique.


Proceed as follows:

1. Create a certificate with the certtool utility (command line), or create the request and send it to a trust center. The <key>.pem file contains the private key and is used to sign the URL. <cert>.pem contains the public key that the archive server uses to verify the signatures.

2. Store the certificate and the private key on the server of your leading application (see the corresponding Administration Guide for details).
   For Enterprise Scan and client programs of Archive and Storage Services, store the certificates in the directories defined in the file <OT config>\Pipeline\config\setup\common.setup. The entry Client Private Key File defines the directory for the key.pem file and the entry Client Certificate File for the cert.pem file. The directory <OT config AS>\seckey\ is entered by default. Correct the path, if necessary, and add the file names.
   By storing the certificates in the file system, they are recognized by Enterprise Scan and the client programs.

   Important
      For security reasons, limit the read permission for these directories to the system user (Windows) or the archive user (UNIX).

3. To import the certificate with the utility, see Importing and Checking Certificates for Authentication on page 96. Repeat this step if you want to use the certificate for several archives.

4. To send the certificate with the putCert command, do the following:

   a. Open a command line, enter the following command and press ENTER:
      C:\>dsh -h <host>
      <host> is the name of your archive server.
      The following prompt is displayed: command: _
   b. Enter the following command and press ENTER:
      setAuthId -I <myserver>
      <myserver> is the name of your leading application server.
   c. Enter the following command and press ENTER:
      putCert -a <archive> -f <file>
      For the <archive> variable, enter the logical archive on the archive server for which the certificate is relevant. Replace the <file> variable with the name of the certificate, i.e. cert.pem.
      If you need the certificate for several archives, call the command again for each archive.
   d. Quit the program with exit.


5. Enable the certificate (see Enabling a Certificate on page 95).
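The following is a consolidated sketch of the putCert session described in step 4. The archive server name myarchive, the application server name myapp01, and the archive names A1 and A2 are hypothetical placeholders; replace them with the names used in your installation.

C:\>dsh -h myarchive
command: setAuthId -I myapp01
command: putCert -a A1 -f cert.pem
command: putCert -a A2 -f cert.pem
command: exit

The putCert command is called once per logical archive for which the certificate should be valid; afterwards, the certificate still has to be enabled as described in step 5.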

7.3 Secure HTTP Communication with SSL


By using SSL (Secure Sockets Layer) for HTTP communication, authorized and
encrypted access to the archive is possible via the network.

7.3.1 SSL Connection to Document Service


SSL can be used to secure communication between clients and Document Service,
including cache servers. This method can be set up individually for each archive.
Note: You can use the ixoscert.pem test certificate that is delivered with the
installation. For security reasons, you should create your own certificate, or
apply for a certificate at a trust center.
Proceed as follows:
1. Create a certificate or a certificate request with the certtool utility (command line). You can find a description in the certtool utility folder in the Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/open/15491558).

2. Copy the generated file (SSLkeycert.pem) to <OT config AS>/setup/.
   Caution
   Do not overwrite the ixoscert.pem file; otherwise, the server will no longer be able to decrypt encrypted documents!
3. Set the path to your PEM file in <OT config AS>\Http.Setup (a filled-in example is shown after this procedure).
   If the certificate and the key are in the same PEM file:
   # SSL configuration
   SSLCertificateFile $(IXOS_SRV_CONFIG)/setup/<PemFileWithKeyAndCertificate.pem>
   If the certificate and the key are in different PEM files:
   # SSL configuration
   SSLCertificateFile $(IXOS_SRV_CONFIG)/setup/<PemFileWithCertificate.pem>
   SSLCertificateKeyFile $(IXOS_SRV_CONFIG)/setup/<PemFileWithKey.pem>

4. Activate SSL communication for the logical archives (see Configuring the Archive Security Settings on page 68).
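For example, if you generated a single SSLkeycert.pem file that contains both the key and the certificate and copied it to <OT config AS>/setup/ as described in step 2, the entry in Http.Setup would look as follows (a sketch; verify the file name against your installation):

# SSL configuration
SSLCertificateFile $(IXOS_SRV_CONFIG)/setup/SSLkeycert.pem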


7.3.2 SSL Connection Using Tomcat Web Server


Administration Server and the Archive Administration client communicate through
the Tomcat Web server. This connection is secured with SSL. The certificate is
delivered with the keystore. If you want to use your own certificate, see
http://jakarta.apache.org/tomcat/tomcat-5.5-doc/ssl-howto.html.

7.4 Encrypted Document Storage


Document data, in particular critical data, can be stored on the storage device in an
encrypted manner. Thus, the documents cannot be read without an archive system
and a key for decryption.
Document encryption is performed during the transfer of the documents from the
buffer to the storage device by the Write job. Documents in the buffer remain
unencrypted.
For document encryption, a symmetric key (system key) is used. You create this key
initially. The system key is stored in the archive server's keystore. It is encrypted on
the archive server with the archive server's public key and can then only be read
with the help of the archive server's private key. RSA is used to exchange the system
key between the archive server and the backup server.
HDSK pools (write through)
HDSK pools do not use a buffer. To encrypt documents, use the designated Compress_ job; see Data Compression on page 64.
Note: HDSK pools are not intended for use in productive archive systems. Use them only for test purposes.

7.4.1 Creating a System Key for Document Encryption


Caution
Be sure to store this key securely, so that you can re-import it if necessary. If the key gets lost, the documents that were encrypted with it can no longer be read!
Do not delete a key when you set a newer one as current; the old key is still used for decryption.
Proceed as follows:
1. Select Key Store in the System object of the console tree.
2. Select the System Keys tab in the result pane.
3. Click Generate System Key in the action pane. A new key is generated.


4. Export the new system key with the recIO command line tool and store it in a safe place (see Exporting and Importing the Key Store on page 103).
5. Make a backup of the key/certificate pair used by recIO to encrypt the keystore: copy the <OT config AS>/config/setup/ixoscert.pem file and store it together with the output of recIO from the preceding step.
6. Select the created system key and click Set as current key. A key can only be set as current key if it has been successfully exported.
   New documents are now encrypted with the current key, while decryption always uses the appropriate key.

The Synchronize_Replicates job updates the keys and certificates first, before it
synchronizes the documents. The system keys are transmitted encrypted.
If you do not want to transmit the system keys through the network, you can also
export them from the original server to an external data medium and re-import
them on the backup server (see Exporting and Importing the Key Store on
page 103).

7.4.2 Activating Encryption for a Logical Archive


Encryption can be activated for each archive individually. By default, it is disabled.
Proceed as follows:
1. Select Original Archives in the Archives object of the console tree.
2. Select the logical archive in the console tree.
3. Click Properties in the action pane.
4. Select the Settings tab in the properties window (see also Configuring the Archive Settings on page 70).
5. Select Encryption and click OK.

7.5 Importing and Checking Encryption Certificates


Encryption certificates are used to encrypt the key store itself and for communication between known servers. For security reasons, Open Text recommends obtaining and importing your own certificate instead of using the delivered one.

7.5.1 Importing Encryption Certificates


With the Set Encryption Certificates utility, you replace the server key and the certificate that is used to encrypt the key store. With a new certificate, you can re-encrypt the key store.


Proceed as follows:
1. Select Key Store in the System object in the console tree.
2. Select the Encryption Certificates tab in the result pane. All available certificates are listed.
3. Click Set Encryption Certificates in the action pane.
4. Enter the path and the complete file name of the certificate, or click Browse to open the file browser. Select the designated certificate and click OK to confirm.
5. Click OK to set the certificate.
6. Check the protocol to verify that the certificate was imported successfully; see Checking Utilities Protocols on page 222.

7.5.2 Checking the Encryption Certificates


If you want to use an imported certificate also for the encrypted data transfer to a
known server (remote standby, backup), you must enable it (see Enabling a
Certificate on page 95). Enabling is not necessary if you use it only for key store
encryption. Before you enable a certificate, check it carefully.
Proceed as follows:
1. Select Key Store in the System object of the console tree.
2. Select the Encryption Certificates tab in the result pane.
3. Select the certificate to check and click View Certificate in the action pane.
4. Check the general information and the certification path.
   General
   This tab provides detailed information to identify the certificate unambiguously: the certificate's issuer, the duration of validity, and the fingerprint.
   Certification Path
   Here you can follow the certificate's path from the root to the current certificate. A certificate can be created from another certificate. The path shows the complete derivation chain. You can also view the parent certificate information from here.

7.6 Exporting and Importing the Key Store


The contents of the key store (all keys) of an archive server can be exported and
imported with the recIO command line tool. The program must be executed
directly on the archive server.


recIO <command> [<options>]

The following commands are available:


L
Lists the contents of the key store (without the keys themselves) in a table.
The user must log on.
Example:
sunny:~> /usr/ixos-archive/bin/recIO L
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Please authenticate!
User     :dsadmin
Password :
idx ID               c x created             imported             origin
---------------------------------------------------------------------------
  1 EA03BDAF9ABB85A1 1 1 2001/01/18 17:26:01 ----/--/-- --:--:--  sunny
  2 1EE312C064A27F73 0 1 2000/11/03 14:28:08 2001/05/14 15:14:52  hausse
  3 3C5DE677C3707700 0 0 2001/01/05 17:52:57 2001/05/14 15:14:52  emma

E
Exports the contents of the key store. Use the export in particular to store the
system keys for document encryption.
The user must log on and specify a path for the export files. The option -t NN:MM
splits the contents of the key store into several different files (MM; maximum 8).
At least NN files must be reimported in order to restore the complete key store.
Example:
sunny:~> /usr/ixos-archive/bin/recIO E -t 3:5
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Please authenticate!
User     :dsadmin
Password :
Writing keystore with 3 system-keys to 5 token-files (3 required to restore)
Token[1/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p1.pem
Token[2/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p2.pem
Token[3/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p3.pem
Token[4/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p4.pem
Token[5/5] (default = /floppy/ixoskey.pem )
File (CR to accept above) : p5.pem

V
Verifies the contents of the key store against the exported files.
The user must log on and specify the path for the exported data. Then the
exported data is compared with the key store on the archive server.
Example:
sunny:~> /usr/ixos-archive/bin/recIO V
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Please authenticate!
User     :dsadmin
Password :
Token[1/?] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/ixoskey.pem)

File (CR to accept above) : p3.pem


key 1 : 1EE312C064A27F73 : OK
key 2 : BEEB5213EF5FFABF : OK
key 3 : 10C8D409E585E43B : OK

D
Displays the information on the exported files. The information is shown in a
table.
Example:
sunny:~> /usr/ixos-archive/bin/recIO D
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Token[1/?] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p3.pem
idx ID               created             origin
----------------------------------------------------
  1 EA03BDAF9ABB85A1 2001/01/18 17:26:01 sunny
  2 1EE312C064A27F73 2000/11/03 14:28:08 hausse
  3 BEEB5213EF5FFABF 2000/11/08 09:26:36 emma

I
Imports the saved key store.
The user must log on and specify the path for the exported data. The data in the
key store is restored, encrypted with the archive server's public key and sent to
the administration server. The results are displayed. Keys already contained in
the archive server's store are not overwritten.
Example:
sunny:~> /usr/ixos-archive/bin/recIO I
recIO 5.0 (C) 2001 IXOS Software AG built May 14 2001
Please authenticate!
User     :dsadmin
Password :
Token[1/?] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/ixoskey.pem)
File (CR to accept above) : p3.pem
ID:BEEB5213EF5FFABF created:2000/11/08 09:26:36 origin:emma
Key already exists
ID:276CBED602BDFC25 created:2001/01/18 12:09:32 origin:arthomasa
Key successfully imported

7.7 Analyzing Security Settings


To get an overview of the security settings for a particular logical archive, use the
Analyze Security Settings utility.
Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Analyze Security Settings utility in the result pane and click Run in the action pane.
3. Enter the name of the archive in the Archive to analyze field.
4. Click Run.
A window with all security settings opens.


Important
For security reasons, you can deactivate this function: In Runtime and Core
Services > Configuration > Archive Server >
AS.DS.SECURITY.GLOBAL_HTTP.SEC_ALLOW_ANALYZE_SEC.
See also:

Utilities on page 221

Checking Utilities Protocols on page 222

7.8 Checksums
Checksums are used to recognize and reveal unwanted modifications to the
documents on their way through the archive. The checksums are not signed, as the
methods used to reveal modifications are directed towards technical failures and not
malicious attacks.
The Enterprise Scan generates checksums for all scanned documents and passes
them on to the Document Service. The Document Service verifies the checksums and
reports errors that occur (see Monitoring with Notifications on page 265). On the
way from the Document Service to the STORM, the documents are provided with
checksums as well, in order to recognize errors when writing to the media.
The leading application, or some client, can also send a timestamp instead of the
checksum. The verification can check timestamps as well as checksums. The
certificates for those timestamps must be known to the archive server and enabled,
before the timestamp checksums can be verified (see Importing a Certificate for
Timestamp Verification on page 110).
You can activate the use of checksums for Document Pipelines on the local server,
defined in the file <OT config>\Pipeline\config\setup\common.setup. Set the
entry Use checksum in DS communication to on.


7.9 Timestamps

Using timestamps, you can verify that documents have not been altered since archiving time. An additional Timestamp Server is required for this (see Configuring the Archive Settings on page 70). Creating a timestamp means: the computer calculates a unique number, a cryptographic checksum or hash value, from the content of the document. The timestamp server adds the time and signs the checksum with its private key. The signature is stored together with the document component. When a document is requested, Archive and Storage Services verifies whether the component was modified after storage by looking at the signature. It needs the public key of the timestamp server certificate for verification. The Windows Viewer and Java Viewer can display the verification result. Archive and Storage Services can use timestamps in two ways:

Document timestamps (old)

ArchiSig timestamps

Document timestamps
Each document component gets a timestamp when it arrives in the archive (more precisely, when it arrives in the disk buffer and is known to the DS). This (old) method requires a huge number of timestamps and can be very expensive, depending on the number of documents. Thus, it is available only for archives that used timestamps in former archive server versions. You can migrate these timestamps to ArchiSig timestamps.

ArchiSig timestamps
With ArchiSig timestamps, the timestamps are not added per document but for containers of documents represented by hash trees.


A job builds the hash tree, which consists of the hash values of as many documents as configured, and signs it with a timestamp. Thus, you can collect, for example, all documents of a day in one hash tree. Only one timestamp per hash tree is required. The verification process needs only the document and the hash chain leading from the document to the timestamp, but not the whole hash tree.

ArchiSig timestamps are less expensive and can be easily renewed. Open Text recommends using this method.
Renewal of timestamps
Electronically signed documents can lose their validity in the course of time, because the availability and verifiability of certificates is limited (depending on regional laws) and the key lengths, certificates, and cryptographic and hash algorithms may become unsafe. Therefore, you can renew the timestamps for long-term stored documents. You should renew the timestamps before:

the certificate is invalid,

the key length is unsafe,


the cryptographic algorithm is unsafe,

the public key method is unsafe.


You need only one new timestamp per hash tree. No access to the documents is necessary. According to the current state of knowledge, the timestamp should be updated every 5 years.
Renewal of hash tree
If documents must be retained a very long time (more than 20 years), the hash algorithm that is used to calculate the hash values may become unsafe. In this rare case, the hash tree must be renewed: the system reads the documents, calculates new hash values and a new hash tree with a new hash algorithm, and signs the new tree with a timestamp. This procedure is very time-consuming (see Renewing Hash Trees on page 113).

Configuration
You can set up signing documents with timestamps and the verification of timestamps, including the response behavior, for each archive (see Configuring the Archive Settings on page 70). Consider the recommendations given above.
Important
Once you have decided to use ArchiSig timestamps, you cannot go back to
document timestamps.
If you use both methods in parallel, the document timestamp secures the document
until the hash tree is built and signed. As this time period is short, an inexpensive
timestamp is sufficient for the documents, while the hash tree gets a timestamp
created with a certificate of an accredited provider. This trusted certificate is used
for verification.

Certificates

An archive server gets the certificates required for verification in different ways:
Timeproof timestamp server and IXOS timestamps
The certificate is automatically stored on the Administration Server during the
first signing process. Thus, the certificates are only shown in the Security tab
after several documents have been signed. If you want the certificates to be
shown before the signing starts, enter in the command line:
For Document timestamps: dsSign -t
For ArchiSig timestamps: dsHashTree -T
Other timestamp servers like AuthentiDate
You import the certificate with the Import Certificate for Timestamp
Verification utility.
See Importing a Certificate for Timestamp Verification on page 110.
After import, check the fingerprint and enable the certificate.

Remote Standby

In a Remote Standby environment, the Synchronize_Replicates job replicates the timestamp certificates. Only enabled certificates are copied. The certificate on the Remote Server is automatically enabled after synchronization.


7.9.1 Importing a Certificate for Timestamp Verification


With the Import Certificate For Timestamp Verification utility, you can import
certificates for timestamp servers like AuthentiDate.
Proceed as follows:
1. Select Key Store in the System object in the console tree.
2. Select the Timestamp Certificates tab in the result pane. The currently available timestamp certificates are listed.
3. Click Import Certificate For Timestamp Verification in the action pane.
4. Enter a new ID, or select an existing ID if you want to replace an existing certificate.
5. Click Browse to open the file browser and select the designated certificate. Click OK to resume.
6. Click OK to start the import.
   A protocol window shows the progress and the result of the import. To check the protocol later on, see Checking Utilities Protocols on page 222.
7. Check whether the certificate is correct (fingerprint). See Checking Certificates for Timestamp Verification on page 110.
8. Select the certificate in the result pane. Click Enable in the action pane to activate the certificate.

7.9.1.1 Checking Certificates for Timestamp Verification


Before you enable a certificate, you should check it carefully.
Proceed as follows:
1. Select Key Store in the System object of the console tree.
2. Select the Timestamp Certificates tab in the result pane.
3. Select the certificate to check and click View Certificate in the action pane.
4. Check the general information and the certification path.
   General
   This tab provides detailed information to identify the certificate unambiguously: the certificate's issuer, the duration of validity, and the fingerprint.
   Certification Path
   Here you can follow the certificate's path from the root to the current certificate. A certificate can be created from another certificate. The path shows the complete derivation chain. You can also view the parent certificate information from here.

7.9.2 Configuring ArchiSig Timestamps


Proceed as follows:
1. Select Configuration in the Runtime and Core Services object in the console tree.
2. Select Archive Server.
3. Enter settings (example values are shown after this procedure):
   Minimum number of components per hash tree:
   AS.DS.COMPONENT.ARCHISIG.TS_MINCNT
   The number of document components that are required to build a new hash tree. In other words, this is the minimum number of document components signed with one timestamp. As a rough rule of thumb, you can enter 2/3 of your daily average number of document components to get one hash tree per day.
   Port / Hostname of the timestamp server:
   AS.DS.COMPONENT.ARCHISIG.TS_HOST
   AS.DS.COMPONENT.ARCHISIG.TS_PORT
   Enter the name and the port of your timestamp server.
4. Check the other values. Usually, you can use the default values.
5. Select the Timestamp Certificates tab in Key Store in the System object of the console tree.
6. Select the Timestamp Certificates you want to use and click Enable in the action pane.
7. In the Archives object of the console tree, create a new archive with the name ATS and a pool to define where the hash trees are stored.
8. For each archive that uses timestamps:
   a. In the Archives object of the console tree, select the archive.
   b. Click Properties in the action pane.
   c. Select ArchiSig timestamps with your preferred verification mode.
9. In Jobs in the System object of the console tree, create jobs to build the hash trees. You need one job for each archive that uses timestamps.
   See also: Configuring Jobs and Checking Job Protocol on page 83.
   Command
   hashtree
   Arguments
   Archive name
   Scheduling
   If you use ArchiSig timestamps, schedule a nightly job. If the hash trees are written to a storage system, make sure that the job is finished before the Write job starts.
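As an illustration with hypothetical figures: if roughly 30,000 document components are archived per day, the 2/3 rule of thumb from step 3 gives a minimum of 20,000 components per hash tree, so that approximately one hash tree is built per day. The values below are illustrative only and are shown as key/value pairs for readability; in practice you enter them in Administration Client as described above, and the host name is a placeholder:

AS.DS.COMPONENT.ARCHISIG.TS_MINCNT = 20000
AS.DS.COMPONENT.ARCHISIG.TS_HOST   = tsserver.example.com
AS.DS.COMPONENT.ARCHISIG.TS_PORT   = 32001

The port 32001 corresponds to the default port of Open Text Timestamp Server (see Timestamp Server later in this chapter); if you use a different timestamp service, enter its port instead.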

7.9.3 Migrating Existing Document Timestamps


You can migrate existing document timestamps into hash trees and sign the tree
with a timestamp. Thus, you can significantly reduce the number of timestamps
required for timestamp renewal.
Important
You can migrate document timestamps only once! Never disable ArchiSig
timestamps after starting migration.

Proceed as follows:
1. Configure as described in Configuring ArchiSig Timestamps on page 111.
2. In a command line, call the timestamp migration tool for each pool to be migrated:
   dsReSign p <pool name>
3. Call the hash tree creation tool for each archive with migrated timestamps:
   dsHashTree <archive name>

The tools calculate hash values from the existing timestamps, build hash trees and
get a timestamp for each tree.

7.9.4 Renewing Timestamps of Hash Trees


If the timestamp of a hash tree is about to become invalid, you can re-sign the tree with a new timestamp.
Proceed as follows:
1. Configure a new certificate on your timestamp server, make sure that it is available for the archive server, and enable it in the Timestamp Certificates tab in Key Store in the System object of the console tree.
   Details: Timestamps on page 107.
2. In a command line, enter:
   dsHashTree show names
3. In the resulting list, find the distinguished subject name(s) of your timestamp service (subject of the service's certificate).
4. In a command line, enter:
   dsHashTree -a <ArchiveName> -s <DistinguishedNameOfOldCertificate>

The process finds all timestamps that were created with the certificate indicated in
the command. It calculates hash values for the timestamps and builds new hash
trees. Each hash tree is signed with a new timestamp.
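As an illustration only, with a hypothetical archive name A1 and a hypothetical distinguished name of the expiring certificate, the call in step 4 could look like this (depending on your shell, the distinguished name may need to be enclosed in quotation marks):

dsHashTree -a A1 -s "CN=Timestamp Service,O=Example Trust Center,C=DE"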

7.9.5 Renewing Hash Trees


In rare cases, the hash algorithm that was used to calculate the hash values may become unsafe. You can rebuild the hash tree with a new algorithm. This is a very time-consuming process. To keep the documents safe while renewing the hash tree, two hash trees are always built in parallel with different algorithms. If one hash algorithm becomes unsafe, you can use the other hash tree for verification while renewing the unsafe hash tree.
Proceed as follows:
1. To create the hash trees for the new documents, open a command line and execute the command
   > dsHashTree -m 1 [archive]
   for each logical archive using ArchiSig.
2. Select Configuration in the Runtime and Core Services object in the console tree.
3. Select Archive Server.
4. Change the unsafe hash algorithm (main or additional):
   AS.DS.COMPONENT.ARCHISIG.TS_HASHALG
   or
   AS.DS.COMPONENT.ARCHISIG.TS_HASHALG2
5. Restart the Spawner.
6. In the Archives object of the console tree, create a new archive with the name ATSC and a pool (HDSK).
7. In a command line, enter for each archive:
   dsReHashTree <OldHashAlgorithm> <NewHashAlgorithm> <archive name>

The process reads and re-hashes all documents in the specified archive and creates
new hash trees in the ATS archive. It writes the information required for verification
to the attrib.atr files of the documents and stores the updated files in the ATSC
archive. Both archives are necessary to verify the timestamps.
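A sketch of the command-line part of this procedure, using the hypothetical archive name A1 and the hypothetical algorithm identifiers SHA1 and SHA256 (the identifiers accepted by your installation may differ):

dsHashTree -m 1 A1
dsReHashTree SHA1 SHA256 A1

The configuration change (step 4), the Spawner restart (step 5), and the creation of the ATSC archive (step 6) must be carried out between these two calls.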


7.10 Timestamp Server


7.10.1 Overview
To put a timestamp on every document, Archive and Storage Services needs a service from which it can request a timestamp for each document. This can be a special hardware device or Open Text Timestamp Server. Because special hardware is very expensive, this program allows you to use the timestamping features of Archive and Storage Services at no cost. However, it does not provide the same high security level as a hardware device.
Timestamp Server is installed and configured together with Archive and Storage Services. It handles the incoming requests, creates the timestamps and sends the reply. It runs as an Archive and Storage Services component.
After the installation of Archive and Storage Services, Timestamp Server is ready to use with a default signature key and certificate. Nevertheless, it is recommended to create your own keys and certificates (see Configuring Certificates and Signature Keys on page 121).
Timestamp Server Administration
Timestamp Server Administration allows you to configure Timestamp Server, generate signature keys and certificate requests, and view information about Timestamp Server's status. See Timestamp Server Administration on page 126. Select Programs > Open Text > Enterprise Library Services > Timestamp Control Client to start it.
In environments where an automatic initialization after the start of Timestamp Server is vital, the auto-initialization mode can be used. All necessary information must be written into the configuration: the paths to the certificates and the signature key, including the passphrase. However, this method provides no security against an intruder with read access to the server configuration.

7.10.2 Configuring Timestamp Server


After installation, basic settings of Timestamp Server are pre-set. You can configure
the settings if necessary.
The basic settings of Timestamp Server can be configured with Administration
Client or with Timestamp Server Administration.

7.10.2.1 Configuring Basic Settings with Timestamp Server Administration
Proceed as follows:
1. Start Timestamp Server Administration and click Options.
   A window opens in which you can check and modify the parameters that control the behavior of Timestamp Server and the environment for Timestamp Server Administration. Changes made in this window are not used until Timestamp Server is restarted.
2. Enter settings and click OK.
   To restart Timestamp Server, open a command line and enter:
   spawncmd restart timestamp

Location
Every timestamp must contain information about the document and the
current time. Timestamps in the SigI-A4 format must also contain
information about Timestamp Server's geographic location. Supply your
location in a suitable format like <city>, <country>. The minimum length of
this string is 3 characters.
Server
This is the hostname of the computer on which Timestamp Server runs.
Port
The one and only communication interface of Timestamp Server is a TCP
port. Timestamp requests sent to this address will be processed if Timestamp
Server is running and configured. Therefore, you must specify the port
number. The default value is 32001; any number between 1 and 32767 might
work unless another process is using that port. Ports up to 1024 can only be
used if Timestamp Server runs with root privileges. When in doubt, contact
your system administrator.
Warning
A notification will be sent a given number of hours before the timeout is
reached. The status of the Timestamp service icon in Monitor Web Client
will change to warning. A setting of 0 disables this feature. See also
Creating and Modifying Notifications on page 269.
Time display
The main dialog retrieves the time from Timestamp Server and displays it
permanently. It can show the time as GMT (Greenwich Mean Time), or as a
local time representation, or both formats at the same time.
Signature Key File
For a full configuration, you can leave this entry empty for now. If you want
to do a quick start, select the file <OT config AS>/timestamp/stampkey.pem. The passphrase for this key file is ixos.
Change Passphrase
You can change the passphrase, which protects the signature key. If you
change the passphrase, the key file will be re-written.
Note: Any older copy of that file will still be usable with the old
passphrase.


Timeout
Because the internal clock of a computer has limited precision, this setting
provides a possibility to set a timeout period in hours after which
Timestamp Server refuses to timestamp incoming requests. The timeout
counter is reset every time you transmit the signing key as described in
Timestamp Server Administration on page 126. A timeout setting of 0 will
disable this feature and leave the server running unlimited.
Administration
If Timestamp Server is installed on a Windows platform, Timestamp Server
Administration can be installed on the same machine. Otherwise, it can be
installed on a remote computer to do the administration via remote access.
Configuration requests will only be accepted by Timestamp Server if the
remote host is specified in this line. Multiple hostnames and IP addresses
must be separated by semicolons (;). If no host is supplied, only local
administration is possible.
Allow remote administration from any host
This is not recommended! Selecting this check box causes Timestamp Server
to accept configuration requests from any host. Only use this for debugging
or experimental purposes!
Timestamp Policy
Timestamps in the PKIX format (RFC 3161) contain an object identifier
(OID), which defines a timestamp policy. Leave the default value
(1.3.6.1.5.7.7.2) unless you know exactly what you need.
Notification
A given time in days before the first of all certificates expires, Timestamp
Server starts sending one notification a day to remind the administrator.
Passphrase(!)
This entry is needed for auto-initialization. If you enter a passphrase here, it
will be stored in Timestamp Server's configuration in an encrypted format.
At startup time, Timestamp Server can read and decrypt this passphrase and
use it to decode the signature key and initialize itself.

AR090701-ACN-EN-6

Administration Guide

117

Chapter 7 Configuring Security Settings

Hash Algorithm
If a certain hash algorithm is specified here, Timestamp Server will use that
algorithm to create the signatures. The default setting is same as in TS
request which causes Timestamp Server to use the same hash algorithm for
the signature as the one specified in the timestamp request it receives from
Archive and Storage Services.
Protocol file location
The path of the protocol file location.
Note: The path for the protocol file must exist or no protocol file will be
written. When starting up, Timestamp Server reads the last serial
number issued and continues timestamping with the next serial
number. If no logfile exists, Timestamp Server begins assigning timestamps with serial number 1 after each startup.
Maximum size
A maximum file size in kilobytes can be specified here. The protocol file will
be renamed to <filename>.old if its size exceeds the given value. A
previous old-file will be overwritten. If a size of 0 is specified, the protocol
file will grow infinitely.

7.10.2.2 Configuring Special Settings with Administration Client


Proceed as follows:
1. Start Administration Client.
2. Select Configuration in the Runtime and Core Services object in the console tree.
3. Select Archive Server.
4. Enter settings and click OK.


General Installation Variables
These read-only variables show information about the installation.
Timestamp Service Configuration
File for the timestamp protocol
AS.TSTP.IXTKERNEL_VARS.TSTP_PROTOCOL_FILE
For each issued timestamp, an entry is made in this file.
Maximum size of the protocol-file
AS.TSTP.IXTKERNEL_VARS.TSTP_MAX_KB
A maximum file size in kilobytes can be specified here. The protocol file
is renamed to <filename>.old if its size exceeds the given value. A
previous old-file will be overwritten. If a size of 0 is specified the protocol
file will grow infinitely.


Host to accept configuration requests from


AS.TSTP.IXTKERNEL_VARS.TSTP_ADMIN_HOSTS
Timestamp Server Administration can initialize Timestamp Server on this
server from a different computer. Configuration requests will only be
accepted from a remote host if it is specified in this line. Multiple
hostnames and IP-addresses must be separated by semicolons (;). If no
host is supplied, only local initialization is possible.
Allow remote administration from any host
AS.TSTP.IXTKERNEL_VARS.TSTP_PUBLIC_ADMIN
This is not recommended! Selecting this checkbox causes Timestamp
Server to accept configuration requests from any host. Only use this for
debugging or experimental purposes!
TCP port for Timestamp Server
AS.TSTP.IXTKERNEL_VARS.TSTP_SERVER_PORT
The one and only communication interface of the running Timestamp
Server is a TCP port. Timestamp requests sent to this address will be
processed if Timestamp Server is running and configured. Therefore, you
must specify the port number. The default value is 32001; any number
between 1 and 32767 might work unless another process is using that
port. Ports up to 1024 can only be used if Timestamp Server runs with
root privileges. When in doubt, contact your system administrator.
Timeout
AS.TSTP.IXTKERNEL_VARS.TSTP_ACK_INTERVAL
Because the internal clock of a computer has limited precision, this
setting provides a possibility to set a timeout period in hours after which
the server refuses to timestamp incoming requests. The timeout counter
is reset every time you transmit the signing key as described in
Timestamp Server Administration on page 126. A timeout setting of 0
will disable this feature and leave the server running unlimited.
When to warn before the timeout is reached
AS.TSTP.IXTKERNEL_VARS.TSTP_ACK_WARN
A notification will be sent to the Notification Server a given number of
hours before the timeout is reached. The status of the Timestamp service
icon in Monitor Web Client will change to 'warning'. A setting of 0
disables this feature.
Note: You can configure the Notification Server in the Archive
Administration in the Notifications tab.
Days to warn before a certificate expires
AS.TSTP.IXTKERNEL_VARS.TSTP_CERT_EXPIRE_WARN
A given time in days before the first of all certificates expires, Timestamp
Server starts sending one notification a day to remind the administrator.


Policy OID for IETF timestamps


AS.TSTP.IXTKERNEL_VARS.TSTP_POLICY_OID
Timestamps in the PKIX format (RFC 3161) contain an object identifier
(OID) which defines a timestamp policy. Leave the default value
(1.3.6.1.5.7.7.2) unless you know exactly what you need.
Enforce usage of the following hash-algorithm for TS Signatures
AS.TSTP.IXTKERNEL_VARS.TSTP_FORCE_HASH_ALG
If a certain hash algorithm is specified here, Timestamp Server will use
that algorithm to create the signatures. The default setting is same as in
TS request which causes Timestamp Server to use the same hash
algorithm for the signature as the one specified in the timestamp request
it receives from Archive and Storage Services.
Configuration for Autostart
Location
AS.TSTP.AUTOSTART_VARS.TSTP_LOCATION
Every timestamp must contain information about the document and the
current time. Timestamps in the SigI-A4 format must also contain
information about Timestamp Server's geographic location. Supply your
location in a suitable format like <city>, <country>. The minimum
length of this string is 3 characters.
Path to the private key file
AS.TSTP.AUTOSTART_VARS.TSTP_SIGNATURE_KEY
The location of the signature key file (in PEM format).
Plaintext Passphrase for the private key
AS.TSTP.AUTOSTART_VARS.TSTP_PLAIN_PASSPHRASE
The passphrase with which the signature key is protected. The
passphrase for the sample key is ixos. This setting is deprecated because it
stores the passphrase without encryption. You should use Passphrase
for the private key instead.
Passphrase for the private key
AS.TSTP.AUTOSTART_VARS.TSTP_KEY_PASSPHRASE
The passphrase with which the signature key is protected. The
passphrase for the sample key is ixos. The input you give in this box will
be encrypted.
Note: Specify only one of the two items above. If both are given, the server tries the unencrypted passphrase first.
Path to the certificate <n>
AS.TSTP.AUTOSTART_VARS.TSTP_CERTIFICATE<nn>
The certificate hierarchy beginning with the root authority.


Script for Monitor Web Client


What kind of Timestamp Server the script should expect
AS.TSTP.IXTWATCH_VARS.IXTWATCH_TS_SYSTEM
Monitor Web Client can display the status of either Timestamp Server,
the timeproof TSS80 system or the AuthentiDate timestamping system.
Hostname of Timestamp Server
AS.TSTP.IXTWATCH_VARS.TSTP_HOST
The name of the computer where the script tries to contact Timestamp
Server. This can be a remote machine. If this item is not set, localhost is
used instead.
Log file configuration
These settings specify the level of detail written in the log files. They apply to
the components ixTkernel (Timestamp Server), ixTstamp (Timestamp
Server Administration) and ixTwatch (the adapter for Monitor Web Client).

7.10.3 Configuring Certificates and Signature Keys


After the installation of Archive and Storage Services, Timestamp Server is ready to
use with default signature keys and certificates. However, it is recommended to
create your own signature keys and certificates. Timestamp Server needs certificates
that fit into a hierarchy to run properly.
Proceed as follows to use your own signature keys and certificates:
1.

Generate new signature keys (see Generating a New Signature Key on


page 121).

2.

Generate a request file (see Generating a New Request on page 123).

3.

Apply for a certificate at a trust center.

4.

Add new certificates (see Adding New Certificates on page 125).

5.

Import the certificate in the Key Store to use it (see Importing a Certificate for
Timestamp Verification on page 110).

7.10.3.1 Generating a New Signature Key


Timestamp Server needs a signature key-pair to work properly. This key-pair
consists of a private key, used to sign the timestamps, and a public key, used to
verify the timestamps. The public key is published in an X.509 certificate. The
private key must be kept secret and will therefore be encrypted. It is stored in
PKCS#1 format.


Proceed as follows:
1. Start Timestamp Server Administration and click Certificates.
   The Certificates window opens.
2. Click Generate keys. The Generate new key pair window opens.
3. Enter settings:
   Passphrase
   Enter the passphrase twice. This passphrase will be used to encrypt the key-pair before storing it in a file.


Caution
The program can decrypt the key-pair only if you supply the
passphrase, so do not forget it. Timestamp Server cannot create
timestamps without it. The usual good advice for password selection
and handling applies: use a difficult password, do not write it down!
Key length
At least 1024 bits are recommended. Longer keys increase security and
validity time of the issued timestamps, but they also increase the time
needed to sign and verify those timestamps.
RSA/DSA
Selects the signature algorithm for which the key will be generated. RSA is
recommended since not all trust centers support DSA.
4. Click Start to generate the key. This may take several minutes depending on the key length and your machine's computing power. Generating a 2048 bit DSA key on a P133 can take almost one hour!

After key generation, you will be asked where to store the key. You are basically free to select the location. Two locations make special sense:

In the <OT config AS>/timestamp/ directory. Easy to find, but also readable by an attacker.

On a memory stick or a floppy disk. The floppy disk can be removed and stored in a secure place. However, it is needed every time the key-pair is sent to Timestamp Server, i.e. every time you start Timestamp Server and every time the timeout expires.

Auto-initialization
If your Timestamp Server runs on a machine different from the one where you run Timestamp Server Administration, you must copy the file containing the key to a directory on the machine where Timestamp Server runs. This is typically the <OT config AS>/timestamp/ directory. Then you can configure Timestamp Server to use the signature key from that file in the configuration as described in Configuration for Autostart on page 120.

7.10.3.2 Generating a New Request


You must apply for a certificate for Timestamp Server's public key at a trust center.
This is usually done by submitting a PKCS#10 request. In the Generate certificate
signing request dialog, you can supply the required information and generate a
PKCS#10 request. The fields Country, Organization and Common Name are
mandatory; Organizational Unit, State / Province, Location and Email are optional.
Proceed as follows:
1. Start Timestamp Server Administration and click Certificates.
2. Click Generate Request. The Generate certificate signing request window opens.
3. Enter settings. The fields Country, Organization and Common Name are mandatory. Common Name should be the fully qualified hostname of Timestamp Server. Organizational Unit, State / Province, Location and Email are optional.
4. Click Generate Request to start.
   If you have not used your passphrase since you started Timestamp Server Administration, you will be asked for the passphrase now. If you stored the key-pair on a memory stick or a floppy disk, make sure that the disk is inserted. The program needs the private key to sign the certificate request.
5. Enter a filename and save the file. The contents of the file should look something like this:
   -----BEGIN CERTIFICATE REQUEST-----
   MIICaDCCAiQCAQEwYzELMAkGA1UEBhMCREUxGTAXBgNVBAoTEElYT1MgU09GVFdB
   UkUgQUcxDjAMBgNVBAsTBVRTMDAxMQ8wDQYDVQQHEwZNdW5pY2gxGDAWBgNVBAMT
   ...
   I/ofikRvFV+fnw/kkddqr7VdNMH2oOHlozmgADALBgcqhkjOOAQDBQADMQAwLgIV
   AJPkQtYi7uSSA3II6xeG6ucxJNz0AhUAh3acSLKnILYwnqdR7Vz8/R0b53s=
   -----END CERTIFICATE REQUEST-----
6. Use the request in the file to apply for a certificate at a trust center in a PEM file format.


7.10.3.3 Removing Certificates


Certificates that are no longer used can be removed. You must also remove certificates before you add new ones.
Proceed as follows:
1. Start Timestamp Server Administration and click Certificates.
2. Select the certificate that should be removed.
3. Click Remove Certificate.
4. Click Yes to confirm.

7.10.3.4 Adding New Certificates


If you have created your own keys, applied for a certificate at a trust center, and have the certificate available in PEM file format, you must supply these to Timestamp Server.
A certificate contains a user's or server's public key and is therefore needed to verify
digital signatures. Timestamp Server supports requests for those certificates needed
to verify the digital signature in a timestamp and, recursively, also to verify any
digital signature in the certificates used for the verification. Typically, there are two
or three certificates:

The trust center certificate (CA)

The Timestamp Server certificate

or

The Root Authority certificate (root)

The trust center certificate (CA)

The Timestamp Server certificate


Note: If your Timestamp Server runs in auto-initialization mode on a machine
different from the one where you run Timestamp Server Administration, you
must copy the files containing your certificates to a directory on the machine
where Timestamp Server runs. This is typically the <OT config
AS>/timestamp/ directory. Then you can make a link in the configuration as
described in Configuration for Autostart on page 120.

Proceed as follows:
1. Start Timestamp Server Administration and click Certificates.
2. Select the old certificates and click Remove Certificate. Click Yes to confirm.
3. Click Add Certificate. A window to select a certificate in PEM format opens.


4. Add certificates. Start with the self-signed root certificate (either issued by the trust center for itself or issued by the root authority for itself). The program will complain if the order is not correct. A dialog displays the properties of each certificate you are about to install.
5. Verify this information thoroughly, especially the Valid not before and Valid not after items.
6. Click Yes to confirm that you want to use this certificate. The certificate will be copied to the application directory.
   Note: The program checks the certificate's Valid not before and Valid not after specifications and rejects it if it is not valid.

7.11 Timestamp Server Administration


Timestamp Server is easy to administer with Timestamp Server Administration. The program allows you to monitor the status of Timestamp Server and provides functions to configure it.
For detailed information about the Options window, see Configuring Basic
Settings with Timestamp Server Administration on page 114.
For detailed information about the Certificates window, see Configuring
Certificates and Signature Keys on page 121.


7.11.1 Checking the Status and Restarting Timestamp Server


The Status display indicates whether Timestamp Server Administration was able to contact Timestamp Server. If the server is reachable, the status is running and Timestamp Server's system time is displayed. If the server could not be contacted, the status is not running and the Service's System Time field indicates this instead of showing the time.

Note: If Timestamp Server for some reason does not grant you access for
configuration requests, the server's system time is displayed but the status
values for Signature key, Certificates, Location, and Time only show a
question mark.
If you are performing remote administration (i.e. with Timestamp Server
Administration on your local host and Timestamp Server on another
computer), make sure that the correct hostname for the administration host is
entered on the computer that runs Timestamp Server (see Configuring Basic
Settings with Timestamp Server Administration on page 114).
The following steps are recommended:
1. Make sure that Timestamp Server is running.
2. Start Timestamp Server Administration and click Options.
   Make sure that the Server entry contains the hostname of the machine on which Timestamp Server runs. This is your local machine's name unless you want to remotely administer a Timestamp Server on a different computer. In this case, also verify that the Port is the same on the machine that runs Timestamp Server.
3. If you still cannot get Timestamp Server to run, open a command prompt window, go to the <OT install>/bin directory and type
   >> ixTkernel -debug
   The debug output should give you a hint why Timestamp Server refuses to start.
Checking the status via Web browser

The general status of Timestamp Server together with some details about its
configuration can also be retrieved and displayed with a standard Web browser.
Use the following URL:
http://<servername>:<port>

As <servername> use the machine name of Timestamp Server and as <port> use
the configured port. (The default port is 32001.)
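For example, for a Timestamp Server running on a host named tsserver01 (a hypothetical name) with the default port, the URL would be:

http://tsserver01:32001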


Note: The status can only be retrieved on machines that are configured as
Administration hosts in Timestamp Server setup. If Allow remote
administration from any host is selected, the Web status can be used on any
host, of course.
There is a link to Timestamp Server's logfile. Following this link may take some time
if the logfile is large. Your browser may even hang or crash if the logfile is too large.
This is not a bug in the server software!

7.11.2 Transmit Parameters


After starting Timestamp Server, several configuration requests must be sent to
Timestamp Server: one for the location, one for the signature key-pair and one for
each certificate. To read the key-pair from the file and decrypt it, you must supply
the passphrase. If you are using the default key file for the quick start, the
passphrase is ixos. However, the program does not transmit the key-pair in plain
format. It again encrypts it for the transfer.
Proceed as follows:
1. Start Timestamp Server Administration and click Transmit Parameters.
2. Check whether the displayed time is correct. If not, cancel this dialog and adjust the time for Timestamp Server first (see Checking and Adjusting the Time on page 129).
3. Enter the passphrase and click OK.

7.11.3 Open Logfile


Timestamp Server writes one line containing the serial number of the timestamp
and other information to its protocol file for each timestamp issued. <OT
logging>/ixTkernel.hist is the file's default location which can be overwritten in
Timestamp Server's configuration. When starting up, Timestamp Server reads the
last serial number issued and continues timestamping with the next serial number.


The protocol file opens in notepad.exe in case of local administration. In case of remote administration of Timestamp Server, the default HTML browser is used.
Proceed as follows:
1. Start Timestamp Server Administration.
2. Click Open Logfile.

7.11.4 Checking and Adjusting the Time


Timestamp Server is unable to determine whether the machine it is running on has
the correct time. Unless Timestamp Server is running in auto-initialization mode,
the system time is not accepted before Timestamp Server receives its signature key-pair. This is why you confirm that the displayed time is correct by entering your
passphrase and thus decoding the key file. The status is valid after this
confirmation. If a Timeout period > 0 is given in the Options dialog (see
Configuring Basic Settings with Timestamp Server Administration on page 114), a
timer will start to count until the end of that period. A configurable number of hours
before the timer reaches the timeout, the status for Time will also display the hours
and minutes remaining. Timestamp Server continues to timestamp incoming
requests until the timeout is eventually reached. You have the possibility to reset the
timeout counter as described below.
After the full timeout period has passed without any transmission of the signature
key, the status becomes invalid and Timestamp Server refuses to timestamp any
incoming requests.
If Timestamp Server detects a manipulation of the system time, it will immediately
stop issuing timestamps. The status check shows invalid within the next minute
(the status is requested and updated every 60 seconds).
Note: Time adjustment is not possible when Timestamp Server runs in auto-initialization mode and the configuration has been set up outside Timestamp
Server Administration. In this case, the system time must be maintained on the
server, and Timestamp Server must be restarted if the system time has been set
back.
Proceed as follows:
1. Make sure that the system time on the server is correct.
2. Start Timestamp Server Administration.
3. Re-configure the timeout if necessary (see Configuring Basic Settings with Timestamp Server Administration on page 114).
4. Click Adjust Time and correct Timestamp Server's time if necessary. The time can be entered in either GMT or the local time representation.
5. Click OK to send this new time and date to Timestamp Server.
6. Click Transmit Parameters again and provide your passphrase when asked (see Transmit Parameters on page 128).

7.11.5 Checking the Current Signature Key and Certificates


Signature key
Once Timestamp Server is connected to Timestamp Server Administration, the status of the signature key is requested every minute. After a fresh start of Timestamp Server, no signature key is available and the status is invalid. After you transmitted the signature key along with the certificates and the location, the status changes to valid (see Transmit Parameters on page 128).

Certificates
The certificates status reflects whether Timestamp Server has accepted the
certificates and a key-pair that matches the public key in the server's certificate.
After a fresh start of Timestamp Server, no certificates are available and the
certificates status will be not set. After you transmitted a set of valid certificates
(see Transmit Parameters on page 128) along with the signature key and the
location, the status should change to set.
No timestamps must be issued at a time when a certificate required for verification
of that timestamp has expired. Therefore, Timestamp Server checks the validity
dates of its certificates against the system time for every timestamp. It sends a
notification every 24 hours starting a configurable number of days before a
certificate expires.
In case of problems, try the following steps:


1. Start Timestamp Server Administration.
2. Make sure that Timestamp Server is running and can be contacted. The Status must be running (see Checking the Status and Restarting Timestamp Server on page 127).
3. Click Certificates. Right-click the certificate to check and select View.
   Ensure that all certificates are valid (not expired) and the server has the correct time.
4. In the Certificates dialog, click Verify Path.
   First, the program compares the server's public key with the public key in the server's certificate. The two should match; otherwise, the error message Signature key could not be verified is displayed.
   Second, it is verified that every certificate is currently valid and has not expired. Otherwise, A certificate has expired is displayed.
   Finally, all certificates are verified with the issuer's public keys (taken from the issuer's certificates). If this fails, the error message Verification of certification path failed is displayed.
5. If you receive errors, check whether the signature keys, the certificates, the location and the time settings are configured correctly (see Configuring Certificates and Signature Keys on page 121, Checking the Location on page 131, Checking and Adjusting the Time on page 129).
6. Click Transmit Parameters again and provide your passphrase when asked (see Transmit Parameters on page 128).

If no error occurs and you see the message Certification path verified
successfully, the configuration is correct and can be used to run Timestamp
Server.

7.11.6 Checking the Location


SigI-A4 timestamps must contain information about the document, the current time, and Timestamp Server's geographic location. If a string with a minimum length of 3 characters has been transferred to Timestamp Server, the location status should be provided.
If not, try the following:

1. Start Timestamp Server Administration.
2. Make sure that Timestamp Server is running and can be connected. The Status must be running. If not, see Checking the Status and Restarting Timestamp Server on page 127.
3. Click Options and enter an appropriate location. Click OK.
4. Click Transmit Parameters and provide your passphrase when asked (see Transmit Parameters on page 128).


Chapter 8

Configuring Users, Groups and Policies


Archive and Storage Services needs a few specific administrative users for proper operation. They are managed in the System object of the archive server. The required settings are preset during installation. Use the user management in the following cases:

You want to change the password of the dsadmin administrator of the archive server.
  Important: See Password Security and Settings below for additional information on passwords.
You need a user with specific rights.
You want to change settings of users, groups or policies.

The productive users of the leading application are managed in other user management systems.

8.1 Password Security and Settings


To secure the system, Open Text strongly recommends the following:

Change the password for the administrative users after installation, e.g. dsadmin and dp*, if pipelines are in use.
Change the password regularly.
In case the administrator password has been lost: contact Open Text Customer Support to create an initial password for the archive administrator.

Password settings
  You can specify a minimum length for passwords, whether a user is locked out after several unsuccessful logons, and how long the lockout lasts.

Minimum length for passwords
  You can define a minimum character length for passwords. If you do not set this property, the default value is eight.
Proceed as follows:

1. From the <OT config AS>\setup directory, open the DS.Setup file in a text editor.
2. Enter the following line (or modify it if present already):
   DS_MIN_PASSWD_LEN=<required password length>
3. Save the file.
4. Restart WC. In a command window, enter spawncmd restart dswc

Lock out after failed logons
  You can define that a user is locked out after a specified number of failed attempts to log on; default is 0 (no lockout).
  Note: The dsadmin user will never be locked out.
  Proceed as follows:
  1. From the <OT config AS>\setup directory, open the DS.Setup file in a text editor.
  2. Enter the following line (or modify it if present already):
     DS_MAX_BAD_PASSWD=<number of failed attempts>
  3. Save the file.
  4. Restart WC. In a command window, enter spawncmd restart dswc

Unlock after failed logons
  You can define how long a user is locked out after a failed attempt; default is zero seconds.
  Note: The dsadmin user will never be locked out.
  Proceed as follows:
  1. From the <OT config AS>\setup directory, open the DS.Setup file in a text editor.
  2. Enter the following line (or modify it if present already):
     DS_BAD_PASSWD_ELAPS=<unlock time in seconds>
  3. Save the file.
  4. Restart WC. In a command window, enter spawncmd restart dswc
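For orientation, a DS.Setup fragment combining the three settings might look as follows; the values are examples only and must be adapted to your own security policy:

   DS_MIN_PASSWD_LEN=10
   DS_MAX_BAD_PASSWD=5
   DS_BAD_PASSWD_ELAPS=900

With these example values, passwords must be at least 10 characters long, a user is locked out after 5 failed logon attempts, and the lockout is lifted after 900 seconds. As described above, restart WC afterwards with spawncmd restart dswc.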

8.2 Concept
Modules
  To keep administrative effort as low as possible, the rights are combined in policies and users are combined in user groups. The concept consists of three modules:

  User groups
    A user group is a set of users who have been granted the same rights. Users are assigned to a user group as members. Policies are also assigned to a user group. The rights defined in the policy apply to every member of the user group.
  Users
    A user is assigned to one or more user groups and is allowed to perform the functions that are defined in the policies of these groups. It is not possible to assign individual rights to individual users.
  Policies
    A policy is a set of rights, i.e. actions that a user with this policy is allowed to carry out. You can define your own policies in addition to using predefined and unmodifiable policies.

Standard users
  During the installation of Archive and Storage Services, some standard users, user groups and policies are preconfigured:

  dsadmin in aradmins group
    This is the administrator of the archive system. The group has the ALL_ADMS policy and can perform all administration tasks, view accounting information, and start/stop the Spawner. After installation, the password is empty; change it as soon as possible, see Creating and Modifying Users on page 138.
  dpuser in dpusers group
    This user controls the DocTools of the Document Pipelines. The group has the DPinfoDocToolAdministration policy. The password is set by the dsadmin user, see Creating and Modifying Users on page 138.
  dpadmin in dpadmins group
    This user controls the DocTools of the Document Pipelines and the documents in the queues. The group has the ALL_DPINFO policy. The password is set by the dsadmin user, see Creating and Modifying Users on page 138.

8.3 Configuring Users and Their Rights


If you need an additional user with specific rights (for example, if the administrator of Open Text DesktopLink is not allowed to use the dsadmin user to upload the client's configuration profiles), carry out the following steps:

1. Create and configure the policy, see Creating and Modifying Policies on page 137.
2. Create the user, see Checking, Creating and Modifying Users on page 137.
3. Create and configure the user group and add the users and the policies, see Checking, Creating and Modifying User Groups on page 139.

8.4 Checking, Creating and Modifying Policies


In a policy, you define which functions are allowed to be carried out. You can create
your own policies and associate them with a combination of rights of your choice.
When creating or modifying a policy, consider that the configuration applies to all
members of user groups to which the policy is assigned (group concept).

Note: The standard policies are write-protected (read only) and cannot be
modified or deleted.

8.4.1 Available Rights to Create Policies


A policy is a set of rights. The available rights are combined in groups and
subgroups. For new policies, only rights of the Administrative WebServices group
should be used. The following table provides a short description of available rights.
Table 8-1: Administrative WebServices

  Group                    Description
  Archive Administration   Summary of rights to control creation, configuration and deletion of logical archives.
  Archive Users            Summary of rights to control creation, configuration and deletion of users and groups and their associated policies.
  Notifications            Summary of rights to control creation, configuration and deletion of notifications and events.
  Policies                 Summary of rights to control creation, configuration and deletion of policies.

Important
Rights from the following policy groups should no longer be used. These rights are still available to ensure compatibility with policies created for former versions of Archive and Storage Services (Archive Server):

  Accounting
  Administration Server
  DPinfo
  Scanning Client
  Spawner

8.4.2 Checking Policies


Proceed as follows:

1. Select Policies in the System object in the console tree to check, create, modify and delete policies. All available policies are listed in the top area of the result pane. In the bottom area, the assigned rights are shown as a tree view.
2. To check a policy, select it in the top area of the result pane. The assigned rights are listed in the bottom area.
3. To create and modify a policy, see Creating and Modifying Policies on page 137.

8.4.3 Creating and Modifying Policies


Proceed as follows:

1. Select Policies in the System object in the console tree. All available policies are listed in the top area of the result pane.
2. Click New Policy in the action pane. The window to create a new policy opens.
3. Enter a name and description for the new policy.
   Name
     Name of the policy. Spaces are not allowed. The name cannot be modified after creation.
   Description
     Short description of the role the user can assume by means of this policy.
4. The Available Rights tree view shows all rights that are currently not associated with the policy. Select a single right or a group of rights that should be assigned to the policy and click Add >>.
5. To remove a right or a group of rights, select it in the Assigned Rights tree view and click << Remove.

Modifying a policy
  To modify a self-defined policy, select the policy in the top area of the result pane and click Edit Policy in the action pane. Proceed in the same way as when creating a new policy. The name of the policy cannot be changed.

Deleting a policy
  To delete a self-defined policy, select the policy in the top area of the result pane and click Delete in the action pane. The rights themselves are not lost, only the set of them that makes up the policy. Pre-defined policies cannot be deleted.
See also:

Checking, Creating and Modifying Users on page 137

Checking, Creating and Modifying User Groups on page 139

Concept on page 134

8.5 Checking, Creating and Modifying Users


8.5.1 Checking Users
Proceed as follows:

1. Select Users and Groups in the System object in the console tree to check, create, modify and delete users.
2. Select the Users tab in the top area of the result pane to list all users.
3. To check a user, select the entry in the top area of the result pane. The groups which the user is assigned to are listed in the bottom area.
4. To create and modify a user, see Creating and Modifying Users on page 138.

8.5.2 Creating and Modifying Users


A user can be a member of several groups. The user has all rights that are defined in the policies for these groups.

Proceed as follows:

1. Select Users and Groups in the System object in the console tree.
2. Select the Users tab in the result pane. All available users are listed in the top area of the result pane.
3. Click New User in the action pane. The window to create a new user opens.
4. Enter the user name and the password and check Global if the user should be assigned to Global Users.
   Username
     User name for Archive and Storage Services. The name may be a maximum of 14 characters in length. Spaces are not permitted. This name cannot be changed subsequently.
   Password
     Password for the specified user.
   Confirm password
     Enter exactly the same input as you have already entered under Password.
   Global
     Select this check box to replicate the user to all known servers.
5. Click Next. A window with available user groups opens.
6. Select the groups the user should be assigned to. Click Finish.

Modifying user settings
  To modify a user's settings, select the user and click Properties in the action pane. Proceed in the same way as when creating a new user. The name of the user cannot be changed.

Deleting users
  To delete a user, select the user and click Delete in the action pane.
See also:


Creating and Modifying Policies on page 137

Checking, Creating and Modifying User Groups on page 139

Concept on page 134

8.6 Checking, Creating and Modifying User Groups


8.6.1 Checking User Groups
Proceed as follows:

1. Select Users and Groups in the System object in the console tree to check, create, modify and delete user groups.
2. Select the Groups tab in the top area of the result pane to list all groups.
3. To check a user group, select the entry in the top area of the result pane. Depending on the tab you selected, additional information is listed in the bottom area:
   Members tab
     List of users who are members of the selected group.
   Policies tab
     List of policies which are assigned to the selected group.
4. To create and modify a user group, see Creating and Modifying User Groups on page 139.

8.6.2 Creating and Modifying User Groups


Proceed as follows:

1. Select Users and Groups in the System object in the console tree.
2. Select the Groups tab in the top area of the result pane. All available groups are listed in the top area of the result pane.
3. Click New Group in the action pane. The window to create a new group opens.
4. Enter the name of the group and select Global if the members of the group should be assigned to Global Users.
   Name
     A name that clearly identifies each user group. The name may be a maximum of 14 characters in length. Spaces are not permitted.
   Global
     Select this check box to replicate the users of this group to all known servers.
   Implicit
     Implicit groups are used for the central administration of clients. If a group is configured as implicit, all users are automatically members. If users who have not been explicitly assigned to a user group log on to a client, they are considered to be members of the implicit group and the client configuration corresponding to the implicit group is used. If several implicit groups are defined, the user at the client can select which profile is to be used.
5. Click Finish.

Modifying group settings
  To modify the settings of a group, select it and click Properties in the action pane. Proceed in the same way as when creating a user group.

Deleting a user group
  To delete a user group, select it and click Delete in the action pane. Neither users nor policies are lost, only the assignments are deleted.
See also:

Adding Users and Policies to a User Group on page 140

Creating and Modifying Policies on page 137

Checking, Creating and Modifying Users on page 137

Concept on page 134

8.6.3 Adding Users and Policies to a User Group


Proceed as follows:

1. Select the user group in the top area of the result pane for which users and policies should be added.
2. Select the Members tab in the bottom area. Click Add User in the action pane. A window with available users opens.
3. Select the users which should be added to the group and click OK.
4. Select the Policies tab in the bottom area. Click Add Policy in the action pane. A window with available policies opens.
5. Select the policies which should be added to the group and click OK.

Removing users and policies
  To remove a user or a policy, select it in the bottom area and click Remove in the action pane.

8.7 Checking a User's Rights


You cannot see the rights of an individual user directly because they are assigned indirectly via policies to user groups and not to individual users. Proceed as follows to ascertain a user's rights:

1. Select Users and Groups in the System object of the console tree.
2. Select the Users tab in the top area of the result pane and select the user. Note the groups listed under Members in the bottom area.
3. Select the Groups tab in the top area of the result pane and select Policies in the bottom area of the result pane.
4. Select one of the groups you noted and note also the assigned policies listed in the bottom area.
5. Select Policies in the System object.
6. Select one of the policies you noted. The associated groups of rights and individual rights appear in the bottom area. Make a note of these.
7. Repeat step 6 for all policies that you noted for the user group.
8. Repeat steps 4 to 7 for the other user groups which the user is a member of.


Chapter 9

Connecting to SAP Servers


If you use SAP as the leading application, you configure the connection not only in the SAP system but also in Administration Client. The Open Text Document Pipeline DocuLink and Open Text Document Pipeline SAP (in particular the DocTools R3Insert, R3Formid, R3AidSel and cfbx) require some connection information. Thus, these Document Pipelines can send some data back to the SAP server, for example, the document ID in bar code scenarios. For these scenarios, the Open Text Document Pipeline SAP must be installed. The basis and scenario customizing for SAP is described in OpenText Archiving and Document Access for SAP Solutions - Scenario Guide (ER-CCS). The configuration in the Archive Administration includes:

Creating and Modifying SAP Gateways on page 145
Creating and Modifying SAP Systems on page 143
Assigning a SAP System to a Logical Archive on page 146

9.1 Creating and Modifying SAP Systems


The Document Pipeline connects to the SAP server in some scenarios. You configure which SAP systems will be accessed.
Proceed as follows:

1. Select SAP Servers in the Environment object in the console tree.
2. Select the SAP Systems tab in the result pane.
3. Click New SAP System in the action pane. A window to configure the SAP system opens.
4. Enter the settings for the SAP system.
   SID
     Three-character system ID of the SAP system (SAP_SID) with which the administered server communicates. You cannot modify the name later.
   Server name
     Name of the SAP server on which the logical archives are set up in the SAP system.
   Client
     Three-digit number of the SAP client in which archiving occurs.

Feedback user
Feedback user in the SAP system. The cfbx process sends a notification
message back to this SAP user after a document has been archived using
asynchronous archiving. A separate feedback user (CPIC type) should be set
up in the SAP system for this purpose.
Password
Password for the SAP R/3 feedback user. This is entered, but not displayed,
when the SAP system is configured. The password for the feedback user
must be identical in the SAP system and in Archive Administration.
Instance number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapdpxx service on the gateway server in order to
determine the number of the TCP/IP port (xx = instance number) being
used.
Codepage
Relevant only for languages which require a 16-bit character set for display
purposes or when different character set standards are employed in different
computer environments. A four-digit number specifies the type of character
set which is used by the RFCs. The default is 1100 for the 8-bit character set.
To determine the codepage of the SAP system, log into the SAPGUI and
select System > Status. If the SAP system uses another codepage, two
conversion files must be generated in SAP transaction sm59, one from the
SAP codepage to 1100 and the other in the opposite direction. Copy these
files to the Archive and Storage Services directory <OT config
AS>/r3config and declare the codepage number here in Archive
Administration. The cfbx DocTool reads these files.
Language
Language of the SAP system; default is English. If the SAP system is
installed exclusively in another language, enter the SAP language code here.
Description
Here you can enter an optional description (restricted to 255 characters).
Test Connection
Click this button to test the connection to the SAP system. A window opens
and shows the test result.
5. Click Finish.

Modifying SAP systems
  To modify a SAP system, select it in the SAP Systems tab and click Properties in the action pane. Proceed in the same way as when creating a SAP system.

Deleting SAP systems
  To delete a SAP system, select it in the SAP Systems tab and click Delete in the action pane.

Testing a SAP connection
  To test a SAP connection, select it in the SAP Systems tab and click Test SAP Connection in the action pane. A window opens and shows the test result.
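For illustration only, a SAP system entry could use values like the following; all names and numbers are hypothetical and must be replaced with the values of your own SAP landscape:

   SID:              PRD
   Server name:      sapprd01
   Client:           100
   Feedback user:    ARCHIV_FB   (a separate CPIC-type user created in the SAP system)
   Instance number:  00
   Codepage:         1100
   Language:         EN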

9.2 Creating and Modifying SAP Gateways


SAP gateways link the SAP systems to the outside world. At least one gateway must
be defined for each SAP system. One gateway can also be used for multiple SAP
systems.
Access to a specific SAP gateway depends on the subnet in which a Document
Pipeline or Enterprise Scan workstation is located. The Internet address is evaluated
for identification purposes.
Proceed as follows:

1. Select SAP Servers in the Environment object in the console tree.
2. Select the SAP Gateways tab in the result pane.
3. Click New SAP Gateway in the action pane. A window to configure the SAP gateway opens.
4. Enter the settings for the SAP gateway.
   Subnet address
     Specifies the address for the subnet in which an archive server or Enterprise Scan is located. At least the first part of the address (e.g. NNN.0.0.0 in case of IPv4) must be specified. A gateway must be established for each subnet.
     IPv6: If you use IPv6, do not enclose the IPv6 address with square brackets.
   Subnet mask / Length
     Specifies the sections of the IP address that are evaluated. You can restrict the evaluation to individual bits of the subnet address.
     IPv4: Enter a subnet mask, for example 255.255.255.0.
     IPv6: Enter the address length, i.e. the number of relevant bits, for example 64.
   SAP SID
     Three-character system ID of the SAP system (SAP_SID) for which the gateway is configured. If this is not specified, then the gateway is used for all SAP systems for which no gateway entry has been made. If subnets overlap, the smaller network takes priority over the larger one. If the networks are of the same size, the gateway to which a concrete SAP system is assigned has priority over the default gateway that is valid for all the SAP systems.
   Gateway address
     Name of the server on which the SAP gateway runs. This is usually the SAP server.

   Gateway number
     Two-digit instance number for the SAP system. The value 00 is usually used here. It is required for the sapgwxx service on the gateway server in order to determine the number of the TCP/IP port (xx = instance number; e.g. instance number = 00, sapgw00, port 3300).
5. Click Finish.

Modifying SAP gateways
  To modify a SAP gateway, select it in the SAP Gateways tab and click Properties in the action pane. Proceed in the same way as when creating a SAP gateway.

Deleting SAP gateways
  To delete a SAP gateway, select it in the SAP Gateways tab and click Delete in the action pane.
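As an illustration with purely hypothetical addresses and names, a gateway entry for an IPv4 subnet could look like this:

   Subnet address:   10.1.0.0
   Subnet mask:      255.255.0.0
   SAP SID:          PRD
   Gateway address:  sapprd01
   Gateway number:   00

With such an entry, Document Pipelines and scan stations in the 10.1.x.x subnet would reach the SAP system PRD via the gateway service sapgw00 (TCP port 3300) on sapprd01.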

9.3 Assigning a SAP System to a Logical Archive


For archives used with SAP as leading application, specific information is required
for most archive scenarios. Enterprise Scan reads this information from the
Administration Server and stores it in the COMMANDS file. The cfbx DocTool needs
these settings to connect to the SAP system.
Requirements:

The gateway to the SAP system is created and configured, see Creating and
Modifying SAP Gateways on page 145.

The SAP system is created and configured, see Creating and Modifying SAP
Systems on page 143.

Proceed as follows:

1. Select SAP Servers in the Environment object in the console tree.
2. Select the Archive Assignments tab in the result pane. All archives are listed in the top area of the result pane.
3. Select the archive to which a SAP system should be assigned. Keep in mind that a SAP system can be assigned only to original archives.
4. Click Add SAP system in the action pane. A window to configure the SAP archive assignment opens.
5. Enter the settings for SAP archive assignment:
   SID
     Three-character system ID of the SAP system with which the logical archive communicates (SAP_SID).
   Archive link version
     The ArchiveLink version 4.5 for SAP R/3 version 4.5 and higher is currently used.

   Protocol
     Communication protocol between the SAP application and Archive and Storage Services. Fully configured protocols, which can be transported into the SAP system, are supplied with the SAP products of Open Text.
   Use as default SID
     Selects the SAP system to which the return message with the barcode and document ID is sent in the Late Storing with Barcode scenario. This setting is only relevant if the archive is configured on multiple SAP applications, e.g. on a test and a production system.
6. Click Finish.

Modifying archive assignments
  To modify an archive assignment, select it in the bottom area of the result pane and click Properties in the action pane. Proceed in the same way as when assigning a SAP system.

Deleting archive assignments
  To delete an archive assignment, select it in the bottom area of the result pane and click Delete in the action pane.


Chapter 10

Configuring Scan Stations


There are archiving scenarios in which scan stations submit scanned content to logical archives. For these scenarios, a scan station needs information about the archiving operation. It needs to know which logical archives the documents are sent to, and how the documents are to be indexed when archived. The archive mode contains this information.

Archive modes are assigned to every scan station. When a scan station starts, it queries the archive modes that are defined for it at the specified archive server. The employee at the scan station assigns the appropriate archive mode to the scanned documents in the course of archiving.

The following details must be configured correctly to archive from scan stations:

Archive in which the documents are stored, scenario and conditions, workflow: see Adding and Modifying Archive Modes on page 151.
Scan station to which an archive mode applies: see Adding a New Scan Host and Assigning Archive Modes on page 154.
If SAP is the leading application: the SAP system to which the barcode and the document ID are sent, the communication protocol and version of the ArchiveLink interface: see Assigning a SAP System to a Logical Archive on page 146.

For more information on archiving scenarios, see Scenarios and Archive Modes on page 149.

10.1 Scenarios and Archive Modes


Below you will find some example settings for various archiving scenarios, sorted according to the leading applications.
Suite for SAP Solutions
You need the Document Pipelines for SAP (R3SC) for all archiving scenarios. For
scenarios in which archiving is started from the SAP GUI, you do not need an
archive mode.
Late storing with barcodes
  See also section 8.2.4 "Archiving with bar code technology" in OpenText Archiving and Document Access for SAP Solutions - Scenario Guide (ER-CCS).
  Scenario (Opcode): Late_Archiving
  Conditions: BARCODE
  Workflow: n/a
  Extended Conditions: n/a

Specific scenarios
  Early_Archiving: n/a
  Late_R3_Indexing: n/a
  Early_R3_Indexing: n/a
  DirectDS_R3: n/a

Transactional Content Processing

Pre-indexing
  Documents are indexed in Enterprise Scan first. The archiving process archives the document to the Transactional Content Processing Servers.
  Scenario (Opcode): DMS_Indexing
  Conditions: n/a
  Workflow: n/a
  Extended Conditions: n/a

Pre-indexing to Process Inbox of TCP GUI
  Documents are indexed in Enterprise Scan first. The archiving process archives the document to the Transactional Content Processing Servers and starts a process with the document.
  Scenario (Opcode): DMS_Indexing
  Conditions: n/a
  Workflow: <processname>
  Extended Conditions: PS_MODE LEA_9_7_0; PS_ENCODING_BASE64_UTF8N 1

Pre-indexing to Tasks inbox of PDMS UI
  Documents are indexed in Enterprise Scan first. The archiving process archives the document to the Transactional Content Processing Servers and creates a task in the TCP Application Server PDMS UI inbox for a particular user, or for any user in a particular group.
  Scenario (Opcode): DMS_Indexing
  Conditions: n/a
  Workflow: n/a
  Extended Conditions: BIZ_ENCODING_BASE64_UTF8N; BIZ_APPLICATION<name>;
    for a user: key = BIZ_DOC_RT_USER, value = <domain>\<name>;
    for a user group: key = BIZ_DOC_RT_GROUP, value = <domain>\<name>

Late indexing to Process Inbox of TCP GUI
  Archives the document to the Transactional Content Processing Servers and starts a process with the document in the TCP GUI inbox. Documents are indexed in TCP.
  Scenario (Opcode): DMS_Indexing
  Conditions: n/a
  Workflow: <processname>
  Extended Conditions: PS_MODE LEA_9_7_0; PS_ENCODING_BASE64_UTF8N 1

Late indexing to Indexing inbox of PDMS UI
  Archives the document to the Transactional Content Processing Servers and creates an indexing item in the TCP Application Server PDMS UI Indexing inbox. Documents are indexed in TCP.
  Scenario (Opcode): DMS_Indexing
  Conditions: PILE_INDEX
  Workflow: n/a
  Extended Conditions: BIZ_ENCODING_BASE64_UTF8N; BIZ_REG_INDEXING (leave the values empty); BIZ_APPLICATION<name>

Late indexing to Tasks inbox of PDMS UI
  Archives the document to the Transactional Content Processing Servers and creates a task in the TCP Application Server PDMS UI inbox for a particular user, or for any user in a particular group. Documents are indexed in TCP.
  Scenario (Opcode): DMS_Indexing
  Conditions: PILE_INDEX
  Workflow: n/a
  Extended Conditions: BIZ_ENCODING_BASE64_UTF8N; BIZ_APPLICATION<name>;
    for a user: key = BIZ_DOC_RT_USER, value = <domain>\<name>;
    for a user group: key = BIZ_DOC_RT_GROUP, value = <domain>\<group>

Late indexing for plug-in event
  Archives the document to the Transactional Content Processing Servers and calls a plug-in event in the TCP Application Server. Documents are indexed in TCP.
  Scenario (Opcode): DMS_Indexing
  Conditions: PILE_INDEX
  Workflow: n/a
  Extended Conditions: BIZ_ENCODING_BASE64_UTF8N; BIZ_APPLICATION<name>; BIZ_PLG_EVENT=<plugin>:<event>

10.2 Adding and Modifying Archive Modes


With the archive mode, you define the archiving scenario and the archive in which
the documents are to be stored.
Proceed as follows:

1. Select Scan Stations in the Environment object in the console tree.
2. Select the Archive Modes tab in the result pane.
3. Click New Archive Mode in the action pane.
4. Enter the settings for the archive mode.
   Details: Archive Mode Settings on page 152

5. Click Finish.

Thus you can create several archive modes, e.g. if you want to assign document types to different archives.

Modifying an archive mode
  To modify the settings of an archive mode, select it in the Archive Modes tab in the result pane and click Properties in the action pane. Proceed in the same way as when adding an archive mode. Details: Archive Mode Settings on page 152

Deleting an archive mode
  To delete an archive mode, select it in the Archive Modes tab in the result pane. Click Delete in the action pane. If the archive mode is assigned to a scan host, it must be removed first, see Removing Assigned Archive Modes on page 156.
See also:

Archive Mode Settings on page 152

Scenarios and Archive Modes on page 149

Adding a New Scan Host and Assigning Archive Modes on page 154

10.3 Archive Mode Settings


With archive mode settings, you define where the documents are stored, how they
are processed, and further actions that are triggered in the leading application. You
can find a list of archiving scenarios and their archive mode settings in Scenarios
and Archive Modes on page 149.
General tab
Archive mode name
Name of the archive mode. Do not use spaces. You cannot change the name of
the archive mode after creation.
Scenario
Name of the archiving scenario (also known by the technical name Opcode).
Scenarios apply to leading applications.
Archive name
  Name of the logical archive to which the document is sent.
SAP system (SID)
Three-character ID of the SAP system (SAP_SID) with which the administered
server communicates.
Pipeline Host tab
Pipeline Info
Use local pipeline: The document pipeline installed on the client is used.
Use remote pipeline: The Document Pipelines can be installed on a separate
computer. The pipeline is accessed via an HTTP interface. For this configuration
the protocol, the pipeline host and the port must be set.

Protocol
Protocol that is used for the communication with the pipeline host. For security
reasons, HTTPS is recommended.
Pipeline host
The computer where the Document Pipeline is installed.
Port
Port that is used for the communication with the pipeline host. Use 8080 for
HTTP or 8090 for HTTPS.
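For example, a remote pipeline configuration could use values like these; the host name is hypothetical:

   Pipeline Info:   Use remote pipeline
   Protocol:        HTTPS
   Pipeline host:   dphost01
   Port:            8090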
Advanced tab
Workflow
Name of the workflow that will be started in BPM Server when the document is
archived. For details concerning the creation of workflows, see the BPM Server
documentation.
Conditions
These archiving conditions are available:
R3EARLY
Early archiving with SAP.
BARCODE
  If this option is activated, the document can only be archived if a barcode was recognized. For Late Archiving, this is mandatory. For Early Archiving, the behavior depends on your business process:
  If a barcode or index is required on every document, select the Barcode condition. This makes sure that an index value is present before archiving. The barcode is transferred to the leading application.
  If no barcode is needed, or it is not present on all documents, do not select the Barcode condition. In this case, no barcode is transferred to the leading application.

PILE_INDEX
Sorts the archived documents into piles for indexing according to certain
criteria. For example, the pile can be assigned to a document group, and the
access to a document pile in a leading application like Transactional Content
Processing can be restricted to a certain user group.
INDEXING
Indexing is done manually.
ENDORSER
Special setting for certain scanners. Only documents with a stamp are stored.
Extended Conditions
  This table is used to hand over archiving conditions to the COMMANDS file, for example, to provide the user name so that the information is sent to the correct task inbox. The extended conditions are key-value pairs. Click Add to enter a new condition. To modify an extended condition, select it and click Edit. Click Remove to delete the selected condition.
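As an illustration, an extended condition that routes a task to a specific user could be entered as the following key-value pair; the domain and user name are hypothetical:

   Key:    BIZ_DOC_RT_USER
   Value:  CORP\jsmith

The available keys correspond to the extended conditions listed in Scenarios and Archive Modes on page 149.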
See also:

Adding and Modifying Archive Modes on page 151

Adding a New Scan Host and Assigning Archive Modes on page 154

10.4 Adding Additional Scan Hosts


It is possible to assign more than one scan host to an archive mode.

Proceed as follows:

1. Select Scan Stations in the Environment object in the console tree.
2. Select the Archive Modes tab in the result pane.
3. Select the archive mode to assign scan hosts.
4. Click Add Scan Host in the action pane. A window with available scan hosts opens.
5. Select the designated scan hosts and click OK.

See also:

Adding and Modifying Archive Modes on page 151

Adding a New Scan Host and Assigning Archive Modes on page 154

10.5 Adding a New Scan Host and Assigning Archive Modes
The assignment of archive modes to scan hosts specifies which archive modes can be used by a scan station. Multiple assignments are possible, i.e. you can operate with several scanners and store documents in the same or different archives using different scenarios. A default mode can also be set for each scan host. Enterprise Scan reads the archive modes from the Administration Server when it starts, so you have to restart Enterprise Scan after assigning archive modes.
Proceed as follows:

1. Select Scan Stations in the Environment object in the console tree.
2. Select the Scan Hosts tab in the result pane.
3. Click New Scan Host in the action pane.
4. Enter the settings for the scan host:
   Scan host name
     Name of the scan station that is used to reference it in the network. Spaces are not permitted. You can check the validity of the name by sending a ping to the scan station. The name must be entered in exactly the same way as it has been defined at operating system level.
   Site
     Describes the location of the scan host.
   Description
     Brief, self-explanatory description of the scan host.
   Default archive mode
     Archive mode assigned as default to the corresponding scan station.
5. Click Finish.
6. Add additional archive modes if needed (see Adding Additional Archive Modes on page 155).

Deleting an archive mode
  To delete an archive mode, select it in the Archive Mode tab in the result pane. Click Delete in the action pane. If the archive mode is assigned to a scan host, it must be removed first, see Adding a New Scan Host and Assigning Archive Modes on page 154.
See also:

Adding Additional Archive Modes on page 155

Adding and Modifying Archive Modes on page 151

Archive Mode Settings on page 152

10.6 Adding Additional Archive Modes


It is possible to assign more than one archive mode to a scan host to support different scenarios.

Proceed as follows:

1. Select Scan Stations in the Environment object in the console tree.
2. Select the Scan Hosts tab in the result pane.
3. Select the scan host to assign archive modes.
4. Click Add Archive Mode in the action pane. A window with available archive modes opens.
5. Select the archive modes and click OK.

See also:

Adding and Modifying Archive Modes on page 151

Archive Mode Settings on page 152

10.7 Changing the Default Archive Mode


You can assign more than one archive mode to a scan host. The default archive mode is the preferred mode for scan clients that use this scan host. The first assigned archive mode is the default mode, but this can be changed if necessary.
Proceed as follows:

1. Select Scan Stations in the Environment object in the console tree.
2. Select the Scan Hosts tab in the result pane.
3. Select the scan host to change the default archive mode.
4. Click Properties in the action pane.
5. Choose the new default archive mode and click OK.

10.8 Removing Assigned Archive Modes


Proceed as follows:

1. Select Scan Stations in the Environment object in the console tree.
2. Select the Scan Hosts tab in the result pane.
3. Select the scan host in the top area of the result pane.
4. Select the archive mode which you want to remove in the bottom area of the result pane.
5. Click Remove in the action pane.
6. Click OK to confirm.


Chapter 11

Adding and Modifying Known Servers


Known servers are used to realize remote standby scenarios to increase data
security. If a server is added as a known server to the environment, all archives of
this server can be checked in External Archives in the Archives object of the console
tree. If a logical archive of a known server is replicated to the original server, this
archive can be checked in Replicated Archives in the Archives object of the console
tree. See Configuring Remote Standby Scenarios on page 161.

11.1 Adding Known Servers


Proceed as follows:

1. Select Known Servers in the Environment object in the console tree.
2. Click New Known Server in the action pane.
3. Enter the known server parameters:
   Remote server name
     Name of the remote server to be added as known server.
     Note: Instead of the host name, you can also use IPv4 addresses. However, IPv6 addresses are not supported.
   Remote server is allowed to replicate from this host
     Check this if the known server should be used to replicate archives, e.g. for remote standby scenarios.
   Port, Secure port, Context path
     Specifies the port, the secure port and the context path that enable the archive server to create URLs of a designated Remote Standby Server.
     Structure of the URLs:
       http://<host>:<port><context>?...
       https://<host>:<secure port><context>?...
     Example:
       <host> = host03100
       <port> = 8080
       <secure port> = 8090
       <context> = /archive
       http://host03100:8080/archive?...
       https://host03100:8090/archive?...

4. Click Finish. The new known server is added to the Environment.

11.2 Checking and Modifying Known Servers


Proceed as follows:

1. Select Known Servers in the Environment object in the console tree.
2. Select the server you want to check.
3. Click Properties in the action pane.
4. To modify the settings of a known server, proceed in the same way as when adding a known server. In addition to the New Known Server window, you get more information on the known server:
   Version
     The version number of the known server.
   Startup time
     The date and time when the known server was started last.
   Build Information
     Detailed information on the software build and revision of the known server.
   Description
     Shows the short description of the known server, if available.
5. Click OK.

Modifying known server settings
  To modify the settings of a known server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when adding a known server.

11.3 Synchronizing Servers


The Synchronize Servers function transfers settings from known servers to the local
server. This is useful if settings on a known server are changed (e.g. replicated pools
or buffers).
Thus you can update:

Settings of replicated archives

Settings of replicated buffers

Encryption certificates

Timestamp certificates

System keys

Proceed as follows:

1. Select Known Servers in the Environment object in the console tree.
2. Click Synchronize Servers in the action pane.
3. Click OK to confirm. The synchronization is started.


Chapter 12

Configuring Remote Standby Scenarios


In a remote standby scenario, a Remote Standby Server is configured as a duplicate of the original archive server. The Remote Standby Server and the archive server are connected via LAN or WAN. To configure a remote standby scenario, the Remote Standby Server must be added as a known server to the original archive server first. See Adding and Modifying Known Servers on page 157. Thus, the Remote Standby Server can transmit data from the original archive server.

Figure 12-1: Remote Standby scenario


In a remote standby scenario, all new and modified documents are asynchronously
transmitted from the original archive to the replicated archive of a known server.
This is done by the Synchronize_Replicates job on the Remote Standby Server.
The job physically copies the data on the storage media between these two servers.
Therefore, the Remote Standby Server provides more data security than the local
backup of media.
The Remote Standby Server has the following advantages:

Increased availability of the archive, since the Remote Standby Server is accessed when the original server is not available.
Backup media are located at a greater distance from the original archive server, providing security in case of fire, earthquake, and other catastrophes.

Nevertheless, there are also disadvantages:

Only read access to the documents is possible; modifications to and archiving of documents are not possible directly.
A document may have been stored or modified on the original server, but not yet transmitted to the Remote Standby Server.
No minimization of downtime with regard to archiving new documents, since only read access to the Remote Standby Server is possible.

Note: The usage of a Remote Standby Server depends on your backup strategy. Contact Open Text Global Services for the development of a backup strategy that fits your needs.

12.1 Configuring Original Archive Server and Remote Standby Server

You have to perform several configuration steps on the original archive server and on the Remote Standby Server to replicate data.

12.1.1 Configuring the Original Archive Server


The original server must be configured so that the Remote Standby Server is allowed to replicate it.

Proceed as follows:

1. Log on to the original archive server.
2. Add the Remote Standby Server as known server (see Adding Known Servers on page 157). Ensure that Remote server is allowed to replicate from this host is set.
3. Click OK. The Remote Standby Server is listed in Known Servers in the Environment object of the console tree.

12.1.2 Configuring the Remote Standby Server


Once the known server is added, the Remote Standby Server must be configured. You have to configure the logical archives and the buffers that are to be replicated. To replicate the data from the original server, matching devices and volumes must be configured on the Remote Standby Server first.

Important
These volumes have to be named the same way as the original volumes. The replicate volumes need at least the same amount of disk space.


See also:

Configuring Disk Volumes on page 45

Installing and Configuring Storage Devices on page 56

Configuring the replicated archives


Proceed as follows:

1. Log on to the Remote Standby Server.
2. Add the original server as known server (see Adding Known Servers on page 157). Remote server is allowed to replicate from this host must not be set, unless the two servers replicate each other's archives crosswise.
3. Click OK.
4. Click Synchronize Servers in the action pane to synchronize settings between known servers.
5. Select External Archives in the Archives object in the console tree. All logical archives of the known servers are listed.
6. Select the archive which should be replicated in the result pane and click Replicate in the action pane.
   The archive is moved to Replicated Archives. A message is shown that the pools of the replicated archive must be configured (see Backups on a Remote Standby Server on page 165).
7. Select the replicated archive and select the Server Priorities tab in the result pane.
8. Click Change Server Priorities in the action pane. A wizard to assign the sequence of server priorities opens; for details, see Configuring the Server Priorities on page 71.
9. Assign the server priorities. The order should be: first the Remote Standby Server, then the original server(s).

Configuring pools of replicated archives


Proceed as follows:

1. Select the replicated archive and select the Pools tab in the result pane.
2. Select the first pool in the top area. In the bottom area, the assigned volumes are listed. Volumes that are not configured are labeled with the missing type.
3. Depending on the type of the volume, do one of the following:
   Disk volumes
   a. Select the first missing volume and click Attach or Create Missing Volume in the action pane.
   b. Enter Mount Path and Device Type and click OK. Repeat this for every missing volume.
   ISO volumes
   ISO volumes will be replicated by the asynchronously running Synchronize_Replicates job (see also ISO Volumes on page 165).
   a. Select Replicated Archives in the console tree and select the designated archive.
   b. Select a replicated pool in the console tree and click Properties in the action pane.
   c. Enter settings (see Write At-once Pool (ISO) Settings on page 76) for Number of Backups to n (n>0, for volumes on HDWO: n=1) and select the Backup Jukebox.
   d. Configure the Synchronize_Replicates job according to your needs (see Setting the Start Mode and Scheduling of Jobs on page 87).
   IXW volumes
   IXW volumes will be replicated by the asynchronously running Synchronize_Replicates job (see also IXW Volumes on page 166).
   a. Select Replicated Archives in the console tree and select the designated archive.
   b. Select a replicated pool in the console tree and click Properties in the action pane.
   c. Enter settings (see Write Incremental (IXW) Pool Settings on page 78) for Number of Backups to n (n>0) and select the Backup Jukebox.
   d. Configure the Synchronize_Replicates job according to your needs (see Setting the Start Mode and Scheduling of Jobs on page 87).
4. Schedule the replication job Synchronize_Replicates (see Setting the Start Mode and Scheduling of Jobs on page 87).

Note: On the original archive server, the backup jobs can be disabled if no additional backups should be written.

Configuring replicated disk buffers


Proceed as follows:

1. Select Known Servers in the Environment object in the console tree.
2. Select the known server whose disk buffer needs to be replicated in the top area of the result pane. The assigned disk buffers are listed in the bottom area of the result pane.
3. Select the disk buffer which needs to be replicated and click Replicate in the action pane.
4. Enter the name of the disk buffer and click Next.
   A message is shown that the disk buffer gets replicated and a volume has to be attached to this disk buffer.
5. Select Buffers in the Infrastructure object in the console tree.
6. Select the Replicated Disk Buffers tab in the result pane. The replicated buffers are listed in the top area.
7. Select the replicated buffer in the top area. In the bottom area, the assigned volumes are listed. Volumes which are not configured are labeled with the missing type.
8. Select the first missing volume and click Attach or Create Missing Volume in the action pane.
9. Enter Mount Path and click OK. Repeat this for every missing volume.

12.2 Backups on a Remote Standby Server


The backup procedure depends on the media type used.
Note: For backup and recovery of GS, ISO (HDWO) and FS volumes, contact
Open Text Customer Support.

12.2.1 ISO Volumes


The backup for ISO volumes on a Remote Standby Server for optical media as well as for ISO volumes on storage systems is done asynchronously by the Synchronize_Replicates job.

Proceed as follows:

1. Log on to the Remote Standby Server.
2. Select Replicated Archives in the console tree and select the designated archive.
3. Select a replicated pool in the console tree and click Properties in the action pane.
4. Enter settings (see Write At-once Pool (ISO) Settings on page 76) for Number of Backups to n (n>0, for volumes on HDWO: n=1) and select the Backup Jukebox.
5. Configure the Synchronize_Replicates job according to your needs (see Setting the Start Mode and Scheduling of Jobs on page 87).

The Synchronize_Replicates job now backs up the data of the original ISO pool according to the scheduling.

Note: If problems occur, have a look at the protocol of the Synchronize_Replicates job (see Checking the Execution of Jobs on page 88).


12.2.2 IXW Volumes


The backup for IXW volumes on a Remote Standby Server is done asynchronously by the Synchronize_Replicates job.

Proceed as follows:

1. Log on to the Remote Standby Server.
2. Select Replicated Archives in the console tree and select the designated archive.
3. Select a replicated pool in the console tree and click Properties in the action pane.
4. Enter settings (see Write Incremental (IXW) Pool Settings on page 78) for Number of Backups to n (n>0) and select the Backup Jukebox.
5. Configure the Synchronize_Replicates job according to your needs (see Setting the Start Mode and Scheduling of Jobs on page 87).

According to the scheduling, the Synchronize_Replicates job performs a backup of the new data written to the original medium since the last backup to one backup medium.

Note: If problems occur, have a look at the protocol of the Synchronize_Replicates job (see Checking the Execution of Jobs on page 88).

12.3 Restoring of IXW or ISO Volumes


12.3.1 Restoring an Original IXW or ISO Volume
If the original IXW or ISO medium has to be replaced by a backup medium from the Remote Standby Server (e.g. defective original), the following main steps have to be done:

1. Write lock the original volume to avoid write access, see To write lock the original volume: on page 167.
2. Update the replicated volume, see To update the replicated volume: on page 167.
3. Export and remove the replicated volume, see To export and remove the replicated volume: on page 167.
4. In case of IXW: insert a new volume for replication, see To export and remove the replicated volume: on page 167.
5. Remove the original volume and insert the replicate volume, see To remove the defective original volume and insert the replicate volume: on page 168.
6. Update the new replicated volume, see To update the new replicated volume: on page 169.

Note: For double-sided media, you have to execute the following steps for both sides!

To write lock the original volume:

1. Log on to the original archive server.
2. Select the original archive in the console tree and the designated pool in the result pane.
3. Select the volume to be restored in the bottom area of the result pane and click Properties in the action pane.
4. Select Write locked to avoid write access. Perform this step also for the second side of a double-sided medium.

To update the replicated volume:

1. Log on to the Remote Standby Server.
2. Select Jobs in the System object in the console tree.
3. Select the Synchronize_Replicates job in the result pane and click Start in the action pane.
   This starts the job, and the Remote Standby Server requests the data that has not been backed up from the original server.
   Important: If this job is executed during office hours, make sure that enough bandwidth is available between the original server and the Remote Standby Server for the replicated data.
4. Check whether the job ran successfully (see Checking the Execution of Jobs on page 88). If it was not possible to back up all data, break off here and contact Open Text Customer Support.

To export and remove the replicated volume:

1. Ensure that you are logged on to the Remote Standby Server.
2. Select the replicated archive in the console tree and the designated pool in the result pane.
3. Determine the name of the volume (<ixwName>) to be removed in the bottom area of the result pane.
4. Open a command line and determine the ID of the IXW (ISO) medium (<WORM_ID>):
   cdadm survey v +sodi o=<ixwName>
   Note: vid (option +i) is required later.
5. Select the jukebox in Devices in the Infrastructure object in the console tree.
6. Select the designated volume and click Eject Volume in the action pane.
7. Remove the volume from the jukebox.
8. Export also the IXW (ISO) volume(s) from the STORM configuration:
   a. In the command line, change to directory <OT install AS>\bin
   b. Determine the ID of the IXW (ISO) medium:
      cdadm survey -n +uoi
   c. Delete the entries in the file system information:
      cdadm delete vid=<WORM_ID>

In case of IXW: insert and initialize a new volume for replication


Proceed as follows:
1.

Insert the new media in the jukebox of the Remote Standby Server.

2.

Select the jukebox in Devices in the Infrastructure object in the console tree and
click Insert Volume in the action pane.

3.

Select the new volume (status blank) and click Initialize Backup in the action
pane. A window with original volumes opens.

4.

Select the original volume and click OK.

To remove the defective original volume and insert the replicate volume:

1. Log on to the original archive server.
2. Select the jukebox in Devices in the Infrastructure object in the console tree.
3. Select the defective volume in the bottom area of the result pane and click Eject Volume in the action pane.
4. Remove the medium from the jukebox and label it as defective.
5. Insert the replicate IXW (ISO) medium and restore it as original:

   a. Insert the replicate IXW (ISO) medium in the jukebox of the original archive server.
   b. Select the jukebox in Devices in the Infrastructure object in the console tree and click Insert Volume in the action pane.
   c. Select the medium (status bak) and select Restore in the action pane.
      This makes the backup volume available as the original volume.

6. Select the designated archive in the console tree and the designated pool in the result pane.


7. Select the backup volume in the bottom area of the result pane and select Clear Backup Status in the action pane.

To update the new replicated volume:

1. Connect to the Remote Standby Server.
2. Select Jobs in the System object in the console tree.
3. Select the Synchronize_Replicates job in the result pane and click Start in the action pane.
   This starts the job, and the Remote Standby Server requests the data that has not been backed up from the original server.

   Important
   If this job is executed during office hours, make sure there is enough bandwidth available between the original server and the Remote Standby Server for the replicated data.

4. Check whether the job ran successfully (see Checking the Execution of Jobs on page 88). If it was not possible to back up the data, stop here and contact Open Text Customer Support.

12.3.2 Restoring a Replicate of an IXW or ISO Volume

If a replicate IXW or ISO medium is defective, the Synchronize job for the defective volume cannot run successfully. The replicate is restored following the same principle as the original volume. The only difference is that it is not necessary to insert an IXW (ISO) medium in another jukebox and declare it as the original.

1. Export and remove the replicated volume, see To export and remove the replicated volume: on page 169.
2. In case of IXW: insert a new volume for replication, see In case of IXW: insert and initialize a new volume for replication on page 170.
3. Update the new replicated volume, see To update the new replicated volume: on page 170.

Note: For double-sided media, you have to execute the following steps for both sides!

To export and remove the replicated volume:

1. Ensure that you are logged on to the Remote Standby Server.
2. Select the replicated archive in the console tree and the designated pool in the result pane.


3. Determine the name of the volume (<ixwName>) to be removed in the bottom area of the result pane.
4. Open a command line and determine the ID of the IXW (ISO) medium (<WORM_ID>):
   cdadm survey -v +sodi o=<ixwName>

   Note: The vid (option +i) is required later.

5. Select the jukebox in Devices in the Infrastructure object in the console tree.
6. Select the designated volume and click Eject Volume in the action pane.
7. Remove the volume from the jukebox.
8. Also export the IXW (ISO) volume(s) from the STORM configuration:

   a. In the command line, change to the directory <OT install>\bin
   b. Determine the ID of the IXW (ISO) medium:
      cdadm survey -n +uoi
   c. Delete the entries in the file system information:
      cdadm delete vid=<WORM_ID>

In case of IXW: insert and initialize a new volume for replication

Proceed as follows:

1. Insert the new medium in the jukebox of the Remote Standby Server.
2. Select the jukebox in Devices in the Infrastructure object in the console tree and click Insert Volume in the action pane.
3. Select the new volume (status blank) and click Initialize Backup in the action pane. A window with original volumes opens.
4. Select the original volume and click OK.

To update the new replicated volume:

1. Connect to the Remote Standby Server.
2. Select Jobs in the System object in the console tree.
3. Select the Synchronize_Replicates job in the result pane and click Start in the action pane.
   This starts the job, and the Remote Standby Server requests the data that has not been backed up from the original server.

   Important
   If this job is executed during office hours, make sure there is enough bandwidth available between the original server and the Remote Standby Server for the replicated data.


4. Check whether the job ran successfully (see Checking the Execution of Jobs on page 88). If it was not possible to back up the data, stop here and contact Open Text Customer Support.


Chapter 13

Configuring Archive Cache Services


Archive Cache Services run on a server called cache server. Archive Cache Services distinguishes between read and write requests. For read requests, the cache server tries to satisfy the request from its local cache instead of transferring the document from an archive server via a slow WAN. If the document is not found in the local cache, it is retrieved from the archive server and cached for later access.
For write requests, Archive Cache Services distinguishes between two operational modes. The mode can be set per logical archive.

write through
In this mode, all documents are transferred to the archive server, but on the fly they are also cached in the local store to speed up later read requests.

write back
In this mode, all documents are cached in the local store of the cache server. Archive and Storage Services is only informed that there are new documents residing on the cache server. The configured Copy_Back job later transfers these documents to the archive server.

Typical scenario for using the write back mode
You have a rather slow network connection between a cache server and an archive server. During the day, a lot of new documents are written to the cache server, which should not additionally burden the slow network connection. Archive and Storage Services is just informed about the new documents. During the night, the WAN is much faster because of reduced network traffic. The documents stored by Archive Cache Services on the cache server can now be safely transferred to the archive server in an efficient way. This is achieved by appropriate scheduling of the Copy_Back job. If this scenario does not fit your environment or your demands, for example because you have full load around the clock or high security demands, it is recommended to use the write through mode (see also Restrictions Using Archive Cache Services on page 174).
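To make the difference between the two modes more tangible, the following minimal Python sketch models a cache server's read and write paths under the assumptions described above. All names (ArchiveServer, CacheServer, copy_back_job) are hypothetical illustrations and do not correspond to the actual implementation of Archive Cache Services.

```python
# Conceptual sketch only: hypothetical classes illustrating the read path and
# the "write through" / "write back" modes described above. Not the product's API.

class ArchiveServer:
    """Stand-in for the remote archive server reached via a (slow) WAN."""
    def __init__(self):
        self.storage = {}

    def store(self, doc_id, content):
        self.storage[doc_id] = content

    def retrieve(self, doc_id):
        return self.storage[doc_id]


class CacheServer:
    def __init__(self, archive, mode="write through"):
        self.archive = archive
        self.local_cache = {}   # local store of the cache server
        self.pending = []       # write back documents not yet transferred
        self.mode = mode

    def read(self, doc_id):
        # Read requests are satisfied from the local cache if possible;
        # otherwise the document is fetched once via the WAN and cached.
        if doc_id not in self.local_cache:
            self.local_cache[doc_id] = self.archive.retrieve(doc_id)
        return self.local_cache[doc_id]

    def write(self, doc_id, content):
        self.local_cache[doc_id] = content
        if self.mode == "write through":
            # Transfer immediately; the local copy only speeds up later reads.
            self.archive.store(doc_id, content)
        else:
            # "write back": only remember the document locally; the scheduled
            # Copy_Back job transfers it to the archive server later.
            self.pending.append(doc_id)

    def copy_back_job(self):
        # Typically scheduled for times with low WAN traffic (e.g. at night).
        for doc_id in self.pending:
            self.archive.store(doc_id, self.local_cache[doc_id])
        self.pending.clear()


# Example: in write back mode, documents reach the archive server only
# after the Copy_Back job has run.
cache = CacheServer(ArchiveServer(), mode="write back")
cache.write("doc1", b"content")
cache.copy_back_job()
```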
The following figure shows a simple layout of a scenario with only one archive server and one cache server. In real environments, one cache server may support more than one archive server, and one archive server can have more than one cache server attached. Clients can also access the archive server directly without using Archive Cache Services. This depends on the configuration, see Configuring Access Via a Cache Server on page 179.


Figure 13-1: Archive Cache Services scenario


As the figure shows, the Administration Server is central to the coordination of the cache scenario. The Administration Client is used to configure the settings of each Archive Cache Services installation and the associated clients and archives.
Important
To ensure accurate retention handling, the clock of the cache server must be
synchronized with the clock of the archive server.

13.1 Restrictions Using Archive Cache Services

Ideally, the cache server is transparent to any client, which means it must behave the same way as the archive server. Especially for write back documents, this paradigm cannot be followed completely. The following table shows all known restrictions.


Table 13-1: Restrictions using Archive Cache Services

Restrictions valid for write back

MTA documents
MTA documents can be stored, but a single document within an MTA document cannot be accessed until it has been transferred to the related archive server.

Attribute Search
Attribute Search in print lists is not available until the content is transferred from a cache server to the related archive server.

VerifySig
The signature verification is processed for write back items, but the signer chain is not verified (no timestamp certificates are available on the related archive server).

Deletion behavior
To avoid problems with deletion, do not use the following archive settings:
Original Archive > Properties > Security > Document Deletion > Deletion is ignored (see also Configuring the Archive Security Settings on page 68)
Archive and Storage Services > Modify Operation Mode > Documents cannot be deleted, no errors are returned (see also Setting the Operation Mode of Archive and Storage Services on page 304)

Retention behavior
As long as write back documents are only stored on the cache server, there is no protection based on the document retention. After the documents are transferred to the related archive server, the retention behavior becomes effective. If there is no client retention, the retention setting of the logical archive is used.
In the special case of event-based retention, the expiration date may be extended by up to 24 hours.

Audit
There are no audit trails for documents as long as they are not transferred to the related archive server.

Update Document
This call is not supported for write back documents.

migrateDocument
Results in an error if just the pool name is changed.
Important: Target archives must be enabled to be cached by this cache server, otherwise update calls will fail.

Versioning of components
As long as components are only stored on the cache server, there is no version control. This means that after a successful modification, the modified component is available, but the version number is not incremented. A subsequent info call still delivers version 1 of the just modified component, until the component has been transferred to the related archive server.

Transfer and commit
Write back documents are transferred to the related archive server in a two-phase process:
Phase 1: the document is requested
Phase 2: the commit to the previously requested document is sent
To avoid any inconsistency, any update client request that comes in between phase 1 and 2 cannot be satisfied, and an HTTP_CONFLICT error is returned to the client.

Application type
The application type is not stored on the cache server and thus not transferred to the related archive server. This means that automatic pool separation depending on the application type does not work.

Maintenance mode
Documents cannot be accessed during maintenance mode.

Disabled archives
Documents cannot be modified if the logical archive is disabled.

Restrictions valid for write through and write back

Component name mapping
In write back mode, an error occurs if you try to create a component matching one of these names:
<n>.pg
im
To support all component names, create a new entry in the configuration:
1. Select Runtime and Core Services > Configuration > Content Service.
2. Click New Property in the action pane.
3. Enter the property name:
   contentservice.ILLEGALCOMPONENTNAMES
4. Select Global as Scope and String as Datatype.
5. Click Next.
6. Leave the Property Value field empty and select Requires Restart?
7. Click Next and then Finish.

Timestamp verification
A mandatory signature check before reading can be configured for each archive. This setting is ignored for cached documents.

Encryption, Compression, Single Instance, Blobs
Content on the cache server is neither encrypted nor compressed, regardless of the archive setting.

Destroy
Documents are not destroyed on the cache server, regardless of the archive setting.


13.2 Configuring a Cache Server in the Environment

13.2.1 Adding a Cache Server to the Environment

The first step in using a cache server is to make it known to an archive server using the Administration Client. To do this, you have to add the cache server to the environment of the logical archive.
Proceed as follows:

1. Select Cache Servers in the Environment object in the console tree.
2. Click New Cache Server in the action pane.
3. Enter the cache server parameters:

   Cache server name
   Unique name of the cache server. This name is used throughout the configuration and administration to refer to the cache server.

   Description
   Brief, self-explanatory description of the cache server.

   Host (client)
   Physical host name used to address the cache server when a client accesses it.
   Note: Instead of the host name, you can also use IPv4 addresses. However, IPv6 addresses are not supported.

   'Copy back' job
   Displays the associated Copy_Back job. This entry cannot be changed.

   Host (archive server)
   Physical host name used by the archive server to communicate with the cache server. This name may be different from the host name used by the clients.
   Note: Instead of the host name, you can also use IPv4 addresses. However, IPv6 addresses are not supported.

   Port, Secure port, Context path
   Specify the port, the secure port and the context path that enable the client to create URLs for the designated cache server (see the sketch at the end of this procedure).
   Structure of the URLs:
   http://<host>:<port><context>?...
   https://<host>:<secure port><context>?...

   Example:
   <host> = csrv03100
   <port> = 8080
   <secure port> = 8090
   <context> = /archive
   http://csrv03100:8080/archive?...
   https://csrv03100:8090/archive?...

4. Click Finish.
5. Configure the Copy_Back job. See also Configuring Jobs and Checking Job Protocol on page 83 and Table 6-3 on page 85.
   Note: Be aware that this job is disabled by default. If you intend to use the "write back" mode, enable this job.
6. Click Finish. The new cache server is added to the Environment.
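For illustration, the following small Python sketch reproduces the URL structure described in step 3. It only concatenates the configured values and is not part of the product.

```python
def cache_server_urls(host: str, port: int, secure_port: int, context: str):
    """Build the plain and the secure URL prefix for a cache server,
    following the structure http(s)://<host>:<port><context>?... shown above."""
    return (f"http://{host}:{port}{context}?",
            f"https://{host}:{secure_port}{context}?")

# Using the example values from step 3:
print(cache_server_urls("csrv03100", 8080, 8090, "/archive"))
# ('http://csrv03100:8080/archive?', 'https://csrv03100:8090/archive?')
```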

Next step: Configuring Archive Access Via a Cache Server on page 180.

13.2.2 Modifying a Cache Server

Proceed as follows:

1. Select Cache Servers in the Environment object in the console tree.
2. Select the cache server you want to modify and click Properties in the action pane.
3. Modify the cache server parameters. See also Adding a Cache Server to the Environment on page 177.
4. Click Finish.

13.2.3 Deleting a Cache Server

A cache server can only be deleted if it is not attached to any logical archive. If it is still attached, you first have to detach the cache server from the logical archives. See Deleting an Assigned Cache Server on page 182.
Proceed as follows:

1. Detach the cache server from all logical archives it is attached to. See Deleting an Assigned Cache Server on page 182.
2. Select Jobs in the System object in the console tree.
3. Select the Copy_Back job which is assigned to the cache server and click Start in the action pane. The cached documents are transferred to the related archive server. A window to watch the transfer status opens.


Caution
This step ensures that pending write back documents are transferred to the related archive server. If this step fails, the cache server must not be deleted before the problem is solved.

4. Select Cache Servers in the Environment object in the console tree.
5. Select the cache server you want to delete.
6. Click Delete in the action pane. A warning message opens.
7. Click Yes to confirm. The cache server is deleted from the environment.

13.3 Configuring Access Via a Cache Server

13.3.1 Subnet Assignment of a Cache Server

For each logical archive, it is possible to configure one or more cache servers to speed up processing if a slow WAN lies between the clients and the archive server. The following steps are necessary to assign a cache server to a group (subnet) of clients per logical archive. This allows assigning different cache servers to different groups of clients. A client not contained in any of these subnets accesses the archive server directly.

Figure 13-2: Example of subnet assignment of cache servers


Important
The subnet configuration will only be evaluated by our intelligent clients.
Note: Archive Cache Services keeps track of any relevant changes to the
archive settings and is synchronized automatically.
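As an illustration of how a client could end up with the best matching cache server when it belongs to several configured subnets (see the following section), here is a minimal Python sketch using the standard ipaddress module. The subnet definitions and server names are hypothetical examples; the actual evaluation is done by the intelligent clients themselves.

```python
import ipaddress

# Hypothetical example configuration: subnet definition -> assigned cache server.
subnet_assignments = {
    "10.1.0.0/255.255.0.0": "cache_a",        # IPv4 subnet with subnet mask
    "10.1.5.0/255.255.255.0": "cache_b",      # more specific IPv4 subnet
    "2001:db8:abcd::/64": "cache_ipv6",       # IPv6 subnet with prefix length
}

def select_cache_server(client_ip):
    """Return the cache server of the best (most specific) matching subnet,
    or None if the client should access the archive server directly."""
    client = ipaddress.ip_address(client_ip)
    best, best_prefix = None, -1
    for definition, server in subnet_assignments.items():
        network = ipaddress.ip_network(definition, strict=False)
        if client.version == network.version and client in network:
            if network.prefixlen > best_prefix:
                best, best_prefix = server, network.prefixlen
    return best

print(select_cache_server("10.1.5.23"))   # cache_b (best matching subnet)
print(select_cache_server("10.2.0.1"))    # None -> direct archive server access
```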

13.3.2 Configuring Archive Access Via a Cache Server

Note: To configure the access to a logical archive via a cache server, the cache server must first be added to the environment. See Adding a Cache Server to the Environment on page 177.
Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.
2. Select the logical archive that the cache server should get access to.
3. Select the Cache Servers tab in the top area of the result pane and click Assign Cache Server.
4. Enter settings:

   Cache server
   The name of the cache server assigned to this archive.

   Caching enabled
   If caching is enabled, one of the following modes can be set.

   Write through
   The cache server will operate in write through mode for this logical archive.

   Write back
   The cache server will operate in write back mode for this logical archive.

   Note: If caching is disabled, the cache server does not cache any new documents for this logical archive. Instead, it acts as a proxy and forwards all requests to Archive and Storage Services. Outstanding write back documents can still be retrieved.

5. Click Next and enter settings for the subnet address and the subnet mask/length.
   The combination of subnet mask and subnet address specifies a subnet. Clients residing in this subnet will use the selected cache server. Typically, the cache server resides in the same subnet. It is possible to add more than one subnet definition to a cache server, see also Subnet Assignment of a Cache Server on page 179.


   Several subnets
   If a client belongs to more than one subnet, it will use the cache server that is assigned to the best matching subnet.

   Subnet address
   Specifies the address of the subnet in which a cache server is located. At least the first part of the address (e.g. NNN.0.0.0 in case of IPv4) must be specified. A gateway must be established for each subnet.
   IPv6: If you use IPv6, do not enclose the IPv6 address in square brackets.

   Subnet mask / Length
   Specifies the sections of the IP address that are evaluated. You can restrict the evaluation to individual bits of the subnet address.
   IPv4: Enter a subnet mask, for example 255.255.255.0.
   IPv6: Enter the address length, i.e. the number of relevant bits, for example 64.

6. Click Finish to complete.

Modifying cache server settings
To modify the settings of a cache server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when configuring a cache server.

13.3.3 Adding and Modifying Subnet Definitions of a Cache Server

It is possible to configure more than one subnet definition for each cache server.
Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.
2. Select the logical archive which the cache server is assigned to.
3. Select the Cache Servers tab in the top area of the result pane and select the cache server. In the bottom area, the subnet definitions are listed.
4. Click New Subnet Definition in the action pane and enter settings for the subnet mask and subnet address. See also Configuring Archive Access Via a Cache Server on page 180.
5. Click Finish.

Modifying subnet definitions of a cache server
To modify the subnet definitions of a cache server, select it in the bottom area and click Properties in the action pane. Proceed in the same way as when adding a subnet definition.


13.3.4 Deleting an Assigned Cache Server

Note: Steps 3 to 6 are only necessary if you use a cache server that operates in write back mode.
Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.
2. Select the logical archive which the cache server is assigned to.
3. Select the Cache Servers tab in the top area of the result pane and select the cache server you want to delete.
4. Click Properties in the action pane.
5. Deselect enabled to stop caching. See also Configuring Archive Access Via a Cache Server on page 180.
6. Select Jobs in the System object in the console tree.
7. Select the Copy_Back job which is assigned to the cache server you want to delete and click Start. The cached documents are transferred to the related archive server. A window to watch the transfer status opens.
8. Select the cache server you want to delete again and click Delete in the action pane.
9. Click Yes to confirm. The cache server is no longer assigned to the logical archive.


Part 3
Maintenance

Chapter 14

Handling Storage Volumes


This chapter describes tasks that are relevant for optical storage volumes as well as
for storage systems: Finalization, export and import, consistency checks. If you
archive documents with retention periods, you also have to check for correct
deletion of the documents and clear volumes whose documents are deleted
completely.

14.1 Finalizing Storage Volumes


Finalization is relevant for volumes in IXW pools. The basic idea of IXW volume
finalization is to distill a file system according to ISO 9660 from the IXW file system
information and to write this structure permanently onto the medium. Thus it will
act as an ISO 9660 medium like CD and DVD and can be accessed using standard
software.
Inode and hash files
After the IXW volume is successfully converted to an ISO 9660 volume, the corresponding inodes are deleted from the inode and hash files. So the size of the inode and hash files can be kept small while providing fast access to the volume. If you plan to use finalization consistently from the beginning, you may configure smaller inode and hash files at installation time. It is not possible to reduce the size of inode and hash files at a later time except by re-importing all volumes.

Export and import
Regarding export and import, finalized volumes are handled like other ISO 9660 volumes. No export from and time-consuming import to the IXW file system information is required.

Flags
Finalization is implemented as a utility that can be started either automatically or manually. Once a volume has been finalized successfully, it is marked as finalized (see Checking the Finalization Status on page 187).

Backups
Backup volumes should be finalized when the corresponding original volume is finalized and the backup is completed. Therefore, finalization is included in the backup jobs. If a backup job recognizes that the original volume is finalized, it performs the backup as usual. When done, it calls the finalization program for the backup medium. The High Sierra name of the volume is not changed. It is not possible to finalize backup volumes manually.

14.1.1 Automatic Finalization of IXW Volumes


IXW volumes are automatically finalized if you activate the Auto Finalization
option in the pool configuration. The Finalize Partition utility is started when the
Write job has finished. It looks for volumes meeting the given conditions and, if
found, finalizes them.


You can enable automatic finalization and set the conditions either when creating
the pool or at a later time.
See also:

Manually Finalizing IXW Volumes on page 186

14.1.2 Manually Finalizing IXW Volumes

To finalize IXW volumes manually, the Finalize Volume utility is used.
Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.
2. Select the original archive with the IXW pool the volume is assigned to.
3. Select the designated IXW pool in the top area and the volume to be finalized in the bottom area of the result pane.
4. Click Finalize Volume in the action pane.
5. Click OK.
   A protocol window shows the progress and the result of the finalization. To check the protocol later on, see Checking Utilities Protocols on page 222.
   To check the volume status, see Checking the Finalization Status on page 187.

See also:
Checking Utilities Protocols on page 222
Checking the Finalization Status on page 187
Automatic Finalization of IXW Volumes on page 185
Manually Finalizing IXW Pools on page 186

14.1.3 Manually Finalizing IXW Pools

You can also finalize all volumes of an IXW pool at once. In particular, this is required if you did not use finalization so far.
Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.
2. Select the original archive with the IXW pool that should be finalized.
3. Select the designated IXW pool in the top area of the result pane.
4. Click Finalize Pool in the action pane.


5. Enter settings:

   Last write access
   Defines the number of days since the last write access.

   Filling level of volume
   Defines the filling level in percent at which an IXW volume should be finalized. For IXW volumes, the Storage Manager automatically calculates and reserves the storage space required for the ISO file system. The filling level therefore refers to the space remaining on the IXW volume.

6. Click OK.
   A protocol window shows the progress and the result of the finalization. To check the protocol later on, see Checking Utilities Protocols on page 222.
   To check the status of the volumes, see Checking the Finalization Status on page 187.

See also:
Checking Utilities Protocols on page 222
Checking the Finalization Status on page 187
Manually Finalizing IXW Volumes on page 186
Automatic Finalization of IXW Volumes on page 185

14.1.4 Checking the Finalization Status

The finalization status of a volume can be checked to ensure successful finalization.
Proceed as follows:

1. Select Devices in the Infrastructure object in the console tree. All available devices are listed in the top area of the result pane.
2. Select the designated jukebox device. The attached volumes are listed in the bottom area of the result pane.
3. Check the entry in the Final State column of the finalized volume(s); it must be fin. The entry in the File System column of the volume must be ISO.

See also:
Setting the Finalization Status Manually on page 188
Manually Finalizing IXW Volumes on page 186
Automatic Finalization of IXW Volumes on page 185


14.1.5 Setting the Finalization Status Manually

If finalization is interrupted for whatever reason, you can restart it as often as you want. If finalization has failed, the final state of the volume is set to fin_ro (see Checking the Finalization Status on page 187). If finalization has failed several times and you no longer want to repeat it, you can set the error status for that volume to fin_err to indicate that the volume cannot be finalized. This error status cannot be removed later.
Proceed as follows:

1. Select Devices in the Infrastructure object in the console tree. All available devices are listed in the top area of the result pane.
2. Select the designated device. The attached volumes are listed in the bottom area of the result pane.
3. Select the volume for which to set the finalization status.
4. Click Set Finalization Status in the action pane.
5. Click OK.
   The Final state of the volume is set to fin_err.

Note: The failure of the finalization does not affect the security of the data on the medium!

See also:
Checking Utilities Protocols on page 222
Checking the Finalization Status on page 187
Manually Finalizing IXW Volumes on page 186
Automatic Finalization of IXW Volumes on page 185

14.2 When the Retention Period Has Expired


If documents have been archived with retention periods, the leading application can
delete these documents when the retention period has expired. The deletion of
documents and resulting empty volumes depends on the pool type and storage
medium. For general information on retention, see Retention on page 65. In this
section, you find the details of deletion behavior and the tasks to keep your archive
system well organized.


Document deletion
When the leading application sends the delete request for a document, the archive system works as follows (a conceptual sketch of the single-file flow follows this overview):

Single files (from HDSK, FS, VI pools)

1. Archive and Storage Services deletes the index information of the document from the archive database. The document can no longer be retrieved; the document is logically deleted. (Deletion of components works differently: if the storage system cannot delete a component physically, the component remains; it is not deleted logically.)
2. Archive and Storage Services propagates the delete request to the storage system.
3. The storage system deletes the document physically and the client gets a success message. Not all storage systems release the free space after deletion for new documents (see the documentation for your storage system). If deletion is not possible for technical reasons, the information with the storage location of the document is written into the TO_BE_DELETED.log file. The administrator can configure a notification.

   Note: If the state of an FS volume (NetApp or NASFiler) is set to write locked, components will not be removed from this volume when one tries to delete them from Document Service. The case is handled as if the removal was prevented by the hardware (entry in TO_BE_DELETED.log, notification, additional delete from the archive database if the request was a docDelete).

Container files (from ISO, IXW pools, blobs)

1. Archive and Storage Services deletes the index information of the document from the archive database. The document can no longer be retrieved.
2. The delete request is not propagated to the storage system and the content remains in the storage. Only logically empty volumes can be removed in a separate step.

Note on IXW pools
Volumes of IXW pools are regarded as container files. Although the documents are written as single files to the medium, they cannot be deleted individually, neither from finalized volumes (which are ISO volumes) nor from non-finalized volumes using the IXW file system information.
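The single-file deletion flow above can be summarized in a short conceptual sketch. The following Python fragment is purely illustrative: the function and the collaborating objects (archive_db, storage_system, volume, notifier) are hypothetical and do not correspond to an API of Archive and Storage Services; only the file name TO_BE_DELETED.log is taken from the description above.

```python
# Conceptual sketch of the deletion flow for single-file pools (HDSK, FS, VI).
# All names are hypothetical illustrations of the behavior described above.

def delete_single_file_document(doc_id, archive_db, storage_system, volume, notifier):
    # Step 1: remove the index information; the document is now logically
    # deleted and can no longer be retrieved.
    archive_db.delete_index(doc_id)

    # Steps 2 and 3: propagate the delete request to the storage system.
    if volume.write_locked or not storage_system.try_delete(doc_id):
        # Physical deletion was prevented (write-locked FS volume or a technical
        # restriction): record the storage location and notify the administrator.
        with open("TO_BE_DELETED.log", "a") as log:
            log.write(storage_system.location_of(doc_id) + "\n")
        notifier.send("Document %s could not be deleted physically" % doc_id)
        return "logically deleted only"
    return "deleted"
```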

Delete empty partitions
If documents with retention periods are stored in container files, the container volume gets the retention period of the document with the longest retention. The retention period of the volume is propagated to the storage subsystem if possible. The volume and the content of all its documents can be deleted only if all documents are deleted from the archive database. The volume is purged by the Delete_Empty_Volumes job. It checks for logically empty volumes meeting the conditions defined in Runtime and Core Services > Configuration > Archive Server:
AS.ADMS.JOBS.DEL_VOL_NOT_MODIFIED_SINCE_DAYS
AS.ADMS.JOBS.DEL_VOL_AT_LEAST_FULL
and deletes these volumes automatically. IXW volumes are only considered if they are physically full at the given level and logically empty. You can schedule the job and run it automatically, or use the List Empty Volumes/Images utility to display the empty volumes first and then start the deletion job manually (see Checking for Empty Volumes and Deleting Them Manually on page 190).
Important
To ensure correct deletion, you must synchronize the clocks of the archive
server and the storage subsystem, including the devices for replication.
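How the two configuration parameters work together can be illustrated with a minimal sketch. The parameter names below come from the configuration described above; the example values, the volume object and its attributes are hypothetical.

```python
# Illustrative sketch of the volume selection done by the Delete_Empty_Volumes job.
# Parameter names come from Runtime and Core Services > Configuration > Archive Server;
# the values shown here and the volume attributes are hypothetical examples.

DEL_VOL_NOT_MODIFIED_SINCE_DAYS = 30  # AS.ADMS.JOBS.DEL_VOL_NOT_MODIFIED_SINCE_DAYS
DEL_VOL_AT_LEAST_FULL = 90            # AS.ADMS.JOBS.DEL_VOL_AT_LEAST_FULL (percent)

def is_deletable(volume):
    """A volume qualifies only if it is logically empty (all documents deleted
    from the archive database), has not been modified for the configured number
    of days and, for IXW volumes, is physically full at the configured level."""
    if not volume.logically_empty:
        return False
    if volume.days_since_last_modification < DEL_VOL_NOT_MODIFIED_SINCE_DAYS:
        return False
    if volume.pool_type == "IXW" and volume.filling_level_percent < DEL_VOL_AT_LEAST_FULL:
        return False
    return True
```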
Summary
The following table provides an overview of the deletion behavior (columns: Storage mode, Pool type, Delete from archive DB, Delete content physically, Destroy content):

Single file storage
HDSK
   Destroy content: x (Destroy unrecoverable)
FS and VI

Container file storage
ISO, IXW on optical media
   Delete content physically: Delete volume, when the last document is deleted: Delete_Empty_Volumes job
   Destroy content: x (destroy media)
ISO on storage system
   Delete content physically: Delete volume, when the last document is deleted: Delete_Empty_Volumes job

Notes:

Not all storage systems release the space of the deleted volumes (see
documentation for your storage system).

Blobs are handled like container file archiving.

14.2.1 Checking for Empty Volumes and Deleting Them Manually

If you want to check for empty volumes before you delete them, use the List Empty Volumes/Images utility. It displays a list of volumes that are logically empty.
Proceed as follows:

1. Select Original Archives in the Archives object in the console tree.
2. Click List Empty Volumes in the action pane. A window to start the utility opens.
3. Enter settings:

   Not modified since xx days
   Number of days since the last modification. The parameter prevents that a volume or image can be deleted very soon after the last document has been deleted.

   More than xx percent full
   Only relevant for non-finalized IXW volumes. The parameter ensures that the volume is filled with data at the given percentage (but logically, it is empty).

4. Click Run and check the resulting list.
5. To delete volumes, start the Delete_Empty_Volumes job manually.
   Before you start the job, check the settings which specify the volumes that should be deleted. They are configured in Runtime and Core Services > Configuration > Archive Server:
   AS.ADMS.JOBS.DEL_VOL_NOT_MODIFIED_SINCE_DAYS
   AS.ADMS.JOBS.DEL_VOL_AT_LEAST_FULL
   Avoid settings that allow new, empty volumes to be deleted.
   Select Jobs in the System object in the console tree.
6. Select the Delete_Empty_Volumes job and click Start in the action pane.
7. If you work with optical media, proceed as described in step 2 in Deleting Empty Volumes Automatically on page 191.

14.2.2 Deleting Empty Volumes Automatically

If you want to delete empty volumes automatically, proceed as follows:

1. Select Jobs in the System object in the console tree.
   Schedule and enable the Delete_Empty_Volumes job, see also Creating and Modifying Jobs on page 87 and Enabling and Disabling Jobs on page 86.
2. If you work with optical media:

   a. Select Devices in the Infrastructure object in the console tree. In the Servers tab, open the Devices directory and check the jukeboxes for volumes with the name XXXX. These are the deleted volumes.

      Important
      On double-sided media, check that both volumes are deleted.

   b. Select the designated jukebox in the top area of the console tree. Check the volume list in the bottom area of the result pane for volumes with the name XXXX.
   c. Select the XXXX volume and click Eject Volume in the action pane.
   d. Destroy the medium physically.

14.3 Exporting Volumes

A medium can be exported when the stored documents are no longer accessed. Use export if:
The volume is defective.
The volume contains data that is no longer needed.

During export, the entries about documents and their components on the volume are deleted from the archive database. The volume gets the internal status exported and is treated as nonexistent. After that, you remove the optical medium together with its local backups from the jukebox. The database entries can be restored by importing the volume.
For IXW media (WORM or UDO), consider the finalization status. When non-finalized IXW volumes are exported, the document information is deleted from the database but the file system information (inode and hash files) is not updated. Therefore, we recommend finalizing IXW volumes before export.

Important
Each side of a double-sided optical medium (WORM, UDO or DVD) constitutes a volume. Export both volumes before you remove the medium from the jukebox.
Do not use the Export utility for volumes belonging to archives that are configured for single instance archiving (SIA). A SIA reference to a document may be created long after the document itself has been stored; the reference is stored on a newer medium than the document. SIA documents can be exported only when all references are outdated, but the Export utility does not analyze references to the documents.
Volumes containing at least one document with non-expired retention are not exported.

Proceed as follows:

1. If the optical medium is not in the jukebox, insert it.
2. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.


3. Select the Export Volume(s) utility.
4. Click Run in the action pane.
5. Enter the export parameters:

   Volume name(s)
   Name of the volume(s) to be exported. You can use wildcards to export multiple volumes at the same time.

   Export from database
   Enable this option when you export a defective volume. It causes the database to be searched for entries for this volume, and the entries relating to the contents of the volume are deleted. The volume itself is not accessed.
   If this option is disabled, the command searches the volume directly and deletes the associated entries from the database. Intact volumes that are no longer needed are exported in this way. The volume must be in the jukebox.

6. Click Run. A protocol window shows the progress and the result of the export. The export process may take some time.
7. If the medium is a double-sided optical one, export the second volume in the same way.
8. Remove the optical medium from the jukebox with Eject.
   Details: Removing Optical Media from Jukebox on page 206
   Volumes on storage systems can be deleted by means of the storage system administration if provided.

See also:

Utilities on page 221

Checking Utilities Protocols on page 222

14.4 Importing Volumes

When a volume is imported, the entries in the archive database are restored from the information that is stored on the volume.
The file system information that is needed for non-finalized IXW volumes is updated automatically when the IXW medium is inserted. For each pool type, an import utility is provided. Import a volume if:
it was exported by mistake,
it is moved to another archive server.

Note: To import ArchiSig documents with timestamps, the ArchiSig-Archive must be imported first to avoid problems.


14.4.1 Importing ISO Volumes

The utility imports ISO volumes. After import, you must attach the volume to the correct pool manually.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Import ISO Volume utility in the result pane and click Run in the action pane.
3. Enter settings:

   Volume name
   Name of the volume(s) to be imported.

   STORM server
   Name of the STORM server by which the imported volume is managed.

   Backup
   The volume is imported as a backup volume and entered in the list of volumes as a backup type. Not available for ISO volumes.

   Arguments
   Additional arguments. Not required for normal import, only for special tasks like moving documents to another logical archive. Contact Open Text Customer Support.

4. Click Run.
   The import process may take some time. A message box shows the progress of the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222

14.4.2 Importing Finalized and Non-finalized IXW Volumes

The utility imports finalized and non-finalized IXW volumes. After import, you must attach the volume to the correct pool manually.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Import IXW Or Finalized Volume(s) utility in the result pane and click Run in the action pane.
3. Enter settings:

   Volume name(s)
   Name of the volume(s) to be imported.

   STORM server
   Name of the STORM server by which the imported volume is managed.

   Import original volumes
   The volumes are imported as original volumes.

   Import backup partitions (for use in replicate archives only!)
   The volumes are imported as backup volumes and entered in the list of volumes as backup type.

   Set read-only flag after import
   The volume is imported as a write-protected volume.

   Arguments
   Additional arguments. Not required for normal import, only for special tasks like moving documents to another logical archive. Contact Open Text Customer Support.

4. Click Run.
   The import process may take some time. A message box shows the progress of the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222


14.4.3 Lost&Found for IXW Volumes

During import, it is possible to display the parts of a corrupt IXW medium that are still readable in a separate subfolder. The medium is write protected and a backup of the medium is not possible. Migrate the data to a new medium (see Migration on page 225) and destroy the damaged medium or send it to IXOS for analysis. Do not finalize these media.

14.4.4 Importing Hard Disk Volumes

The utility imports hard disk volumes for use in HDSK and FS pools. After import, you must attach the volume to the correct pool manually.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Import HD Volume utility in the result pane and click Run in the action pane.
3. Enter settings:

   Volume name
   Name of the hard disk volume to be imported.

   Base directory
   Mount path of the volume.

   Backup
   The volume is imported as a backup volume and entered in the list of volumes as a backup type.

   Read-only
   The volume is imported as a write-protected volume.

   Arguments
   Additional arguments. Not required for normal import, only for special tasks like moving documents to another logical archive. Contact Open Text Customer Support.

4. Click Run.
   The import process may take some time. A message box shows the progress of the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the FS or HDSK pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.



See also:

Utilities on page 221

Checking Utilities Protocols on page 222

14.4.5 Importing GS Volumes for Single File (VI) Pool

The utility imports GS volumes for use in Single File (VI) pools. After import, you attach the volume to the correct pool manually.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Import GS Volume utility in the result pane and click Run in the action pane.
3. Enter settings:

   Volume name
   Name of the volume to be imported.

   Base directory
   Mount path of the volume.

   Read-only
   The volume is imported as a write-protected volume.

   Arguments
   Additional arguments. Not required for normal import, only for special tasks like moving documents to another logical archive. Contact Open Text Customer Support.

4. Click Run.
   The import process may take some time. A message box shows the progress of the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the VI pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the VI pool.

See also:

Utilities on page 221


Checking Utilities Protocols on page 222

14.5 Consistency Checks for Storage Volumes and Documents

The Archive Administration provides utilities for various checks and comparisons:
Consistency checks of volumes and database
Checking and counting documents and components
Checking volumes
Comparison of backup and original IXW volumes

You can start the utilities in the System object in the console tree. When a utility is started, a message window shows the progress of the utility.

14.5.1 Checking Database Against Volume

The Check Database Against Volume utility determines whether the documents and components that are known to the database are actually stored on the volume. It detects documents missing on the storage volume. Use the utility:
after restoring an original volume from the backup, in particular after restoring IXW volumes,
if you suspect that a storage medium or volume is damaged.

The volume to be checked must be online. You can either just check the volume or try to repair inconsistencies.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Check Database Against Volume utility.
3. Click Run in the action pane.
4. Type the volume name and specify how inconsistencies are to be handled.

   Volume
   Name of the volume that is to be checked.

   copy document/component from other partition
   The utility attempts to find the missing component on another volume. If the component is found, it is copied to the checked volume. If not, the component entry is deleted from the database, i.e. the component is exported.

   export component
   The database entry for the missing component on the checked volume is deleted.

   Repair, if needed
   Check this box if you really want to repair the inconsistencies.
   If the option is deactivated, the test is performed and the result is displayed. Nothing is copied and no changes are made to the database.

   Important
   Use this repair option only if you are sure that you do not need the missing documents any longer! You may lose references to document components that are still stored somewhere in the archive. If in doubt, contact Open Text Customer Support.

5. Click Run.
   A protocol window shows the progress and the result of the check.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222

14.5.2 Checking Volume Against Database

The Check Volume Against Database utility checks whether all the documents and components on the volume are entered in the database. It detects lost document references in the database. Use the utility:
for database recovery,
if you suspect problems with the database contents.

The volume to be checked must be online. You can either just check the volume or try to repair inconsistencies.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Check Volume Against Database utility.
3. Click Run in the action pane.
4. Type the volume name and specify how documents missing in the database are to be handled.

   Volume
   Name of the volume that is to be checked.

   Import documents if they are not in the database
   Missing document or component entries are imported into the database.

5. Click Run.
   A protocol window shows the progress and the result of the check.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222

14.5.3 Checking a Document

The Check Document utility checks whether a document is correctly stored on the medium as known by the database. Use it to analyze trouble with document access. You can run just the test or have the document repaired at the same time. The medium containing the document must be online.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Check Document utility.
3. Click Run in the action pane.
4. Enter the document ID and the type, and select whether the document should be repaired.

   DocID
   Type the document ID according to the Type setting.
   You can determine the string form of the document ID by searching for the document in the application (e.g. by document type and object type) and displaying the document information in the Windows Viewer or in the Java Viewer.

   Type
   Select the type of the document ID. The ID can be entered in numerical (Number) or string (String) form.

   Repair document, if needed
   Check this box if you want to repair defective documents. The utility attempts to copy the document from another volume. If this option is deactivated, the utility simply performs the test and displays the result.

   Important
   Use this repair option only if you are sure that you do not need the missing documents any longer! You may lose references to document components that are still stored somewhere in the archive. If in doubt, contact Open Text Customer Support.

5. Click Run.
   A protocol window shows the progress and the result of the check.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222

14.5.4 Counting Documents and Components in a Volume

The Count Documents/Components utility determines the number of components and the number of documents on the volume.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Count Documents/Components utility.
3. Click Run in the action pane.
4. Enter the name of the volume.
5. Click Run.
   A protocol window shows the progress and the result of the counting.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222

14.5.5 Checking a Volume

The Check Volume utility checks a volume without accessing the information in the database. It checks whether all documents have a consistent structure, whether there are any damaged documents on the volume, whether every document has at least one component and whether the file ATTRIB.ATR is in order. Use it when you suspect any problem with a storage medium. The medium must be online and is only tested; no repair option is available.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Check Volume utility.
3. Click Run in the action pane.
4. Enter the name of the volume.
5. Click Run.
   A protocol window shows the progress and the result of the check.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222

14.5.6 Comparing Backup and Original IXW Volume

The Compare Backup WORMs utility compares one or more backup IXW volumes with the corresponding originals and detects corrupt IXW backups. The original and the backup volume must be online. The volumes are only tested; no repair option is available.
Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the Compare Backup WORMs utility.
3. Click Run in the action pane.
4. Enter the backup volume to be compared. You can specify multiple volumes separated by spaces. You can also use the * character as a wildcard.
5. Click Run.
   A protocol window shows the progress and the result of the comparison.

See also:
Utilities on page 221
Checking Utilities Protocols on page 222


14.6 Backup for Storage Systems

Data is archived on a storage system if you use one of the following pools: Single File (FS), Single File (GS) and ISO (with media type HD-WO). The backup and recovery scenario depends on the storage system in use. The development of this scenario is a complex and individual task; therefore contact Open Text Global Services for support, and refer to the documentation of your storage system (see Open Text Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)). This chapter describes only the general aspects.
Basically, you can back up archived data by means of the storage system or by means of the Archive Administration (local backup, Remote Standby). Some scenarios can be restricted to one of these ways. The backup medium should be of the same type as the original medium. In some scenarios, backup to optical media is also possible. For detailed information, see the Hardware Release Notes (Open Text Knowledge Center (https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).

Backup of ISO volumes on HD-WO
These volumes are managed in virtual jukeboxes. The backup on the archive server side is similar to the backup of optical ISO volumes, see Backup of ISO Volumes on page 207. Unlike optical media, the storage media of a storage system cannot be removed and stored in another place, so a backup system is required, and the backup must be written by one of the backup jobs. The pool configuration for the backup jobs is:

Number of Partitions
Number of Backups
Backup Jukebox: Must be different from Original Jukebox
Backup: On for Local_Backup job


Chapter 15

Finalizing and Backing Up of Optical Media


The administrator's tasks in connection with optical storage media differ from the tasks related to hard disk-based storage systems. The administrator inserts empty optical media into the jukebox and manages written media that are no longer accessed. Empty WORM and UDO (IXW) media also require initialization; full IXW media can be finalized.

15.1 Managing Written Optical Media


15.1.1 Newly Written ISO Media
Check regularly to see whether any new optical ISO media have been written. You
can configure notification and assign the event filter ISO volume has been written
to it (see Creating and Modifying Notifications on page 269). Newly written ISO
media must be labeled and the backups stored in a safe place. The frequency of this
operation will depend on the amount of data that needs to be archived.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the ISO jukebox in the top area of the result pane.
3. Check whether new ISO media have been added to the list in the bottom area of
   the result pane. You can click the column title Name to sort by names. The ISO
   volumes in each pool are numbered sequentially.
4. Select the new ISO volume and click Eject Volume in the action pane.
5. Label the ISO medium.
   Do not use solvent-based pens or stickers. Never use a ballpoint pen or any
   other sharp object to label your discs. The safest area for a label is within the
   center stacking ring. If you use adhesive labels, make sure that they are attached
   accurately and smoothly.
6. Remove and label all the new ISO media in this way.
7. Re-insert one of each set of identically named ISO media. To do this, select the
   ISO jukebox in the top area of the result pane and click Insert Volume in the
   action pane.
8. Remove all defective ISO media with the name --bad--. Label these as
   defective. They must not be re-used.
9. Store the backup ISO media in a safe place.
Note: Perform these tasks also for the jukeboxes of the remote standby server.

15.1.2 Removing Optical Media from Jukebox

An optical medium is removed when the capacity of the jukebox is insufficient but
the documents are still expected to be accessed. The medium is removed from the
jukebox but the entries in the database are retained. In this way, the medium can be
made available on demand very quickly.
Note: Each side of a medium (WORM, UDO or DVD-R) constitutes a volume, and
neither volume is available when the medium has been removed from the jukebox.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox from which you want to remove a volume in the top area of
   the result pane.
3. Select the volume in the bottom area of the result pane and click Eject Volume
   in the action pane.
4. Remove the backup volume in the same way.
The status of removed volumes is set to offline.

15.2 Backup and Recovery of Optical Media

ISO and IXW media provide a high level of data security. Nevertheless, physical
faults may occur on optical media, so the risk of data loss cannot be excluded
completely. Data is normally backed up on the archive server on a regular basis by
the corresponding jobs automatically. As administrator, you only need to back up a
single volume explicitly in exceptional circumstances and in case of errors.
The jobs are set up on installation. You can modify them whenever modifications
are made to the backup strategy. To ensure data security, you have to check that the
backup jobs are performed successfully every day (see Checking the Execution of
Jobs on page 88).
You define your backup strategy during installation in cooperation with Open Text
Global Services. Nevertheless, there are some basic principles that apply to all
backup strategies:

Data must always be stored simultaneously on at least two media. This also
includes the mirroring of the disk buffer.

The original and backup optical media must possess identical capacities and
sector sizes.

Regarding optical media, backup media must have the same name as the original.
Make sure that the identification of backups is clear on volume labels.

Important
You can also use a Remote Standby Server for backing up data. For details,
refer to Configuring Remote Standby Scenarios on page 161.

15.2.1 Optical ISO Media

Immediately after recording, the ISO medium is automatically checked to see
whether the data was written completely and whether it is readable. If this is not the
case, a new ISO medium is recorded, also automatically. This ensures that the
required number of correct ISO media for the corresponding archive is available
after successful completion of the ISO write job. As a rule, two or three identical
ISO media are produced, both on the original server and on the Remote Standby
Server.
Notes:

Remove the backup media from the jukebox and store them in a safe place
(see Handling Storage Volumes on page 185).

For supported optical ISO media, see the Hardware Release Notes (Open
Text Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).

The backup of ISO volumes on HD-WO media (storage systems) is
described in Backup for Storage Systems on page 203.

15.2.1.1 Backup of ISO Volumes

There are different methods to back up an ISO medium: by the Write job of the pool
(see Creating and Modifying Pools on page 74) or by one of the backup jobs.
Depending on the amount of archived data and the overall job scheduling, you can
decide on one method or combine these methods. The following settings are
required for each method:

Backup and original media are written by the Write job in the same jukebox
   Pool configuration: Number of Backups: n>1
   Job configuration: Schedule the Write job

Backup media in all pools are written by the backup job, in the same or a different jukebox
   Pool configuration: Number of Backups: n>0, select a Backup Jukebox; Backup: On
   Job configuration: Schedule the Local_Backup job

Backup media in one pool are written by the backup job, in the same or a different jukebox
   Pool configuration: Number of Backups: n>0, select a Backup Jukebox
   Job configuration: Create and schedule a backup_pool job with Argument = pool name

Notes:

The Local_Backup job considers all pools for which the Backup option is
set. The backup_pool job considers only the pool for which it is created.
You can schedule additional backups of a pool by configuring both jobs, or
configure the pool backup separately.

If problems occur, have a look at the protocol of the relevant job (see
Checking the Execution of Jobs on page 88).

15.2.1.2 Recovering of ISO Volumes

Keep the backup ISO volume in a safe place. If the original or backup optical
medium is damaged and no additional backup exists, it is necessary to create a new
backup medium manually. This is done with the Backup Volume utility.
The Backup option has to be activated for the ISO pool (see Creating and
Modifying Pools on page 74).
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where the damaged volume is located in the top area of the
   result pane.
3. Select the damaged volume in the bottom area of the result pane and click Eject
   Volume in the action pane.
4. Insert the backup copy in the jukebox and click Insert Volume in the action
   pane. It is now used as the original ISO volume without any further
   configuration.
5. Select Original Archives in the Archives object in the console tree.
6. Select the original archive in which the volume is used.
7. Select the pool in the top area and the volume in the bottom area of the result
   pane.
8. Click Backup Volume in the action pane.
9. Click OK to start the backup.
   A protocol window shows the progress and the result of the backup. To check
   the protocol later on, see Checking Utilities Protocols on page 222.
   The volume list now contains a volume of the backup type with the same name
   as the original volume.
10. Check the columns Unsaved (MB) and Last Backup/Replication:
   The Unsaved (MB) column should now be blank, indicating that there is no
   more data on the original volume that has not been backed up. The Last
   Backup/Replication column shows the date and time of the last backup. The
   Host column indicates the server where the backup resides.

15.2.2 IXW Volumes

As IXW media are written incrementally, backup and recovery differ slightly from
those of ISO media.
Unlike backup ISO media, which can be removed from the jukebox immediately after
they have been created, backup IXW media must reside in the jukebox as long as
their original counterpart is being written, because the IXW backup is incrementally
synchronized with the original. As soon as the original has been filled completely,
its backup has been synchronized a last time and both media are finalized, the
backup can be removed and stored in a safe place (see Handling Storage Volumes
on page 185).

15.2.2.1 Backup of IXW Volumes

There are different ways to back up an IXW volume.
In contrast to ISO volumes, IXW backup volumes have to be initialized before
the backup. This can be done either automatically or manually.
Automatic backup
Normally the backup of IXW volumes is done asynchronously by the Local_Backup
job.
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the designated archive in the console tree.
3. Select the designated pool in the top area of the result pane and click Properties
   (see Write Incremental (IXW) Pool Settings on page 78).
4. Check the Backup option.
5. Set the value for Number of Backups to n>0 and select the required Backup
   Jukebox.
6. Check the option Auto Initialization for a completely automatic backup.
7. Schedule the Local_Backup job according to your needs (see Setting the Start
   Mode and Scheduling of Jobs on page 87).
   According to the scheduling, the Local_Backup job updates the oldest backup
   volume. The job writes only one backup volume per instance.
   Note: If problems occur, have a look at the protocol of the Local_Backup
   job (see Checking the Execution of Jobs on page 88).
Semi-automatic backup
With this method, you initialize the original and backup volumes manually in
the corresponding jukebox devices. The backup volume must have the same
name as the original one. To initialize the volume, proceed as described in
Manual Initialization of Original Volumes on page 61. The configuration
procedure is the same as for automatic backup except for steps 5 and 6, which
here are: no Auto Initialization, no Number of Backups and no Backup
Jukebox selection. The backup job finds the backup volumes by their names.

Manual backup of one volume

If the original or backup medium is damaged, it is necessary to create a new backup
medium manually. If the damaged medium is a double-sided one, initialize and
back up both sides of the medium.
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where you inserted the media in the top area of the result
   pane.
3. Select a volume with the -blank- status in the bottom area of the result pane.
4. Click Initialize Backup in the action pane. The Init Backup Volume window
   opens.
5. Select the original volume and click OK to initialize the backup volume.
6. For double-sided media, initialize the second side of the medium in the same
   way.
7. Select Original Archives in the Archives object in the console tree.
8. Select the original archive in which the volume is used.
9. Select the pool in the top area and the original volume in the bottom area of the
   result pane.
10. Click Backup Volume in the action pane.
11. Click OK to start the backup.
   A protocol window shows the progress and the result of the backup. To check
   the protocol later on, see Checking Utilities Protocols on page 222.
   The volume list now contains a volume of the backup type with the same name
   as the original volume.
12. Check the columns Unsaved (MB) and Last Backup/Replication:
   The Unsaved (MB) column should now be blank, indicating that there is no
   more data on the original volume that has not been backed up. The Last
   Backup/Replication column shows the date and time of the last backup. The
   Host column indicates the server where the backup resides.
13. For double-sided media, back up the second side of the medium in the same
   way.

15.2.2.2 Restoring of IXW Volumes

It is necessary to restore a volume whenever an IXW medium is defective. A defect
is normally noticed when data is written to the IXW medium: the job writing the
data to the IXW medium cannot run successfully. To detect such a problem in time,
you have to check the execution of the backup and write jobs every day (see
Checking the Execution of Jobs on page 88).
Note: There are additional recovery scenarios if you use a Remote Standby
Server (see Configuring Remote Standby Scenarios on page 161).
Generally, a defective IXW medium can still be read, so we recommend trying to
complete the backup before performing the actual restore process (see Backup of
IXW Volumes on page 209).
Proceed as follows:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox where the damaged volume is located in the top area of the
   result pane.
3. Select the damaged volume in the bottom area of the result pane and click Eject
   Volume in the action pane. Label it clearly as defective.
4. Select the backup volume of the damaged volume in the bottom area of the
   result pane.
5. Click Restore Volume in the action pane. This makes the backup volume
   available as original. If a volume has already been written to the second side of
   the defective IXW medium, restore it in exactly the same way.
6. Create a new backup volume (see Manual backup of one volume on
   page 210).
Note: If an IXW backup volume is damaged, remove the medium with Eject
and create a new backup volume (see Manual backup of one volume on
page 210).

Chapter 16

Backups and Recovery

The backup concept used by Archive and Storage Services ensures that documents
are protected against data loss throughout their entire path to, through, and in the
archive server.

Figure 16-1: Backup-relevant areas

There are several parts that have to be protected against data loss:
Volumes
   All hard disk volumes that may hold the only instance of a document must be
   protected against data loss by RAID. Which volumes have to be protected is
   described in the Installation overview chapter of the installation guides for
   Archive and Storage Services.
Document Pipelines
   The Document Pipeline on the Enterprise Scan has to be protected against data
   loss. For details, refer to section 19 "Backing up" in Open Text Imaging
   Enterprise Scan - User and Administration Guide (CL-UES).
Database
   The database with the configuration for logical archives, pools, jobs and relations
   to other archive servers and leading applications has to be protected against data
   loss. The process depends on the type of database you are using (see Backup of
   the Database on page 214).
Optical media
   Optical storage media have to be protected against data loss. The process differs
   depending on whether you use ISO or IXW media (see Backup and Recovery of
   Optical Media on page 206).
Storage Manager configuration
   The IXW file system information and the configuration of the Storage Manager
   must be saved, see Backup and Restoring of the Storage Manager
   Configuration on page 216.
Data in storage systems
   Data that is archived on storage systems like HSM, NAS, CAS also needs a
   backup, either by means of the storage system or with archive server tools, see
   Backup for Storage Systems on page 203.
Cache Server
   If write back mode is enabled, the cache server locally stores newly created
   documents without saving them immediately to the destination. It is
   recommended to perform regular backups of the cache server data, see Backup
   and Recovery of a Cache Server on page 216.

16.1 Backup of the Database

All archived documents are administered in the archive server database. This
database contains information about the documents themselves as well as about the
storage locations of the documents and their components. It must be backed up in a
similar way as the archived documents.
To avoid data loss and extended downtimes, you, as system administrator, should
back up the database regularly and in full, and complement this full backup with a
daily backup of the log files. In general, the more backups are performed, the safer
the system is. Backups should be performed at times of low system load.
It is advisable to back up the archive database at the same time as the database of
the leading application, if possible.

Important
If you have installed BPM Server and/or Transactional Content Processing,
database backups are required for all databases of the system: Archive and
Storage Services, the Context Server and the User Management Server. Note
that the storage media contain no data of the Context Server database or the
User Management Server database, that is, you cannot restore these databases
by importing from media. The database backup procedures are very similar.

The database can be set up as an Oracle database or as an MS SQL Server database.
The procedure adopted for backups depends on which of these database systems is
used.
The database must be backed up at regular intervals. However, because its data
contents are constantly changing, all database operations are written to special files
(online and archived redo logs under Oracle, transaction logs for Microsoft SQL). As
a result, the database can always be restored in full on the basis of the backup and
these files.

Important
During the configuration phase of installation, you can either select default
values for the database configuration or configure all relevant values. To
make sure that this guide remains easy to follow, the default values are
used below. If you configured the database with non-default values, replace
these defaults with your values.

Changing the password of the database user
The login DBLOGIN and password DBPASSWORD of the database user are stored
encrypted in the setup file DBS.Setup. If you change the password of the database
user, you must also change it in the corresponding entry.
Proceed as follows:
1. Encrypt the new password with the command line tool enc:
   dsClient enc <decrypted password>
   The encrypted password is displayed, for example:
   G7F187E050632E85D
2. Copy the content - in this example G7F187E050632E85D - and paste it into the
   setup file.


16.1.1 Backing Up an Oracle Database

The following links provide information on how to back up and recover an Oracle 10.2
database with the Oracle utility rman:

http://download.oracle.com/docs/cd/B19306_01/server.102/b14196/backrest001.htm#sthref606

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14192/toc.htm

A lesson on using rman can be found in OTN (Oracle Technology Network):

http://www.oracle.com/technology/obe/10gr2_db_single/ha/ob/ob_otn.htm
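
The following is a minimal command sketch of a full online backup with rman, for
orientation only; it assumes that the database runs in ARCHIVELOG mode and that
rman is started on the database host with operating system authentication. Define
the actual backup strategy (backup destination, retention policy, catalog usage)
according to the Oracle documentation linked above and your backup concept.

> rman target /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
RMAN> LIST BACKUP SUMMARY;
RMAN> exit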

16.1.2 Backing Up MS SQL Server Databases

In SQL Server 2005 Books Online, see SQL Server Database Engine > Administering
the Database Engine > Backing Up and Restoring Databases.

16.2 Backup and Restoring of the Storage Manager Configuration

For backup and restoring of the Storage Manager configuration, see the STORM
Configuration Guide (Open Text Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166)).

16.3 Backup and Recovery of a Cache Server

Caution
If write back mode is enabled, the cache server locally stores newly
created documents without saving them immediately to the destination.
This means that highly critical data is held on the local disk of the
related archive server. For security reasons, Open Text strongly
recommends storing this data on a RAID system. In addition, perform
regular backups of the cache server data by including the relevant items
in your backup.

16.3.1 Backup of Cache Server Data

Note: A so-called maintenance mode is provided to allow a backup while the
write back cache of the cache server is enabled. If maintenance mode is
activated, the cache server still runs and handles requests, but no longer
accesses the local file system, so that backups can run without any conflicts.
The cache server then acts like a proxy and routes all requests directly to the
archive server. Operations with write back items are not possible during this
time.
To find out whether maintenance mode is active, start a command line and
enter
cscommand -c isOnline

or
cscommand -c getStatistics

The cache server installation includes a small utility (cscommand), which allows you
to activate or deactivate maintenance mode. The commands to activate and
deactivate maintenance mode may be called from any script or batch file. Usually
the commands are added to the script that controls your backup; see the sketch after
this procedure. You can find cscommand in the contentservice subdirectory of the
<Web configuration directory> (filestore).
Proceed as follows:
1. Run Copy_Back jobs (recommended).
2. Activate maintenance mode. Use:
   cscommand -c setOffline -u <username> -p <password>
3. Start your backup. Make sure that all relevant directories are included.
4. Deactivate maintenance mode. Use:
   cscommand -c setOnline -u <username> -p <password>
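
The following is a minimal sketch of a backup script that wraps these steps, assuming
a UNIX shell; backup_tool, the credentials and the directory paths are placeholders
and must be replaced by your backup tool, an administrative user and the directories
configured for your cache server. Only the cscommand calls are taken from this
guide; run the Copy_Back jobs beforehand as recommended in step 1.

#!/bin/sh
# Sketch: back up the cache server data in maintenance mode.
CSUSER=admin                 # placeholder: administrative user
CSPWD=secret                 # placeholder: password

# Activate maintenance mode so that the local file system is no longer accessed.
cscommand -c setOffline -u "$CSUSER" -p "$CSPWD" || exit 1

# Back up all relevant directories (cache volumes, write back volume,
# path of the database files); backup_tool and the paths are placeholders.
backup_tool /cache/volumes /cache/writeback /cache/db

# Deactivate maintenance mode again.
cscommand -c setOnline -u "$CSUSER" -p "$CSPWD"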

Directories to be backed up
Note: The directories used by Archive Cache Services are configured during
the installation.
Cache volumes
   One or more cache volumes to be used for write through caching. Not highly
   critical, but useful for reducing the time to rebuild cached data.
Write back volume
   One single cache volume to be used for write back caching. This volume
   contains the following subdirectories:
   dat
      Components are stored here.
   idx
      Per document, additional information is stored here, which contains all
      necessary information to reconstruct the data in case of a crash.
   log
      Special protocol files (one per day) are stored here. They contain the
      relevant information on when a document is transferred to and committed
      by the Document Service.
      Important: Protocol files are not deleted automatically. Ensure regular
      deletion of protocol files to avoid storage problems.
Path to store database files
   The absolute path to the volume where the cache server stores its metadata for
   the cached documents. Necessary for recovery.
16.3.2 Recovery of Cache Server Data

In principle, two different recovery scenarios are supported:

Complete loss of all volumes

Corrupt data or partial loss of data volumes

Recovery in case of complete loss of all volumes

This procedure recovers the cache server to the state of a previous backup. This
means that all data from the time span between the last backup and the crash is
lost. Documents that have already been transferred to the archive server are not
affected.
Proceed as follows:
1. Activate maintenance mode. Use
   cscommand -c setOffline -u <username> -p <password>
2. Copy your backup data to the correct location.
3. Activate the consistency check. Use
   cscommand -c checkVolume -u <username> -p <password>
4. Deactivate maintenance mode. Use
   cscommand -c setOnline -u <username> -p <password>

Recovery in case of corrupt data or partial loss of data

If successful, this procedure recovers the current state of the cache server.
Proceed as follows:
1. Activate maintenance mode. Use
   cscommand -c setOffline -u <username> -p <password>
2. If the write back volume is still available, rename the root directory of the write
   back volume (see step 5, <location of write back data>).
3. Copy your backup of the data to the correct location to replace the corrupt one.
   If you also have a partial loss of data volumes, copy the lost data from your
   backup to the correct location.
4. Activate the consistency check. Use
   cscommand -c checkVolume -u <username> -p <password>
5. Start the recovery of the data. Use
   cscommand -c recover <location of write back data> -u <username> -p <password>
   Important
   Each successfully recovered document is listed on the command line
   and removed from <location of write back data>. This means that
   the recover operation can only be processed once.
6. If you do not get any error messages, the renamed directory (<location of
   write back data>) can be deleted. Any data left in this subtree is no longer
   needed for operation.
   Important
   If you get error messages, do not delete any data. If you cannot fix the
   problem, contact Open Text Customer Support.
7. Deactivate maintenance mode. Use
   cscommand -c setOnline -u <username> -p <password>
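
For orientation, a minimal command sketch of the recovery in case of corrupt data,
assuming a UNIX shell; /cache/writeback and /backup/writeback are placeholder
paths for the write back volume and its backup, and the renamed directory
/cache/writeback.old corresponds to <location of write back data>. Only the
cscommand calls are taken from this guide.

> cscommand -c setOffline -u <username> -p <password>
> mv /cache/writeback /cache/writeback.old     (keep the old write back data for the recover step)
> cp -r /backup/writeback /cache/writeback     (restore the backup to the correct location)
> cscommand -c checkVolume -u <username> -p <password>
> cscommand -c recover /cache/writeback.old -u <username> -p <password>
> cscommand -c setOnline -u <username> -p <password>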


Chapter 17

Utilities
Utilities are tools that are started interactively by the administrator. The following
table provides an overview of all utilities that can be reached in Utilities in the
System object in the console tree. Cross references lead to detailed descriptions in
the relevant chapters. You also find a description of how to start utilities and how
to check the utility protocols in this chapter.
Some utilities are assigned directly to objects and can be reached in the action pane.
Protocols of these utilities can also be reached in Utilities in the System object in the
console tree.
Note: For some utilities, you need to enter the name of the STORM server. To
determine the name, select Devices in the Infrastructure object in the console
tree. The name of the STORM server is displayed in brackets behind the device
name; for example:
WORM(STORM1)

Table 17-1: Overview of utilities

Utility                                      Link
Analyze Security Settings                    Analyzing Security Settings on page 105
Check Database Against Volume                Checking Database Against Volume on page 198
Check Document                               Checking a Document on page 200
Check Volume                                 Checking a Volume on page 201
Check Volume Against Database                Checking Volume Against Database on page 199
Compare Backup WORMs                         Comparing Backup and Original IXW Volume on page 202
Count Documents/Components                   Counting Documents and Components in a Volume on page 201
Export Volume(s)                             Exporting Volumes on page 192
Import GS Volume                             Importing GS Volumes for Single File (VI) Pool on page 197
Import HD Volume                             Importing Hard Disk Volumes on page 196
Import ISO Volume(s)                         Importing ISO Volumes on page 194
Import IXW Or Finalized Volume(s)            Importing Finalized and Non-finalized IXW Volumes on page 194
View Installed Archive Server Patches        Viewing Installed Archive Server Patches on page 297
VolMig Cancel Migration Job                  Canceling a Migration Job on page 254
VolMig Continue Migration Job                Continuing a Migration Job on page 253
VolMig Fast Migration Of ISO Volume          Creating a Local Fast Migration Job for ISO Volumes on page 244
VolMig Fast Migration Of remote ISO Volume   Creating a Remote Fast Migration Job for ISO Volumes on page 245
VolMig Migrate Components On Volume          Creating a Local Migration Job on page 239
VolMig Migrate Remote Volumes                Creating a Remote Migration Job on page 242
VolMig Pause Migration Job                   Pausing a Migration Job on page 253
VolMig Renew Migration Job                   Renewing a Migration Job on page 254
VolMig Status                                Monitoring the Migration Progress on page 249

17.1 Starting Utilities

Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Utilities tab in the top area of the result pane. All available utilities are
   listed in the top area of the result pane.
3. Select the utility you want to start.
4. Click Run in the action pane.
5. Enter dedicated values.
6. Click Run to start the utility.
A window to monitor the results of the utility execution opens.

17.2 Checking Utilities Protocols

If you start a utility, a window opens to monitor the results. At the same time, a
protocol is created which can be checked later. You can check the results and
messages of a single utility, or you can check a protocol in the protocol list, where
all stored protocols are listed.

To check results and messages of a single utility
Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Utilities tab in the top area of the result pane. All available utilities are
   listed in the top area of the result pane.
3. Select the utility you want to check.
   The latest message of the utility is listed in the bottom area of the result pane.
4. Select the Results tab in the bottom area of the result pane to check whether the
   execution of the utility was successful,
   or
   select the Message tab in the bottom area of the result pane to check the
   messages created during execution of the utility.

To check utilities protocols
Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
3. Select the protocol you want to check.
   The messages created during the execution of the utility are listed in the bottom
   area of the result pane.

To clear protocols
Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
3. Click Clear Protocol in the action pane.
   All protocol entries are deleted.

To reread scripts
Utilities and jobs are read by Archive and Storage Services during the startup of the
server. If utilities or jobs are added or modified, they can be reread. This avoids a
restart of Archive and Storage Services.
Proceed as follows:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
3. Click Reread Scripts in the action pane.

Part 4
Migration

Chapter 18

About Migration
The very dynamic IT market makes it difficult to provide long-term archiving of
documents. Although currently known storage media have an expected lifetime of
up to 50 years, after such a long time there will be no devices left that can still read
these storage media. Therefore, it is recommended to migrate all data periodically
from old to new storage media. Open Text delivers a reliable, secure, comfortable
and efficient solution for this challenge: volume migration.
You handle volume migration with two components:

The volmig program, which runs permanently as a spawner service
controlling the actual migration procedure (= VolMig Server).

The vmclient program, which supplies an interface for other components that
need to interact with volume migration. See Volume Migration Utilities on
page 257.

18.1 Features of Volume Migration

The volume migration suite has been designed to make media migration easier.
These are the features of volume migration:

All kinds of storage systems are supported
   Migration of documents from ISO, IXW, HD or Single-File volumes to ISO, IXW
   or Single-File pools.

Remote migration
   Migration of documents from ISO or IXW volumes on a known server to the
   local server via a network connection.

Fast migration of ISO images
   Migration of entire ISO images. This allows fast migration but no filtering of
   components.

Remote fast migration of ISO images
   Migration of entire ISO images from a known server to the local server via a
   network connection. This allows fast migration but no filtering of components.

Filters
   Selection of documents within creation date ranges.

Compression, encryption
   Compression and/or encryption of documents before they are written to new
   media.

Retention
   Setting of a retention period for documents during the migration process.

Automatic Verification
   Verification of all migrated documents. A verification strategy can be defined for
   each volume, specifying the verification procedure. Timestamps or different
   checksums can be selected as well as a binary comparison.

18.2 Restrictions

The following restrictions apply to the volume migration features:

Remote single-file
   Remote migration is only possible for volumes that are handled by STORM and
   that can be mounted via NFS. Single-File volumes like HSM or HD volumes
   cannot be migrated from a remote archive server.

DBMS provider
   Remote migration is only possible if the remote archive server uses the same
   DBMS provider as the local archive server. For a cross-provider migration setup,
   contact Open Text Services.

Fast migration of ISO images
   It is not possible to filter components. Everything is copied, regardless of
   whether it is very new, very old or has been deleted logically.

Compression, encryption
   You cannot compress encrypted data. Decompression and decryption of
   documents is not supported by the migration suite.

Caution
   Replication and backup settings are not transferred to the target archive during
   migration. Therefore, the configuration for backup and replicated archives must
   be performed again for the migrated archive. See Configuring Remote Standby
   Scenarios on page 161 and Creating and Modifying Pools on page 74.

18.3 Migration to HDSK

The volume migration allows the migration of documents from optical media to
HDSKRO volumes, but not to HDSK volumes. This restriction was implemented
to avoid migration from safer optical volumes to less safe HDSK volumes, where
documents can be deleted easily.
To allow migration to HDSK volumes, this restriction can be switched off
temporarily.
Proceed as follows:
1. Open a command shell.
2. Execute the command vmclient setParam ALLOW_HDSK_TARGET 1.
3. Perform the volume migration as described in the following sections.
Note: The restriction to migrate to HDSK volumes can also be switched off
permanently. For that, change the entry in the <OT config AS>\VMIG.Setup file
as follows: ALLOW_HDSK_TARGET = on. Ensure that the entry is set back after
the migration.
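
A minimal command sketch of the temporary switch follows; that setting the
parameter back to 0 restores the default restriction is an assumption, not stated in
this guide, so verify it in your environment.

> vmclient setParam ALLOW_HDSK_TARGET 1
  (create and run the migration jobs as described in the following chapters)
> vmclient setParam ALLOW_HDSK_TARGET 0     (assumption: re-enables the restriction)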


Chapter 19

Setting Parameters of Volume Migration

Configuration and logging parameters of volume migration can be specified. All
other necessary settings are delivered by the archive system, e.g. the temporary
paths.

19.1 Setting Configuration Parameters of Volume Migration

Proceed as follows:
1. Select Configuration > Archive Server in the Runtime and Core Services object
   in the console tree.
2. Specify the configuration parameters for the volume migration:

Default hostname for the client to connect to
   AS.VMIG.VARS.SERVER_HOST
   Specifies the host to which the vmclient will connect via RPC.
   Default: localhost

Server Port for RPC requests
   AS.VMIG.VARS.SERVER_PORT
   Specifies the server port of the host for the vmclient.
   Default: 4038

Max MB of documents to copy in one run
   AS.VMIG.VARS.MEGABYTES_PER_NIGHT
   The volume migration is set to stand-by after the given amount of data has been
   ordered to be copied to the destination pool.
   Default: 10000 (~10 GB)

Protocol Directory
   AS.VMIG.VARS.PROTOCOL_DIRECTORY
   Defines the directory where the protocols of the volume migration are saved.
   Default: $ECM_LOG_DIR/migration

Warn after how many days if component not written
   AS.VMIG.VARS.MAX_DAYS_TO_COPY
   The volume migration restarts an unfinished migration automatically and sends
   a notification if any component is not successfully copied after the defined
   number of days. A value of -1 disables this feature.
   Default: 7 days

List all DocID/CompID tuples in job protocol
   AS.VMIG.VARS.DUMP_COMP_IDS
   Makes the volmig server copy the DocIDs and CompIDs for each component
   into the job protocol.
   Default: off

Lower process priority
   AS.VMIG.VARS.PRIORITY_THROTTLE
   Allows the execution of volume migration with a lower process priority.
   Default: off

Enable CRC32 checksum verification
   AS.VMIG.VARS.VMIG_VERIFY_CRC32
   Allows CRC32 testing if checksum verification is specified for a migration job.
   Default: on

Enable client-generated hash value verification
   AS.VMIG.VARS.VMIG_VERIFY_CL_SIG
   Allows client-generated hash value testing if checksum verification is specified
   for a migration job.
   Default: on

Enable timestamp hash value verification
   AS.VMIG.VARS.VMIG_VERIFY_SIG
   Allows timestamp hash value testing if checksum verification is specified for a
   migration job.
   Default: on

Enable ArchiSig timestamp SHA-1 hash value verification
   AS.VMIG.VARS.VMIG_VERIFY_DIG2
   Allows ArchiSig timestamp SHA-1 hash value testing if checksum verification is
   specified for a migration job.
   Default: on

Enable ArchiSig timestamp RipeMD-160 hash value verification
   AS.VMIG.VARS.VMIG_VERIFY_DIG4
   Allows ArchiSig timestamp RipeMD-160 hash value testing if checksum
   verification is specified for a migration job.
   Default: on

Enable ArchiSig timestamp SHA256 hash value verification
   AS.VMIG.VARS.VMIG_VERIFY_DIG5
   Allows ArchiSig timestamp SHA256 hash value testing if checksum verification
   is specified for a migration job.
   Default: on

Enable ArchiSig timestamp SHA512 hash value verification
   AS.VMIG.VARS.VMIG_VERIFY_DIG6
   Allows ArchiSig timestamp SHA512 hash value testing if checksum verification
   is specified for a migration job.
   Default: on

19.2 Setting Logging Parameters of Volume Migration

Proceed as follows:
1. Select Configuration > Archive Server in the Runtime and Core Services object
   in the console tree.
2. Specify the logging parameters for the volume migration, all under
   AS.VMIG.LOGGING.*:
   Entry, Info, Debug, User-Error, Result, Database, Warning, RPC Relative times,
   Use Eventlog, Maxlogsize

Chapter 20

Preparing the Migration

20.1 Preparing for Local Migration
Proceed as follows:
1. If the target pool has a jukebox with optical media, ensure that there are enough
   empty media in it.
2. Start the Archive Administration, select the dedicated logical archive and create
   a new pool for the migration. See Creating and Modifying Pools on page 74.
3. Make sure that the media to be migrated are imported.
   Note: Components not listed in the ds_comp DS table are ignored. To
   ensure that all components of one medium are listed in the ds_comp DS
   table, Open Text recommends that you call volck first.
4. Create and schedule a job in the Archive Administration for the
   Migrate_Volumes command. See Configuring Jobs and Checking Job Protocol
   on page 83.
20.2 Preparing for Remote Migration

In addition to Preparing for Local Migration on page 235, the following steps are
necessary to prepare for migration from a remote archive server:
Preconditions

The hostname of the old server is supposed to be oldarchive. The volumes to
be migrated are located on oldarchive. The volumes of the oldarchive are
listed in Devices in the Infrastructure object of the console tree. This server is
also called remote server.

The hostname of the new archive server (destination of migration) is supposed to
be newarchive. The target devices for remote migration are located on
newarchive. This server is also called local server.

The newarchive is not a known server of oldarchive.

Proceed as follows:
1. Normally, newarchive cannot access the volumes of oldarchive. Thus, you
   have to make sure that the local server (newarchive) is configured in the
   STORM's hosts list on the remote server (oldarchive). This will allow access
   for newarchive.
   Modify the configuration file: <OT config AS>/storm/server.cfg
   Add newarchive to the hosts { } section.
2. Restart the jbd on oldarchive after you made changes here.
   > spawncmd stop jbd
   > spawncmd start jbd
3. For Oracle only: On the local server, extend the $TNS_ADMIN/tnsnames.ora file
   to contain a section for the remote computer (see the tnsnames.ora sketch after
   this procedure).
4. The actual read access to the media is done via NFSSERVERs. To add access to
   oldarchive media, create this configuration in Runtime and Core Services >
   Configuration > Archive Server >
   AS.DS.STORM.NFSSERVER.EXTRA_NFSSERVER.NFSSERVER3 and
   .NFSSERVER4 (on the local server newarchive). Add an entry for each
   NFSSERVER on the remote computer (at least for those that you intend to read
   from). This will create access to the media on oldarchive.
   Example 20-1: NFSSERVER mapping on UNIX platforms
   On the remote computer (oldarchive), there are two NFSSERVER entries:
   NFSSERVER1 = WORM,localhost,4027,/views_hs
   NFSSERVER2 = CDROM,localhost,4027,/views_hs
   On the local computer, create the following entries:
   NFSSERVER3 = WORM2,oldarchive,4027,/views_hs
   NFSSERVER4 = CDROM2,oldarchive,4027,/views_hs
   On Windows platforms, the port number is 0 instead of 4027.
5. Restart dsrc, dswc and dsaux on newarchive.
   > spawncmd restart dsrc
   > spawncmd restart dswc
   > spawncmd restart dsaux
   Note: On Archive Servers older than version 9.6.1, use
   > spawncmd stop <process> followed by
   > spawncmd start <process> instead of > spawncmd restart <process>.
6. For the newarchive, select Configuration > Archive Server in the Runtime and
   Core Services object in the console tree.
7. Add the property AS.VMIG.VOLMIG_NFSMAP.NFSMAP_LIST.row1 to
   .rowN. For each remote NFSSERVER to read from, add an entry. The syntax is:
   <remote server>:<remote NFSSERVER>:local:<local NFSSERVER alias>
   Example 20-2: VMIG NFSSERVER mapping after NFSSERVERs WORM2
   and CDROM2 have been created
   oldarchive:WORM:local:WORM2
   oldarchive:CDROM:local:CDROM2
   The entry local is fixed syntax; it is not the name of the local server!
8. Restart the Migration Server on newarchive:
   > spawncmd restart migration
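
As referenced in step 3, a minimal tnsnames.ora sketch for the remote computer;
the net service name OLDARCHIVE, the port 1521 and the service name are
assumptions and must match the actual Oracle configuration of oldarchive:

OLDARCHIVE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oldarchive)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = <remote service name>))
  )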

20.3 Preparing for Local Fast Migration of ISO Images

Proceed as follows:
1. If the target pool has a jukebox with optical media, make sure that there are
   enough empty media in it.
2. Create and schedule a job in the Archive Administration for the
   Migrate_Volumes command. See Configuring Jobs and Checking Job Protocol
   on page 83.

20.4 Preparing for Remote Fast Migration of ISO Images

In addition to Preparing for Local Fast Migration of ISO Images on page 237, the
following steps are necessary to prepare for migration from a remote archive server:
Proceed as follows:
1. For Oracle only: On the local server, extend $TNS_ADMIN/tnsnames.ora to
   contain a section for the remote computer.
2. On the remote server (old archive), modify the DS configuration (<OT config
   AS>/DS.Setup).
   If the version is older than 9.7.0, you have to change the registry entry on
   Windows: HKEY_LOCAL_MACHINE\SOFTWARE\IXOS\IXOS_ARCHIVE\DS.
   Add the variable
   BACKUPSERVER1 = BKCD,<newarchive>,0
   <newarchive> is the hostname of the target archive server. Do not use blanks and
   do not type the angle brackets in the value!
3. Restart the Backup Server:
   > spawncmd restart bksrvr
   Note: On Archive Servers older than version 9.6.1, use
   > spawncmd stop <process> followed by
   > spawncmd start <process> instead of > spawncmd restart <process>.

Chapter 21

Creating a Migration Job

If the source volumes are IXW media (WORMs, UDOs), make sure they are finalized
(see Finalizing Storage Volumes on page 185) or write locked.
Setting a volume to write locked
Proceed as follows:
1. Select Original Archives in the Archives object in the console tree.
2. Select the archive you want to migrate in the console tree.
3. Select the Pools tab in the top area of the result pane. The attached volumes are
   listed in the bottom area of the result pane.
4. Select the volume to be write locked and click Properties in the action pane.
5. Select write locked in the properties window and click OK.

21.1 Creating a Local Migration Job

Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
   listed in the top area of the result pane.
2. Select the VolMig Migrate Components On Volume utility.
3. Click Run in the action pane.
4. Enter appropriate settings in all fields (see Settings for local migration on
   page 239). Click Run.
A new migration job is added to the list of migration jobs.
The migration job is processed if:

the scheduler of the Administration Server calls the Migrate_Volumes job and

all previous jobs have been processed.

Settings for local migration

Source Volume
   Specify the source volume(s) name. The following characters can be used:

   *    Wildcard: 0 to n arbitrary characters;
        e.g. vol5* matches all volumes whose name begins with vol5, such as
        vol5a, vol5c78, vol52e4r.
   ?    Wildcard: exactly one arbitrary character;
        e.g. volx?x matches volxax to volxzx and volx0x to volx9x.
   The escape character is used to escape wildcards (*, ?) if they are used as
        real characters in volume names.
   []   Specifies a set of volume names:
        [ ] can be used only once,
        , can be used to separate numbers,
        - can be used to specify a range;
        e.g. [001,005-099].

Target archive
   Enter the target archive name.
Target pool
   Enter the target pool name.
Migrate only components that were archived: On date or after
   You can restrict the migration operation to components that were archived after
   or on a given date. Specify the date here. The specified day is included.
Migrate only components that were archived: Before date
   You can restrict the migration operation to components that were archived
   before a given date. Specify the date here. The specified day is excluded.
Set retention in days
   Enter the retention period in days. With this entry, you can change the retention
   period that was set during archiving. The new retention period is added to the
   archiving date of the document. The following settings are possible:
   >0 (days)
   0 (none)
   -1 (infinite)
   -6 (archive default)
   -8 (keep old value)
   -9 (event)
   Note: The retention date of migrated documents can only be kept or extended.
   The following table shows the allowed settings:

   Current retention setting of the document   Allowed retention setting for migration
   no retention                                any
   retention date                              extended retention date (>0) or infinite (-1)
   infinite, event                             no change

Verification mode
   Select the verification mode that should be applied for the volume migration.
   The following settings are possible:
   None
   Timestamp
   Checksum
   Binary Compare
   Timestamp or Checksum
   Timestamp or Binary Compare
   Checksum or Binary Compare
   Timestamp or Checksum or Binary Compare

   Notes:
   Many documents (including all BLOB documents) do not have a checksum
   or a timestamp. When migrating a volume that contains such documents or
   BLOBs, it is strongly recommended to select a mode that provides binary
   compare as a last alternative.
   If a migration job cannot be finished because the source volume contains
   documents that cannot be verified using the specified verification methods,
   it is possible to change the verification mode. See Modifying Attributes of
   a Migration Job on page 258 (-v parameter).

Additional arguments
   -e
      Export source volumes after successful migration.
   -k
      Keep the exported volume (export only the document entries, allow dsPurgeVol
      to destroy this medium).
   -i
      Migrate only the latest version, ignore older versions.
   -A <archive>
      Migrate components only from a certain archive.

21.2 Creating a Remote Migration Job

Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
   listed in the top area of the result pane.
2. Select the VolMig Migrate Remote Volumes utility.
3. Click Run in the action pane.
4. Enter appropriate settings in all fields (see Settings for remote migration on
   page 242). Click Run.
A new migration job is added to the list of migration jobs.
The migration job is processed if:

the scheduler of the Administration Server calls the Migrate_Volumes job and

all previous jobs have been processed.

Settings for remote migration

Server name (Remote server)
   Enter the remote server name.
Database name (Remote server)
   Enter the remote database name.
Database provider (Remote server)
   Select the remote DBMS provider. This must be the same as the local DBMS
   provider.
Database user (Remote server)
   Enter the database user name.
Database password (Remote server)
   Enter the database user password.
Source Volume
   Specify the source volume(s) name. The following characters can be used:

   *    Wildcard: 0 to n arbitrary characters;
        e.g. vol5* matches all volumes whose name begins with vol5, such as
        vol5a, vol5c78, vol52e4r.
   ?    Wildcard: exactly one arbitrary character;
        e.g. volx?x matches volxax to volxzx and volx0x to volx9x.
   The escape character is used to escape wildcards (*, ?) if they are used as
        real characters in volume names.
   []   Specifies a set of volume names:
        [ ] can be used only once,
        , can be used to separate numbers,
        - can be used to specify a range;
        e.g. [001,005-099].

Target archive (Local server)
   Enter the target archive name.
Target pool (Local server)
   Enter the target pool name.
Migrate only components that were archived: On date or after
   You can restrict the migration operation to components that were archived after
   or on a given date. Specify the date. The specified day is included.
Migrate only components that were archived: Before date
   You can restrict the migration operation to components that were archived
   before a given date. Specify the date. The specified day is excluded.
Set retention in days
   Enter the retention period in days. With this entry, you can change the retention
   period that was set during archiving. The new retention period is added to the
   archiving date of the document. The following settings are possible:
   >0 (days)
   0 (none)
   -1 (infinite)
   -6 (archive default)
   -8 (keep old value)
   -9 (event)
   Note: The retention date of migrated documents can only be kept or extended.
   The following table shows the allowed settings:

   Current retention setting of the document   Allowed retention setting for migration
   no retention                                any
   retention date                              extended retention date (>0) or infinite (-1)
   infinite, event                             no change

Verification mode
   Select the verification mode that should be applied for the volume migration.
   The following settings are possible:
   None
   Timestamp
   Checksum
   Binary Compare
   Timestamp or Checksum
   Timestamp or Binary Compare
   Checksum or Binary Compare
   Timestamp or Checksum or Binary Compare

   Notes:
   Many documents (including all BLOB documents) do not have a checksum
   or a timestamp. When migrating a volume that contains such documents or
   BLOBs, it is strongly recommended to select a mode that provides binary
   compare as a last alternative.
   If a migration job cannot be finished because the source volume contains
   documents that cannot be verified using the specified verification methods,
   it is possible to change the verification mode. See Modifying Attributes of
   a Migration Job on page 258 (-v parameter).

Additional arguments
   -i
      Migrates only the latest version, ignores older versions.
   -A <archive>
      Migrates components only from a certain archive.

21.3 Creating a Local Fast Migration Job for ISO Volumes

Proceed as follows:
1. Select Utilities in the System object in the console tree. All available utilities are
   listed in the top area of the result pane.
2. Select the VolMig Fast Migration of ISO Volume utility.
3. Click Run in the action pane.
4. Enter appropriate settings in all fields. Click Run.

Settings for local fast migration

Source Volume
   Specify the source volume(s) name. The following characters can be used:

   *    Wildcard: 0 to n arbitrary characters;
        e.g. vol5* matches all volumes whose name begins with vol5, such as
        vol5a, vol5c78, vol52e4r.
   ?    Wildcard: exactly one arbitrary character;
        e.g. volx?x matches volxax to volxzx and volx0x to volx9x.
   The escape character is used to escape wildcards (*, ?) if they are used as
        real characters in volume names.
   []   Specifies a set of volume names:
        [ ] can be used only once,
        , can be used to separate numbers,
        - can be used to specify a range;
        e.g. [001,005-099].

Use target jukebox from archive
   Enter the target archive name.
Use target jukebox from pool
   Enter the target pool name.

A new migration job is added to the list of migration jobs.
The migration job is processed if:

the scheduler of the Administration Server calls the Migrate_Volumes job and

all previous jobs have been processed.

The archive/pool specification is only necessary to determine the target jukebox
where the copy of the ISO image is to be written. The logical archive of the
contained documents is not changed. Verification of the entire ISO image is
performed automatically against the built-in ISO checksum.

21.4 Creating a Remote Fast Migration Job for ISO


Volumes
Proceed as follows:
1.

Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.

2.

Select the VolMig Fast Migration of remote ISO Volume utility.

3.

Click Run in the action pane.

AR090701-ACN-EN-6

Administration Guide

245

Chapter 21 Creating a Migration Job

4.

Enter appropriate settings to all fields (see Settings for remote fast migration
on page 246). Click Run.

A new migration job is added to the list of migration jobs.


The migration job is processed if:

the scheduler of the Administration Server calls the Migrate_Volumes job and

all previous jobs have been processed.

Settings for remote fast migration


Server name (Remote server)
Enter the remote server name.
Database name (Remote server)
Enter the remote database name.
Database provider (Remote server)
Select the remote DBMS provider. This must be the same as the local DBMS
provider.
Database user (Remote server)
Enter the database user name.
Database password (Remote server)
Enter the database user password.
Source volumes (Remote server)
Specify the source volume(s) name. The following characters are provided
therefore:
Character

Description

Wildcard: 0 to n arbitrary characters


e.g. vol5*, matches all volumes that name begins with vol5, e.g. vol5a,
vol5c78, vol52e4r

Wildcard: exactly one arbitrary character


e.g. volx?x, matches volxax to volxzx and volx0x to volx9x

Is used to escape wildcards (*, ?), if they are used as real characters in
volume names.

[]

Specifies a set of volume names:


[ ] can be used only once
, can be used to separate numbers
- can be used to specify a range
e.g. [001,005-099]

Target archive (Local server)


Enter the target archive name.
Target pool (Local server)
Enter the target pool name.

246

Open Text Archive and Storage Services

AR090701-ACN-EN-6

21.4 Creating a Remote Fast Migration Job for ISO Volumes

Verification mode
Select the verification mode which should be applied for volume migration. The
following settings are possible:

None

Timestamp

Checksum

Binary Compare

Timestamp or Checksum

Timestamp or Binary Compare

Checksum or Binary Compare

Timestamp or Checksum or Binary Compare


Notes:

Many documents (including all BLOB documents) do not have a checksum or a timestamp. When migrating a volume that contains such documents or BLOBs, it is strongly recommended to select a mode that provides binary compare as a last alternative.

If a migration job cannot be finished because the source volume contains


documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See Modifying Attributes of
a Migration Job on page 258 (-v parameter).

Additional arguments
-d (dumb mode)
Imports document/component entries into the local database using dsTools instead of reading them directly from the remote database. The dumb mode disables automatic verification. Archive and retention settings cannot be changed.
-A <archive>
Migrates components only from a certain archive. Does not work with dumb mode (-d).
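For example, to restrict the migration to components of a single archive, the Additional arguments field could contain (using the archive name H4 from the vmclient examples later in this guide):
-A H4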


Chapter 22 Monitoring the Migration Progress


You can display an overview of migration jobs to check the progress of migration.
Each migration job has a unique ID, optional flags and a status. This information is
also needed to manipulate migration jobs. See Manipulating Migration Jobs on
page 253.

22.1 Starting Monitoring


Proceed as follows:
1.

Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.

2.

Select the VolMig Status utility.

3.

Click Run in the action pane.

4.

Specify which migration jobs will be included in the overview. Possible settings are:

New

In progress

Finished

Cancelled

Error

5.

Click Run.
An overview of migration jobs with the selected job status opens.


22.2 States of Migration Jobs


Each migration job is handled by volume migration (VolMig) and passes through a
number of processing steps. Many of these processing steps assign a new status to
the job. Depending on the migration strategy (job type), a different set of states and
a different order of processing steps can be observed.

New (enqueued)
VolMig has not yet started to process this migration job.

Impt (import remote DB entries)


VolMig has started replicating DB entries for archives, documents, components
and component types of volumes from a remote source.

Prep (prepare component list)


VolMig has started to query the components on the current medium to be
migrated.


Iso (create and write an ISO image file)


For fast migration jobs, entire ISO images are replicated at once. This state
indicates that VolMig is retrieving an ISO image file from a local or remote
volume or is writing that image file to the target storage.

Copy (create write jobs)


VolMig is now instructing the DS to copy the components from the source
medium to the migration pool. Entries in the ds_activity table are created.

Wait (wait for write jobs to become finished)


When the scheduler calls the Migrate_Volumes job, VolMig checks for any
components that have been copied by dsCD, dsWorm or dsHdsk to a volume in the
target pool. When it finds some, it can optionally verify the integrity. This will be
done each time until all components from a volume are found (and verified) in
the destination pool. Until then, the migration job displays the Wait status.

Fin (finished successfully)


The migration job is finished. This means that all selected components from the
source volume have been migrated successfully.

Canc (job cancelled)


The migration job has been cancelled by the administrator (see Canceling a
Migration Job on page 254).

Paus (job paused)


This migration job has been paused and will not be processed until the
administrator continues the job (see Pausing a Migration Job on page 253).

Err (error processing job)


An error occurred while processing the migration job. To resume processing, fix
the error (check the log files) and continue the migration job afterwards (see
Continuing a Migration Job on page 253).


Chapter 23 Manipulating Migration Jobs


Volume migration provides utilities for manipulating running migration jobs using
Administration Client.

23.1 Pausing a Migration Job


You can pause a migration job and continue it later. See Continuing a Migration
Job on page 253. This can be useful if you want to give other jobs priority.
Proceed as follows:
1.

Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.

2.

Determine the ID of the migration job you want to pause via the VolMig Status
utility, see Monitoring the Migration Progress on page 249.

3.

Select the VolMig Pause Migration Job utility.

4.

Click Run in the action pane.

5.

Enter the ID of the migration job that you want to pause in the Migration Job
ID(s) field.

6.

Click Run.
The migration job is set to the Paus status.

23.2 Continuing a Migration Job


You can continue a paused job (Paus status) or a job that produced an error (Err
status) to resume migration.
Proceed as follows:
1.

Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.

2.

Determine the ID of the migration job you want to continue via the VolMig
Status utility, see Monitoring the Migration Progress on page 249.

3.

Select the VolMig Continue Migration Job utility.

4.

Click Run in the action pane.


5.

Enter the ID of the migration job that you want to continue in the Migration Job
ID(s) field.

6.

Click Run.
A protocol window shows the progress and the result of the migration. The
migration job is set back to the status before it has been paused or the error
occurred.

23.3 Canceling a Migration Job


If you cancel a migration job, all copy jobs of this migration job are deleted as well.
A canceled migration job can be renewed to start the job from the beginning. See
Renewing a Migration Job on page 254.
Proceed as follows:
1.

Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.

2.

Determine the ID of the migration job you want to cancel via the VolMig Status
utility. See Monitoring the Migration Progress on page 249.

3.

Select the VolMig Cancel Migration job utility.

4.

Click Run in the action pane.

5.

Enter the ID of the migration job that you want to cancel in the Migration Job
ID(s) field.

6.

Click Run.
A protocol window shows the progress and the result. The migration job is set
to the Canc status. All copy jobs for this migration job are deleted.

23.4 Renewing a Migration Job


You can renew any migration job to start it from the beginning (unless another
active job processes the same source volume).
Proceed as follows:


1.

Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.

2.

Determine the ID of the migration job you want to renew via the VolMig Status
utility. See Monitoring the Migration Progress on page 249.

3.

Select the VolMig Renew Migration job utility.

4.

Click Run in the action pane.


5.

Enter the ID of the migration job that you want to renew in the Migration Job
ID(s) field.

6.

Click Run.
A protocol window shows the progress and the result of the migration. The
migration job is set to the New status and is started from the beginning.


Chapter 24 Volume Migration Utilities


The volume migration suite provides additional utilities to support you in performing
your migration. These utilities must be executed in a command shell. The following
sections explain the most important vmclient commands with their corresponding
attributes.
Execute vmclient commands
Proceed as follows:
1.

Open a command shell.

2.

Enter > vmclient <command> <attribute> [<attribute>...]

Getting help on vmclient commands


Proceed as follows:
1.

Open a command shell.

2.

Enter > vmclient -h to get help.

24.1 Deleting a Migration Job


This command deletes a migration job and removes any pending write jobs.
> vmclient deleteJob <jobID> [<jobID> ...]

jobID
The ID of the migration job to be deleted.
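For example, to delete a job whose ID was determined with the VolMig Status utility (the job ID 4711 is purely illustrative):
> vmclient deleteJob 4711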

24.2 Finishing a Migration Job Manually


If a migration job cannot be finished regularly, but you know for sure that all
required documents have been migrated, you can set the job to the Fin status
(finished successfully) manually.
> vmclient finishJob <jobID> [<jobID> ...]

jobID
The ID of the migration job to be finished.
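For example, to finish the illustrative job 4711 manually:
> vmclient finishJob 4711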


24.3 Modifying Attributes of a Migration Job


This command changes the attributes of a migration job. Depending on the current
status of the specified migration job, only some attributes can be modified, others
are unchangeable.
> vmclient updateJobFlags <jobID> <attribute> [<attribute>...]

jobID
The ID of the migration job to be modified.
attribute
Note: Attributes with one hyphen (-) will be added/updated.
Attributes with two hyphens (--) will be removed.
-e (export)
Export source volumes after successful migration.
-k (keep)
Do not set the exported flag for the volume (so dsPurgeVol can destroy it).
-i (ignore old versions)
Migrate only the latest version of each component, ignore older versions.
-A <archive>
Migrate components only from a certain archive.
-r <value> (retention)
Set a new value for the retention of the migrated documents.
-v <value> (verification level)
Define how components should be verified by VolMig.
Example 24-1: Modifying attributes of a migration job
To add the export flag, execute
> vmclient updateJobFlags <jobID> -e

To remove the export flag, execute


> vmclient updateJobFlags <jobID> --e

To change the archive flag, execute


> vmclient updateJobFlags <jobID> -A H4

To remove the archive flag, execute


> vmclient updateJobFlags <jobID> --A

24.4 Changing the Target Pool of Write Jobs


While the migration is running, documents may still be archived into the source
pool. After the migration has been finished, the target pool may be intended to
become the new default pool. To have the documents that are archived during the
migration written into the target pool rather than the source pool, you can use this
command to update the Write jobs.
> vmclient updateDsJob <old poolname> <new poolname> -d|-v

old poolname
Is constructed by concatenating the source archive name, an underscore
character and the source pool name, e.g. H4_worm.
new poolname
Is constructed by concatenating the target archive name, an underscore character
and the target pool name, e.g. H4_iso.
-d
Update pools in ds_job only.
-v
Update pools in both, ds_job and vmig_jobs.
Note: This works only for local migration scenarios. Write jobs in a remote
migration environment remain on the remote server and cannot be moved to
the local machine.
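Using the pool name pattern described above (H4_worm as the old pool, H4_iso as the new pool), an example call that updates both ds_job and vmig_jobs might be:
> vmclient updateDsJob H4_worm H4_iso -v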

24.5 Determining Unmigrated Components


As long as a migration job is in Wait status, there are components that have not yet
been successfully migrated to the target pool. This command finds those components.
It lists document IDs and component IDs for deeper investigation of those items.
> vmclient listMissingComps <jobID> <max results>

jobID
The ID of the migration job which components should be listed.
max results
How many components should be listed at most.
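For example, to list at most 100 unmigrated components of the illustrative job 4711:
> vmclient listMissingComps 4711 100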

24.6 Switching Component Types of Two Pools


After the migration of all media of a pool has been successfully finished, it may be
useful to make the migration target pool the new default pool. It is possible to
switch the component types (known as application types in former archive server
versions) as follows:
> vmclient switchAppTypes <archive> <pool 1> <pool 2>

archive
The archive name.
pool 1
Name of the first pool.


pool 2
Name of the second pool.
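For example, using the archive and pool names from the H4_worm/H4_iso examples above, switching the component types of the two pools might look like this:
> vmclient switchAppTypes H4 worm iso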

24.7 Adjusting the Sequence Number for New Volumes
If volumes are detached from one pool and attached to another pool, be careful when
writing new volumes for that pool. The counter for the volume names is not aware of
the sequence numbers of the attached volumes. With this command, you can set the
counter to a new value.
> vmclient setSequenceNumber <archive> <pool> <sequence number>
[<sequence letter>]

archive
The archive name.
pool
The pool name.
sequence number
New number of the sequence.
sequence letter
New letter (for ISO pools only).
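For example, to continue volume numbering at 120 with sequence letter B (all values purely illustrative) for an ISO pool iso in archive H4:
> vmclient setSequenceNumber H4 iso 120 B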

24.8 Statistics About Components on Certain Volumes


This command displays short statistics about the components found on one volume
and about other volumes where copies of the same components reside.
> vmclient investigate <volume name> [<volume name>]

volume name
Name of the primary volume.
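For example, using a volume name from the wildcard examples earlier in this part:
> vmclient investigate vol5a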

24.9 Collecting Diagnostic Information


This command collects information about the server configuration and prints it to
stdout or to the specified output file.
> vmclient diag <output file>

output file
File to write the output to instead of stdout.
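For example, to write the diagnostic information to a file (the file name is arbitrary):
> vmclient diag vm_diag.txt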


Part 5 Monitoring

Chapter 25 Everyday Monitoring of the Archive System


To monitor the archiving system, you can use Administration Client, Monitor Web
Client and Document Pipeline Info. Administration Client and Document Pipeline
Info must be installed on the administrator's computer and can connect to different
archive servers and Document Pipeline hosts via the network. Monitor Web Client is
installed on the archive server and runs in a browser, accessible via a URL.
The utilities provide the following functions:
Administration Client

checking the success of jobs, in particular of the Write and Backup jobs

checking for notifications according to your configuration (emails, alerts,


execution of files, see Monitoring with Notifications on page 265)

checking free disk space

Monitor Web Client

monitoring Archive and Storage Services

checking the space on file systems

displaying warnings and error messages of the archive components

checking the Storage Manager, for example, the filling level of storage devices
or empty media in jukeboxes

checking the number of documents in error queues of Document Pipelines


For detailed information about Monitor Web Client, see the guide OpenText
Document Pipelines - Overview and Import Interfaces (AR-CDP).

Document Pipeline Info

checking the correct document flow in Archive and Storage Services

checking the correct document flow in Document Pipelines

locating problems
For detailed information about the Document Pipeline Info, see the guide
OpenText Document Pipelines - Overview and Import Interfaces (AR-CDP).


Chapter 26 Monitoring with Notifications


By setting up a notification service, you can reduce the amount of work associated
with monitoring the archive system. The Notification Server sends notifications
when certain predefined server events occur. You can define both the events and the
type and recipient of the notification. You can also restrict the time slot in which
particular notifications are sent. For example, you can define notifications sent to the
workstation during working hours and by email to the on-call service outside
working hours. Thus, you ensure that responsible persons are addressed directly
when a particular event occurs.
Setting up monitoring with notifications involves the following steps:
1.

Define the events filter to which the system should respond, see Creating and
Modifying Event Filters on page 265.

2.

Create the type and settings of the notifications and assign them specific event
filters, see Creating and Modifying Notifications on page 269.

26.1 Creating and Modifying Event Filters


Defining an event filter means specifying the conditions that have to be met before
a notification is triggered. If a system event (e.g. an error or warning) occurs, the
system checks whether it matches one of the defined event conditions. If it does, the
assigned notification is sent. It contains the complete message, the origin and the
time.
Some important event filters are already predefined. You can change them and
define new event filters.
Proceed as follows:
1.

Select Events and Notifications in the System object in the console tree.

2.

Select the Event Filters tab. All available event filters are listed in the top area of
the result pane.

3.

Click New Event Filter in the action pane. The window to create a new event
filter opens.

4.

Enter the conditions for the new event filter. See Conditions for Events Filters
on page 266.

5.

Click Finish.


Modifying event filters
To modify an event filter, select it in the top area of the result pane and click
Properties in the action pane. Proceed in the same way as when creating a new
event filter. The name of the event filter cannot be changed.

Deleting event filters
To delete an event filter, select it in the top area of the result pane and click Delete in
the action pane.
See also:

Conditions for Events Filters on page 266

Available Event Filters on page 268

Creating and Modifying Notifications on page 269

Checking Alerts on page 273

26.1.1 Conditions for Events Filters


In the event filter properties window, you can define or modify the settings of an
event filter.
Name
A self-explaining name.
Message class
Classifies and characterizes events.

Any (all classes are recorded)

Administration: Events that affect administration

Database: Database event

Server: Server event

Component
Specifies the software component that issues the message. If nothing is specified
here, all components are recorded (Any). The most important components are:

Administration Server: Mainly monitors the execution of the jobs.

Monitor Server: Reports status changes of archive components, i.e. whenever a status display changes in Monitor Web Client.

Document Service: Monitors the read component (RC) which provides archived documents and the write component (WC) which archives documents.

Storage Manager: Reports errors that occur when writing to storage media.

Timestamp Server: Reports errors that occur when creating or administering timestamps.

High Availability: Reports errors associated with High Availability software and the cluster software it uses.

Volume Migration: Reports errors that occur during volume migration.

BASE DocTools: Reports errors associated with BASE DocTools.

R/3 DocTools: Reports errors associated with R/3 DocTools (SAP).

Filter Service: Not used.

Severity
Specifies the importance.

Any (all severities are recorded)

Fatal Error

Error

Warning

Important

Information

Message codes
Specifies which message codes should be considered by the event filter. The
codes are used to filter out concrete events and are usually defined in a message
catalog, which belongs to a component. For each component, the catalog is
installed in
<OT config>\msgcat\<COMPNAME>_<lang>.cat
Example: ADMS_us.cat is the English message catalog for the Administration

Server component.

It is possible to enter the code number directly, but it is recommended and more
convenient to use the select button. This opens a window with the currently available
message codes and associated descriptions.
Select message codes
1.

Select Any if no filtering should be applied.


Select Specific or Range to configure designated message codes.

2.

Click Select. A window with the currently available message codes opens. The
available message codes depend on the selected combination of message
class, component and severity.

3.

Select the designated message code and click OK to resume. If you define a
range, select the first and the last message code (from to).

See also:

Creating and Modifying Event Filters on page 265

Available Event Filters on page 268

Creating and Modifying Notifications on page 269


Checking Alerts on page 273

26.1.2 Available Event Filters


Preconfigured events

A number of preconfigured events are delivered with the installation of Archive and
Storage Services. To use them, configure the notifications and assign the appropriate
notifications to each event. You can use these events:
Any Fatal Error
Includes all events of the Fatal Error type of all currently recorded event
classes and components.
Any Message from Admin Server
Includes all events on the Administration Server.
Any Message from Document Service
Includes all events occurring in the Document Service.
Any Message from Monitor Server
Includes all status changes in Monitor Web Client.
Any Message from Storage Manager
Includes all status changes in the Storage Manager.
Any Non-Fatal Error
Includes all events of the type Error of all currently recorded event classes and
components.
ISO volume has been written
This event occurs when an ISO volume has been written successfully.
IXW volume has been initialized
This event occurs when automatic initialization of an IXW volume has finished
successfully.
Jukebox error: Jukebox detached
This event occurs when the STORM cannot access the jukebox.
More blank media required in jukebox
This event occurs when new optical media have to be inserted in a jukebox.

User-defined events

In addition, you can define other events to get notifications if they occur. Useful
events are:
Job Error
This event records errors that are listed in the job protocol and notifies you with
a particular message. Use this configuration:
Severity: Error
Message class: Server or <any>
Component: Administration Server
Message code: 1


Error from Monitor Server


This event occurs when an archive component indicates an error, for example,
when no more free storage space is available (red icon in Monitor Web Client).
Use this configuration:
Severity: Error
Message class: Server or <any>
Component: Monitor Server
Message code: -

Warning from Monitor Server


This event occurs when the monitor server issues a warning, for example when
the free storage space reaches a low level or when an attempt is made to access
an unavailable volume (yellow icon in Monitor Web Client). Use this
configuration:
Severity: Warning
Message class: Server or <any>
Component: Monitor Server
Message code: -

See also:

Conditions for Events Filters on page 266

Creating and Modifying Notifications on page 269

Checking Alerts on page 273

26.2 Creating and Modifying Notifications


After defining the event filter, you can create a notification and assign one or more
event filters. You can select different types of notification:

Alert, passive notification type, alerts must be checked by the administrator. See
Checking Alerts on page 273.

Mail Message, active notification type, when the assigned event occurs, a
message is sent.

TCL Script, active notification type, when the assigned event occurs, a tcl script
is executed.

Message File, passive notification type, notifications are written in a specific file.

SNMP Trap, active notification type, notifications are sent to an external


monitoring system via the SNMP protocol.

Proceed as follows:
1.

Select Events and Notifications in the System object in the console tree.

2.

Select the Notifications tab. All available notifications are listed in the top area
of the result pane.


3.

Click New Notification in the action pane. The wizard to create a new
notification opens.

4.

Enter the name and the type of the notification and click Next. Enter the
additional settings for the new notification event. See Notification Settings on
page 270.

5.

Click OK. The new notification is created.

6.

Select the new notification in the top area of the result pane.

7.

Click Add Event Filter in the action pane. A window with available event filters
opens.

8.

Select the event filters which should be assigned to the notification and click
OK.

Testing notifications
There are two ways to test notifications:

Select the new notification in the top area of the result pane and click Test in the
action pane.

Click the Test button in the notification window while creating or modifying
notifications.

Modifying notification settings
To modify the notification settings, select the notification in the top area of the result
pane and click Edit in the action pane. Proceed in the same way as when creating a
new notification. The name of the notification cannot be changed.

Deleting notifications
To delete a notification, select the notification in the top area of the result pane and
click Delete in the action pane.

Adding event filters
To add event filters, select the notification in the top area of the result pane. Click
Add Event Filter in the action pane. Proceed in the same way as when creating a
new notification.

Removing an event filter
To remove an event filter, select it in the bottom area of the result pane and click
Remove in the action pane. The notification events are not lost; only the assignment
is deleted.
See also:

Notification Settings on page 270

Using Variables in Notifications on page 272

Checking Alerts on page 273

26.2.1 Notification Settings


In the first window of the Notification wizard, you define the type of the
notification. Depending on the type, additional settings are needed.


Name
The name should be unique and meaningful.
Notification Type
Select the type of notification and enter the specific settings. The following
notification types and settings are possible:
Alert
Alerts are notifications, which can be checked by using Administration
Client. They are displayed in Alerts in the System object in the console tree
(see Checking Alerts on page 273).
Mail Message
E-mails can be sent to respond immediately to an event or during standby time.
If you want to send the notification via SMS, consider that the length of SMS text
(including Subject and Additional text) is limited by most providers. Enter the
following additional settings:

Sender address: E-mail address of the sender. It appears in the from field
in the inbox of the recipient. The entry is mandatory.

Mail host: Name of the target mail server. The mail server is connected
via SMTP. The entry is mandatory.

Recipient address: E-mail address of the recipient. If you want to specify


more than one recipient, separate them by a semicolon. The entry is
mandatory.

Subject of the mail, $-variables can be used (see Using Variables in


Notifications on page 272). If not specified, the subject is $SEVERITY
message from $HOSTNAME/$USERNAME($TIME).

Include Standard Text: If selected, you get an introduction in the


notification: The preceding notification message was generated by ....
This introduction is followed by the message text. If you send SMS
messages, clear this check box.

Max. Length of mail message text: Use this setting to restrict the number
of characters in the email body. If you send notifications as SMS, you can
enter a value according to the limitation of your provider.

TCL Script
Enter the name and the path of the tcl script. It will be executed if the event
occurs.
Message File
The notification is written to a file. Enter name and path of the target file or
click Browse to open the file browser. Select the designated message file and
click OK to confirm.
Enter also the maximum size of the message file in bytes.


SNMP Trap
Provides an interface to an external monitoring system that supports the
SNMP protocol. Enter the information on the target system.
Active Period
Weekdays and time of the day at which the notification is to be sent.
Text
Free text field with the maximum length of 255 characters. $-variables can be
used (see Using Variables in Notifications on page 272).
See also:

Creating and Modifying Notifications on page 269

Using Variables in Notifications on page 272

Checking Alerts on page 273

26.2.2 Using Variables in Notifications


When configuring notifications, variables can be used as placeholders. The variables
are replaced by the current value when the notification is sent. For example, the
$HOST variable is replaced by the name of the host at which the event was
triggered. With variables, you can keep the Subject line and the body text of the
notification generic, for example, $SEVERITY message from $HOST.
The following variables can be used:
$CLASS
Message class, characterizes the event.
$COMP
Component that has output the message.
$SEVERITY
Type of message, characterizes the importance.
$TIME
Date and time when the message was output from the component (system time
of the computer on which the component is installed).
$HOST
Name of the computer on which the reported event occurred. For server
processes, daemon is output.
$USER
Name of the user under which the processes run on the $HOST machine.
$MSGTEXT
Message text from the message catalog. Important messages are listed first. If
there is no catalog message, the default text provided by the component is used.
$MSGNO
Code number from the message catalog.


See also:

Notification Settings on page 270

Checking Alerts on page 273

26.3 Checking Alerts


Notifications of the alert type must be checked by using Administration Client.
Proceed as follows:


1.

Select Alerts in the System object in the console tree. All notifications of the alert
type are listed in the top area of the result pane.

2.

Select the alert to be checked in the top area of the result pane. Alert details are
displayed in the bottom area of the result pane. The yellow icon of the alert
entry turns to grey if read.

Marking messages as read
To mark all messages as read, click Mark All as Read in the action pane. The yellow
icons of the alert entries turn to grey.


Chapter 27 Using Monitor Web Client


Tasks

You use Monitor Web Client to monitor the availability of system resources and the
jobs of individual archive components. The most important functions are:

checking free storage space in the log directories,

checking free storage space in pools,

checking the jobs of the Document Service and access to unavailable volumes,

checking DocTool jobs and their correct operation,

checking the jobs of the Storage Manager.

Monitor Web Client is used solely to observe the global system and to identify
problem areas. The Monitor components gather information about the status of the
various archive system components at regular intervals.
The Monitor cannot be used to eliminate errors, modify the configuration or start
and stop processes. Viewer clients are not monitored.
Monitor Web Client can be started as a Web application from any host.
Warning and error messages
With Administration Client, you can configure warning and error messages that are
sent when the status of an archive server component changes. You can also use external
system management tools within the scope of special project solutions.

Security
HTTPS can be used to ensure data confidentiality and integrity. External access
should be restricted by means of a firewall.

27.1 First Steps and Overview


27.1.1 Starting Monitor Web Client
To start Monitor Web Client in your browser, enter the address
<prot>://<server>.[<domain>]:[<port>]/<subdir>/<cmd>
e.g. http://alpha.opentext.com:8080/w3monc/index.html.


Variable   Description                                        Example
prot       Protocol                                           http or https
server     Name of the administered archive server            alpha
domain     Domain at which the server is registered           .opentext.com
port       Port at which Monitor Server receives requests     8080 (http) or 8081 (https)
subdir     Subdirectory of Monitor Web Client start page      w3monc
cmd        Command                                            index.html

Calling this URL opens the Server start page.


You can specify a number of parameters with the URL in order to customize
Monitor Web Client to meet your requirements (see Customizing Monitor Web
Client on page 279).

27.1.2 Monitor Web Client Window

The Monitor Web Client window is subdivided as follows:


Title bar
The title bar contains the name of the monitored archive server and also specifies
the Web browser you are using.


Button bar
The button bar contains buttons to configure Monitor Web Client. All these
settings apply only to the current browser session. If you want to reuse your
settings, pass them as parameters when you start the program (see Customizing
Monitor Web Client on page 279).
Left column: Monitored servers
Here you find a list of the monitored archive servers. Click a name. The current
status of this archive server is displayed in the other two columns. If you click
the name again, the status is checked at Monitor Server and the display in
Monitor Web Client is updated if needed.
Otherwise, the status of the components is updated after the specified refresh
interval (see Setting the Refresh Interval on page 278). If it is not possible to
establish a connection to a Web server, a corresponding icon is displayed in front of
the server name.
Tip: If you want to compare the status of different servers, open Monitor
Web Client for each of them and use the task bar to switch between the
different instances.
Middle column: Components
In a hierarchical structure, you see the groups of components that run on the
interrogated host. Below each component group, you see the associated
components. Click a component to display its current status in the right column.
Click the corresponding icon to display the status of the component group on the right. For
information on the components and the possible messages, refer to Component
Status Display on page 280.
The icon in front of the component group name represents a summary of the
individual statuses of the components in the group. If you move the mouse
pointer to an icon in front of a component, abbreviated status information is
displayed in a tool tip even if the detailed information is not displayed in the
third column. In this way, you can compare the statuses of two components.
Right column: Detailed information and status
This column contains detailed status information on the selected components or
component groups. If the right column is too narrow to display the information,
move the mouse pointer to the icon to display the status information in a tooltip.
Status line
Provides information on the status of the initiated processes.
Status icons

The icons identify the system status at a glance. To configure the icons, see
Configuring the Icon Type on page 279. The possible statuses are:

Available without restriction.

Warning, storage space problems are imminent. You can continue working for
the present but the problem must be resolved soon.

Error, component not available.




The Error and Warning status is also displayed for the higher-level component
group and for the host, that is to say the problem is graphically escalated to a higher
level. In this way, you can identify problems even if the particular branch of the
hierarchy is closed.
Configuration file

The configuration of Monitor Web Client is saved in the *.monitor files that are
located in the directory <OT install AS>\config\monitor.

27.1.3 Setting the Refresh Interval


Proceed as follows:
1.

In the Monitor Web Client window, click Refresh Interval.

2.

Define the period (in seconds) between two requests to the host. Short periods
increase the network load.
Note: To refresh the display of the host status manually, click the name of the
host in the left column. In Internet Explorer, you can also refresh the
display with F5 or CTRL+R.

27.1.4 Adding and Removing Hosts


To add a host
Proceed as follows:
1.

In the Monitor Web Client window, click Add Host.

2.

Enter the name of the archive server in the form


<hostname>.<domainname>. By default, the port number is 8080 for HTTP or
8081 for HTTPS.

3.

Click OK. The selected archive server is entered in the list of hosts.

To remove a host
Proceed as follows:


1.

In the Monitor Web Client window, click Remove Hosts.

2.

Select one or more archive servers that you no longer want to monitor.

3.

Click OK. The selected archive server is removed from the host list.

27.1.5 Configuring the Icon Type


Proceed as follows:
1.

In the Monitor Web Client window, click Icon Type.

2.

Select the icon type. You can select between bulbs, LEDs, faces, signs, and traffic
lights.

3.

Click OK.

27.1.6 Customizing Monitor Web Client


Monitor Web Client settings can also be passed directly as parameters in the URL
when you call the program.
The syntax for passing parameters corresponds to standard HTTP syntax:
<prot>://<server>.[<domain>]:[<port>]/<subdir>/<cmd>?<parameter>&<parameter>&<parameter>

Thus the URL
http://alpha.opentext.com:8080/w3monc/index.html?iconType=Faces&refreshInterval=10&host=beta.opentext.com:8080&host=gamma.opentext.com:8080
starts Monitor Web Client for the archive server alpha with icon type Faces, refresh
interval 10 seconds and the additional hosts beta and gamma.

Save this URL as a bookmark so you can always start your personal configuration.
If you do not pass any parameters with the URL, Monitor Web Client starts with the
default settings: LEDs, refresh interval 120 seconds and no additional hosts.


27.2 Component Status Display


Detailed information on each component is displayed in the right column. The
Status of each component is displayed and further details concerning this status
can be viewed in Details. The status Can't call this server means that the
Monitor is unable to access the corresponding component and that no information is
available.

27.2.1 DP Space
Monitors the storage space for the Document Pipelines that are used for the
temporary storage of documents during the archiving process. A special directory
on the hard disk is reserved for the Document Pipelines. You can determine its
location in Administration Client (Core Services > Configuration > Document
Pipeline).
During archiving, the documents are temporarily copied to this directory and are
then deleted once they have been successfully saved. The directory must be large
enough to accommodate the largest documents, e.g. print lists generated by SAP.
The status can be Ok, Warning or Error.
In Details you can see the free storage space in MB, the total storage space in MB
and the proportion of free storage space in percent. The values refer to the hard disk
volume in which the DPDIR directory was installed. A warning or error message is
issued if insufficient free storage space is available. Possible causes are:
Error during the processing of documents in the Document Pipeline
Normally, the documents are processed rapidly and deleted immediately. If
problems occur, the documents may remain in the pipeline and storage space
may become scarce. Check the status of the DocTools (DP Tools group in the
Monitor) and the status of the Document Pipelines in Document Pipeline Info.
Document is larger than the available storage space
If no separate volume is reserved for the Document Pipeline, the storage space
may be occupied by other data and processes. In this case, the volume should be
cleaned up to create space for the pipeline. To avoid this problem, reconfigure
the Document Pipeline and locate it in a separate volume. The volume must be
larger than the largest document that is to be archived.

27.2.2 Storage Manager


Monitors the Storage Manager (STORM) which administers the jukeboxes and
media: the status of the jbd STORM process is displayed together with the fill level
of the inode files and an overview of the volumes in the connected jukebox(es).
Physical and virtual fill levels are shown in the same way.
jbd

Displays the status of the Storage Manager. The status is Active if the server is
running. A status of either Can't call server, Can't connect to server or Not
active indicates that the server is either not reachable or not running. Check the
jbd.log log file for errors. If necessary, solve the problem and start the Storage
Manager again.

inodes

Displays how full the inode files are. Either the status OK or Error is displayed.
In Details, you can see the filling level in percent as well as the number of
configured and used inodes. If an error is displayed, the storage space for the file
system information must be increased.
<jukebox_name>

Provides an overview of the volumes for each attached jukebox. The possible
status specifications are Ok, Warning or Error. Warning means that there are no
writeable volumes or no empty slots in the jukebox. Error is displayed if at least
one corrupt medium is found in a jukebox (display -bad- in Devices in Archive
Administration).
The following information is displayed in Details:
Empty

Number of empty slots

Bad

Number of faulty (unreadable) volumes

Blank

DVD: Number of empty slots


IXW: number of non-initialized volumes

Written

Number of written volumes

27.2.3 DocService (Document Service)


The Document Service is the Archive and Storage Services component that archives
documents and delivers them for display. The DocService component monitors the
read component rc, the write component wc, the administration server admsrv and the
backup server bksrvr, checks whether the document processes have been started, and
monitors the component unavail, which indicates whether a user has tried to access
unavailable volumes.
The status of rc, wc, admsrv and bksrvr is Active or Error. Error means that the
component cannot be executed and must be restarted.
The status of the unavail component is OK or Warning. Warning means that a
document has been requested from an unavailable volume. The number of
unavailable volumes is displayed in Details. To find out the names of these
volumes, select the Devices directory followed by the Unavailable command in
Archive Administration.
Note: Unavailable volumes can also be seen in Administration Client (see
Checking Unavailable Volumes on page 62).

27.2.4 DS Pools
The Monitor checks the free storage space which is available to the pools (and
therefore the logical archives). The pools and buffers are listed. The availability of
the components depends on two factors. Volumes must be assigned and there must
be sufficient free storage space in the individual volumes.

The Ok status specifies that volumes are present and sufficient storage space is
available.

The Error status together with the No volumes present message means that a
volume (WORM or hard disk) needs to be assigned to this buffer or pool.

The Error status with the No writable partitions message refers to WORM
volumes and means that the available volumes are full or write-protected.
Initialize and assign a new volume and/or remove the write-protection.

The Full status refers to disk buffers or hard disk pools and means that there is
no free storage space on the volume. In the case of a hard disk pool, create a new
volume and assign it to this pool.
In the case of a disk buffer, check whether the Purge_Buffer job has been
processed successfully and whether the parameters for this job are set correctly.

27.2.5 DS DP Tools, DS DP Queues, DS DP Error Queues


Monitors the special Document Pipeline of the Document Service, in particular the
availability of DocTools and the status of the queues and error queues. The results
are displayed in three component groups.
DS DP Tools

The availability of each DocTool in the DS DP is displayed. Under normal


circumstances, the DocTools are started by the spawner when the archive is
started and continue to run for the entire archive session. The status is
Registered if the DocTool has been started. Under Details, you can see
whether the DocTool is processing documents (active) or whether it is
unoccupied (lazy).
DS DP Queues

Monitors the Document Service DocTools queues and specifies the number of
documents in each of them. Normally, the documents are processed very quickly
and the queues are empty.
DS DP Error Queues

Monitors the Document Service DocTools error queues and specifies the number
of documents in each of them.

27.2.6 Log Diskspace


The ixos_log component checks the free storage space in the directory for the
Archive and Storage Services log files (<OT logging>).
The status is Ok, Warning or Error. In Details, you can see the free storage space in
MB, the total storage space in MB and the proportion of free storage space in
percent. The values refer to the hard disk volume in which the log directory was
installed.


A warning or error message is issued if insufficient free storage space is available.


Delete all log files that are no longer needed. To avoid problems, delete log files
regularly.

27.2.7 DP Tools, DP Queues, DP Error Queues


Monitors the Document Pipelines which are used to archive documents. In
particular, it monitors the availability of the DocTools, the status of the
corresponding queues and the number of documents present in them. For each
queue, there is also an associated error queue that contains documents that cannot
be processed because of an error.
DP Tools
The Monitor checks the availability of the DocTools. The status is Registered if the
DocTool has been started. Various messages may appear under Details for the
status:
Lazy

The DocTool is unoccupied. There are no documents available for processing.


Active

The DocTool is processing documents.


Disabled

The DocTool has been locked. To check this status, start Document Pipeline Info.
Here, all the queues that are associated with a locked DocTool are identified by
the locked symbol. In general, a DocTool is only locked if an error has occurred.
Once the problem has been analyzed and eliminated, restart the DocTool in
Document Pipeline Info.
Not registered

The DocTool has not been started.


DP Queues
Monitors all queues of the Document Pipelines and specifies the number of
documents in each queue. Precisely one DocTool is assigned to each queue. One
DocTool may be assigned to multiple queues. You can find the same queues in
Document Pipeline Info but with different names.
Usually, the documents are processed very quickly by the associated DocTool and
the queues are empty. The Empty status is specified. If there are documents in the
queue, the status is set to Not empty. In Details, you find the number of documents
in the queue. To analyze this situation, check the availability of the DocTool under
DP Tools and use the functions provided in Document Pipeline Info.
DP Error Queues
Monitors the error queues and specifies the number of documents in each queue.
There is an error queue for each ordinary queue. Documents in error queues cannot
be processed because of an error. The processing DocTool is specified for each


queue. You can find the corresponding queues in Document Pipeline Info but with
different names.
The error queues are usually Empty. If a DocTool cannot process a document, the
document is moved to the error queue. The status is set to Not empty. In Details,
you can see the number of unprocessed documents. If the same error occurs for all
the documents in this pipeline, then all the documents are gathered in the error
queue. The documents cannot be processed until the error has been eliminated and
the documents have been transferred for processing again with Restart in Document
Pipeline Info.
Error processing for DocTools
The following overview should provide you with guidelines on error processing.
Here, only the DocTools are listed. However, the comments apply to all queues that
use the corresponding DocTool.
...rot

A page of the scanned document cannot be rotated.


...provide

Archive and Storage Services cannot supply a document to the SAP host.

In the DocService component group, check the rc component. If Error is


displayed, Archive and Storage Services are not available and must be
restarted.

The network connection to the SAP host has been interrupted.

Check that there is sufficient free storage space in the exchange directory.

...cpfile

A document cannot be copied from the SAP host to the archive server.

Check the DP Space component group to determine whether there is


sufficient free space available for the document pipeline. Consider one of the
explanations above.

The network connection to the SAP host has been lost.

Problems with the exchange directory (shared file transfer directory that
must be available before the SAP host can be accessed).

...caracut

A collective document (outgoing document, OTF) cannot be subdivided into


single documents. Check the DP Space component group to determine whether
there is sufficient free space available for the pipeline. Consider one of the
explanations above.


...doctods

One or more documents cannot be archived.

In the DocService component group, check the wc component. If Error is


displayed, Archive and Storage Services are not available and must be
restarted.

Check the DS Pools component group. If Warning or Error is displayed for


the logical archive in which the document is to be archived or for the
corresponding disk buffer, there is no storage space available for archiving.
Please note the comments on DS Pools above.

...wfcfbc and ...notify

These DocTools are used to subdivide collective documents into single


documents. It is unusual for errors to occur here.
...cfbx

The response cannot be sent to the SAP system.

The connection to the SAP system is not established. Check the cbfx.log log
file for information on the possible error causes.

The configuration parameters for setting up the connection are incorrect.


Check the configuration of the SAP system and the archive in the Servers tab
in Archive Administration.

...docrm

The temporary data in the pipeline is not deleted following the correct
execution of all the preceding DocTools. Start Document Pipeline Info and
remove the documents in the corresponding error queue. You require special
access rights to do this.

27.2.8 Timestamp Service


The component monitors the working status of Open Text Timestamp Service and
some external Timestamp Servers. The monitoring must be configured with
Administration Client (Core Services > Configuration > Archive Server >
AS.TSTP.IXWATCH_VARS.IXTWATCH_TS_SYSTEM). Otherwise, the message
not being checked is shown.


Chapter 28 Auditing, Accounting and Statistics


28.1 Auditing
The auditing feature of Archive and Storage Services traces two kinds of events:

It records the document lifecycle, or history of a document: when the document
was created, modified, migrated, deleted, etc. These are the events of the
Document Service.

It records administrative jobs performed with Administration Client.


Important
Administrative changes are only recorded if they are done with
Administration Client. To get complete audit trails, make sure that other
configuration methods cannot be used, for example, editing configuration files
directly. At the very least, such changes must be logged by other means.

The auditing data is collected in separate database tables and can be extracted from
there with the exportAudit command to files, which can be evaluated in different
ways.

28.1.1 Configuring Auditing


The administrative auditing is permanently active. You cannot switch it off.
To audit the lifecycle of the documents, select the Compliance mode enabled option
of the archive. As the compliance mode is related to logical archives, enable it for
each archive that is subject to auditing.
The compliance mode also restricts the deletion of documents. If you want to audit
the jobs on documents but retention and compliance are not relevant, enable the
compliance mode and set the retention period to No retention.
For details, see Configuring the Archive Retention Settings on page 72.

28.1.2 Accessing Auditing Information


The auditing information is stored in the database, in two specific tables - one for the
document related information, the other for administration jobs. From there, you
can extract the data into files and then evaluate the files.


Note: If you need database reports adapted to your requirements, contact


Open Text Global Services.
To extract the data of a given timeframe to files, use the command
exportAudit [-s date] [-e date] [-A|-S] [-a] [-x] [-o ext] [-h] [-c sepchar]

The files are stored in the directory <OT var>/audit


You can define the timeframe for data extraction. Without these dates, you get all
audit data until the current date and time.
-s date
Start date and time of the timeframe. Format: YYYY/MM/DD:HH:MM:SS

-e date
End date and time of the timeframe. Format: YYYY/MM/DD:HH:MM:SS

-S
Extracts the information on documents that have been deleted in the given timeframe.
The filename is STR-<begin date>-<end date>-DEL.<ext>, where the extensions are .prt and .idx. The extracted data remains in the database.

-A
Extracts the audit information of administration jobs.
The resulting file is ADM-<begin date>-<end date>.txt in csv format; the data is separated by semicolons if no other separator is specified.

With further options, you can adapt the output to your needs.

-a
Only relevant for document lifecycle information (-S is set). Extracts data about all document related jobs in the given timeframe. The generated file name reflects this option with the ALL indicator: STR-<begin date>-<end date>-ALL.<ext>.

-x
Deletes data from the database after successful extraction. This option is not supported if -a is set, so only information on deleted documents can be removed from the database after extraction.

-o ext
Defines the file format. For example, with -o csv you get a .csv file for evaluation in Excel, independently of the extracted data.

-h
Adds a header line with column descriptions to the output file.

-c sepchar
Defines the separator character directly (e.g. -c ,) or as an ASCII number in 0x<val> syntax (e.g. -c 0x7c). The default separator is the semicolon. Consider changing the separator if it does not fit your Excel settings.

The following table gives an overview of the logged events.


Event                              Description
EVENT_CREATE_DOC                   Document created
EVENT_CREATE_COMP                  Document component created on volid1
EVENT_UPDATE_ATTR                  Attributes updated
EVENT_TIMESTAMPED                  Document timestamped on volid1 (dsSign, dsHashTree)
EVENT_TIMESTAMP_VERIFIED           Timestamp verified on volid1
EVENT_TIMESTAMP_VERIF_FAILED       Timestamp verification failed on volid1
EVENT_COMP_MOVED                   Document component moved from HDSK volid1 to volid2 (dsCD etc. with -d)
EVENT_COMP_COPIED                  Document component copied from volid1 to volid2 (dsCD etc. without -d)
EVENT_COMP_PURGED                  Document component purged from HDSK volid1 (dsHdskRm)
EVENT_COMP_DELETED                 Component deleted from volid1
EVENT_COMP_DELETE_FAILED           Component deletion from volid1 failed
EVENT_COMP_DESTROYED               Component destroyed from volid1
EVENT_DOC_DELETED                  Document deleted
EVENT_DOC_MIGRATED                 Document migrated
EVENT_DOC_SET_EVENT                setDocFlag with retention called
EVENT_DOC_SECURITY                 Security error when attempting to read doc

Example 28-1: Excel output of document audit information

Command:
exportAudit -S -s 2005/07/14:12:00:00 -e 2005/07/19:08:00:00 -o csv -h -a

The result of an extraction of document-related audit information in Excel may look as shown in the graphic. The options -S -o csv -a -h were set, which results in a filename like this:
STR-2005_07_14_12_00_00-2005_07_19_08_00_00-ALL.csv


The time is displayed in seconds since 1970/01/01. To convert it to a more readable format (DD/MM/YYYY hh:mm), you can use the following Excel formula:
=SUM(<timestamp cell>/86400;25569)
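
The same conversion can also be scripted when post-processing the exported files, for example before loading them into another reporting tool. The following Python sketch assumes a semicolon-separated export with the timestamp in the first column and uses hypothetical file names; adjust the column index, separator and file names to your actual exportAudit output.

import csv
from datetime import datetime, timezone

INFILE = "STR-2005_07_14_12_00_00-2005_07_19_08_00_00-ALL.csv"   # hypothetical export file
OUTFILE = "audit_readable.csv"

with open(INFILE, newline="") as src, open(OUTFILE, "w", newline="") as dst:
    reader = csv.reader(src, delimiter=";")
    writer = csv.writer(dst, delimiter=";")
    for row in reader:
        try:
            # Assumption: column 0 holds the timestamp in seconds since 1970/01/01.
            ts = int(row[0])
            row[0] = datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%d/%m/%Y %H:%M")
        except (ValueError, IndexError):
            pass  # header line or non-numeric field: leave the row unchanged
        writer.writerow(row)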

SYS_CLEANUP_ADMAUDIT job
Administrative audit information is kept in the database. If you never want to evaluate it, you can delete it from the database with the SYS_CLEANUP_ADMAUDIT job (command Audit_Sweeper). The job is normally deactivated and deletes data that is older than the number of days configured in Runtime and Core Services > Configuration > Archive Server > AS.ADMS.AUDITING.ADMS_AUDIT_MAX_RECORD_AGE.

28.2 Accounting

Archive and Storage Services allows you to collect accounting data for further analysis and billing.

Proceed as follows:

1. Enable the Accounting option and configure accounting in Runtime and Core Services > Configuration, see Settings for Accounting on page 290. The Document Service writes the accounting information into accounting files.
2. Evaluate the accounting data, see Evaluating Accounting Data on page 290.
3. Schedule the Organize_Accounting_Data job to remove the old accounting data (see Setting the Start Mode and Scheduling of Jobs on page 87).

data (see Setting the Start Mode and Scheduling of Jobs on page 87).

28.2.1 Settings for Accounting


The settings for accounting and for the Organize_Accounting_Data job are defined
in: Runtime and Core Services > Configuration > Archive Server >
AS.DS.ACCOUNT.xx. For detailed information on configuration settings, see part 2
"Configuration Reference: Archive and Storage Services, Document Pipeline,
Monitor Server and Monitor Web Client" in Open Text Administration Help - Runtime
and Core Services (ELCS100100-H-AGM).
Note: By default, accounting is enabled. To deactivate accounting, set the
property AS.DS.ACCOUNT.USE_ACCOUNTING to OFF.

28.2.2 Evaluating Accounting Data


Accounting files are CSV files; the data columns are separated by tabs. You can evaluate small files directly in Excel. Normally, you import the data from the files into a database like MS Access and use reports for evaluation. Make sure that you configure and schedule the Organize_Accounting_Data job in a way that only evaluated data is deleted or archived.
Table 28-1: Fields in accounting files

TimeStamp
   Time of the request in seconds after 01/01/1970. Example: 10/08/2001 10:15:14
JobNumber
   Internal request number; see Table 28-2 below. Example: 24
RequestTime
   Time to complete the execution of the request in 1/1000 s. Example: 422
Client Address
   IP address of the client (or proxy server). Example: 127.0.0.1
ContentServer
   ID of the logical archive. Example: DD
UserId
   Actual or automatically generated user ID. Example: <User name> or something like 149.235.35.28.20010912.10.44.54
ApplicationId
   ID of the application that sent the request. Example: dsh
DocumentId
   ID of the document that was affected by the request. Example: E429B8ED8FA6D511A0630050DA78D510
NumComponents
   Number of components involved in the request
ComponentId
   Component ID of one of the components involved in this request. Example: data
ContentLength
   Data size of the request in bytes

Table 28-2: Job numbers and names of requests

Job name         Job number
GETCOMP
PUTCOMP
CREATCOMP
UPDCOMP
APPCOMP
INFO
PUT
CREATE
UPDATE           10
LOCK             11
UNLOCK           12
SEARCHATTR       13
SEARCH           14
SEARCHFREE       15
DGET             16
GETATTR          17
SETATT           18
DELATTR          19
DELETE           20
MCREATE          23
PUTCERT          24
ADMINFO          25
SRVINFO          26
CSRVINFO         27
VALIDUSER        28
VERIFYSIG        29
SIGNURL          31
GETCERT          32
ANALYZE_SEC      34
RESERVEID        35
SETDOCFLAG       36
ADS_GETATS       37
ADS_VERIFYATS    38
ADS_MIGRATE      39
ADS_DOCHISTORY   40
ADS_CREPLACEH    41
ADS_CSRVINFO2    42
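
For larger volumes of accounting data, a small script can pre-aggregate the tab-separated files before you import them into a reporting database. The following Python sketch is a minimal example; it assumes the column order of Table 28-1 and uses a hypothetical file name, so adjust both to your environment before using it.

import csv
from collections import Counter

ACCOUNTING_FILE = "accounting.log"   # hypothetical file name; use your actual accounting file

requests_per_job = Counter()
bytes_per_archive = Counter()

with open(ACCOUNTING_FILE, newline="") as f:
    # Accounting files are CSV files with tab-separated columns (see Table 28-1).
    reader = csv.reader(f, delimiter="\t")
    for row in reader:
        if len(row) < 11:
            continue  # skip incomplete lines
        job_number = row[1]        # JobNumber (see Table 28-2); column index is an assumption
        archive_id = row[4]        # ContentServer: ID of the logical archive
        content_length = row[10]   # ContentLength: data size of the request in bytes
        requests_per_job[job_number] += 1
        if content_length.isdigit():
            bytes_per_archive[archive_id] += int(content_length)

print("Requests per job number:", dict(requests_per_job))
print("Bytes transferred per logical archive:", dict(bytes_per_archive))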

If you archive the old accounting data, you can also access the archived files. The
Organize_Accounting_Data job writes the DocIDs of the archived accounting files
into the ACC_STORE.CNT file which is located in the accounting directory (defined in
Path to accounting data files).
To restore archived accounting files, you can use the command


dsAccTool -r -f <target directory>

The tool saves the files in the <target directory> where you can use them as usual.

28.3 Storage Manager Statistics


For Storage Manager statistics, see OpenText Archive Server - STORM Configuration
Guide (AR-IST) in the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031).


Part 6
Troubleshooting

Chapter 29

Basics
This part is written as an introduction to troubleshooting and error analysis. It presents tools and methods that can help you find the cause of a problem. It does not provide solutions for individual problems or errors. This kind of information, and many useful hints and tips, can be found in the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031) and the Knowledge Base
(https://knowledge.opentext.com/knowledge/llisapi.dll/Open/Knowledge).

29.1 Avoiding Problems


Avoiding problems is still a better strategy than solving them. Therefore, you should consider these hints in your daily work:

• Back up the storage media, the database, and the STORM configuration files regularly.
• Use the Monitor Web Client to monitor Archive and Storage Services, so you can react quickly if a problem occurs.
• Check the job protocol in the Archive Administration.
• Make sure that there is enough space available (storage media, disk buffers, database, exchange directory...).
• Configure notifications that will be sent in case of problems (see Monitoring with Notifications on page 265).
• Follow the major upgrades of the software.
• Train your archive administrators and users.
• Take care of regular maintenance of your hardware. Hardware service contracts can help.

This documentation provides detailed instructions for configuration, maintenance and monitoring. If you maintain and administer your archive system in the described way, you can avoid many problems or recognize emerging problems early.

29.2 Viewing Installed Archive Server Patches


This utility lists all the patches installed on the archive server. If you are searching for a specific patch, the utility can be restricted to individual archive server software packages. This list is useful when you contact Open Text Customer Support.

Proceed as follows:

1. Select Utilities in the System object in the console tree. All available utilities are listed in the top area of the result pane.
2. Select the View Installed Archive Server Patches utility.
3. Click Run in the action pane.
4. In the field View patches for packages, enter the package whose patches you want to list. Leave the field empty to view all packages.
5. Click Run to start the utility.

A window with the installed patches opens.


See also:
• Utilities on page 221
• Checking Utilities Protocols on page 222

29.3 Correcting Wrong Installation Settings


The installation guides state the following about the directories for assembling the ISO images:

• The CDDIR and CDIMG directories must be different.
• The VAR directory must not be a subdirectory of either of these directories.

If, however, any of these parameters have been chosen inappropriately, you can still correct them by taking the following steps.

To correct the path of the CDDIR or CDIMG directories:

1. Create the two correct directories in the file system and make sure that they are owned and writable by the Archive Spawner user.
2. Correct the directory settings in the configuration:
   a. Start Administration Client and log on to the archive server.
   b. In the console tree, expand Runtime and Core Services > Configuration > Archive and Storage Services.
   c. In the result pane, right-click AS.DS.MEDIA.ISO.CDDIR, select Properties and set the Global Value to the correct absolute path of the CDDIR directory. Click OK.
   d. Analogously, right-click AS.DS.MEDIA.ISO.CDIMG, select Properties and set the Global Value to the correct absolute path of the CDIMG directory. Click OK.
3. Restart the Archive Spawner processes (for details, see Starting and Stopping of Archive and Storage Services on page 301).

29.4 Monitoring and Administration Tools


To monitor the archive system and to recognize problems, you use the Archive
Administration Utilities and tools delivered with the operation system.
Archive
Administration
Utilities
System tools

The Archive Administration Utilities are the Monitor Web Client, the Document
Pipeline Info and Administration Client. You can find a short summary of their use
in Everyday Monitoring of the Archive System on page 263.
The most important error messages are also displayed in the Windows Event
Viewer or in the UNIX syslog. This information is a subset of the information
generated in the log files. Use these tools to see the error messages for all
components at one place.
You can prevent the transfer of error messages to the system tools in general or for
single components with the setting Write error messages to Event Log / syslog, see
Log Settings for Archive and Storage Services Components (Except STORM) on
page 310.
To start the Windows Event Viewer, click
Start > Control Planel > Administrative Tools > Event Viewer.
The syslog file for UNIX is configured in the file /etc/syslog.conf.

29.5 Deleting Log Files


Archive and Storage Services log files
Log files record the jobs of the archive components. The number of log entries, and thus the size of the log files, depends on the log level that has been set. Check the size of the log files regularly and delete large files. They are automatically recreated when Archive and Storage Services are started. The log files for Archive and Storage Services can be found in the directory <OT logging>.

Important
Stop the Spawner before you delete the log files!

On client workstations, other log files are used. For more information, refer to the Imaging documentation.


Oracle database log files
The Oracle database also generates log and trace files for diagnostic purposes. As administrator, you should regularly check the size of the following files and delete them from time to time:

Windows:
<ORACLE_HOME>\network\log\listener.log (log file)
<ORACLE_HOME>\network\trace\* (trace files)
<ORACLE_HOME>\rdbms\trace\*.trc (trace files)

UNIX:
$ORACLE_HOME/network/log/listener.log (log file)
$ORACLE_HOME/network/trace (trace files)
$ORACLE_HOME/rdbms/log/*.trc (trace files)
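
If you want to check the log file sizes regularly without opening the directories by hand, a small script can report which files exceed a threshold. The following Python sketch only reports and does not delete anything, so the rule to stop the Spawner before deleting log files still applies; the logging directory path is a placeholder for your actual <OT logging> directory.

import os

LOG_DIR = r"/var/opentext/logging"   # placeholder for <OT logging>; adjust to your installation
THRESHOLD_MB = 50                    # report files larger than this size

for name in sorted(os.listdir(LOG_DIR)):
    path = os.path.join(LOG_DIR, name)
    if os.path.isfile(path):
        size_mb = os.path.getsize(path) / (1024 * 1024)
        if size_mb >= THRESHOLD_MB:
            print(f"{name}: {size_mb:.1f} MB")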


Chapter 30

Starting and Stopping of Archive and Storage Services

Archive and Storage Services and the database are automatically started by the operating system when the hardware is started. However, there are situations in which you have to start or stop Archive and Storage Services components manually without shutting down the hardware, e.g. when you back up the system data or when you perform system administration tasks that require a manual stop of Archive and Storage Services components. A restart can also help to figure out the cause of a problem.

After the restart, read the log file spawner.log in the directory <OT logging>. You can check whether all the processes have started correctly (see also Spawner Log File on page 307).

You can simply use the Archive Administration to start and stop Archive and Storage Services components. If the tool is not available, you can use the Windows Services or command line calls. Note that the order in which the components are started or stopped is important. Call the commands in the given order.

Note: The following commands are not valid for installations in cluster environments.

30.1 Starting and Stopping Under Windows


Under Windows, you can use the Services window or the command line to start and stop the components of Archive and Storage Services.

Starting

Windows Services
To start Archive and Storage Services using the Windows Services, proceed as follows:

1. To open the Windows Services, do one of the following:
   • On the desktop, right-click the My Computer icon and select Manage. Then open the Services and Applications directory and click Services.
   • Open the Control Panel, select Administrative Tools and then Services.
2. Right-click the following entries in the given order and select Start:
   • OracleServiceECR or MSSQLSERVER (Oracle or MS SQL database)
   • Oracle<ORA_HOME>TNSListener (only Oracle database)
   • Archive Spawner (archive components)

Command line
To start Archive and Storage Services from the command line, enter the following commands in this order:

net start OracleServiceECR (Oracle database) or net start mssqlserver (MS SQL database)
net start Oracle<ORA_HOME>TNSListener (Oracle database)
net start spawner (archive components)

Stopping

Windows Services
To stop Archive and Storage Services components using the Windows Services, proceed as follows:

1. On the desktop, right-click the My Computer icon and select Manage. The Computer Management window opens.
2. Open the Services and Applications directory and click Services.
3. Right-click the following entries in the given order and select Stop:
   • Archive Spawner (archive components)
   • Oracle<ORA_HOME>TNSListener (Oracle database)
   • OracleServiceECR (Oracle database) or MSSQLSERVER (MS SQL database)

Command line
To stop Archive and Storage Services components from the command line, enter the following commands in this order:

net stop spawner (archive components)
net stop Oracle<ORA_HOME>TNSListener (Oracle database)
net stop OracleServiceECR (Oracle database) or net stop mssqlserver (MS SQL database)
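
If you restart the Windows services frequently, the documented stop and start order can be wrapped in a small script. The following Python sketch simply calls the net commands shown above for an Oracle-based installation; the TNSListener service name contains your Oracle home and must be adjusted, and for an MS SQL installation you would use MSSQLSERVER instead of the two Oracle services.

import subprocess

# Service names as documented above. Replace <ORA_HOME> with your actual Oracle home name.
SERVICES_IN_START_ORDER = [
    "OracleServiceECR",
    "Oracle<ORA_HOME>TNSListener",
    "spawner",
]

def net(action, service):
    print(f"net {action} {service}")
    # check=False: report failures (e.g. service already stopped) without aborting
    result = subprocess.run(["net", action, service], check=False)
    print(f"  -> return code {result.returncode}")

# Stop in reverse order (archive components first, database last), then start again.
for service in reversed(SERVICES_IN_START_ORDER):
    net("stop", service)
for service in SERVICES_IN_START_ORDER:
    net("start", service)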

30.2 Starting and Stopping Under UNIX


The commands used to start and stop Archive and Storage Services differ slightly depending on the UNIX platform. You call a special script that calls component-specific scripts contained in the <OT install SPAWNER>/rc directory, for example:

S15MORA_ECR start (Oracle database, as user root)
S18BASE start (Archive and Storage Services, as user root)

Starting
Use the commands listed below to restart Archive and Storage Services after the archive system has been stopped without shutting down the hardware.

Proceed as follows:

1. Log on as root.
2. Start the archive system including the corresponding database instance with:

   HP-UX: /sbin/rc3.d/S910spawner start
   AIX: /etc/rc.spawner start
   Solaris: /etc/rc3.d/S910spawner start
   LINUX: /etc/init.d/spawner start

Stopping
Enter the commands below to terminate Archive and Storage Services manually.

Proceed as follows:

1. Log on as root.
2. Terminate the archive system and the database instance with:

   HP-UX: /sbin/rc3.d/S910spawner stop
   AIX: /etc/rc.spawner stop
   Solaris: /etc/rc3.d/S910spawner stop
   LINUX: /etc/init.d/spawner stop

Automatically terminating Archive and Storage Services on reboot or shutdown
Under Linux, HP-UX and Solaris, symbolic links to the startup scripts ensure that the archive system is automatically terminated when the host is shut down or rebooted.

Under AIX, insert the line sh /etc/rc.spawner stop into the /etc/rc.shutdown script to ensure automatic termination. After a new installation of AIX, this script does not exist; the system administrator must create it.

30.3 Starting and Stopping Single Services with spawncmd

Sometimes it can be helpful to start and stop only a single Archive and Storage Services process.

1. Under UNIX, load the Archive and Storage Services environment first: <OT config AS>/setup/profile.
2. Check the status of the process with spawncmd status (see Analyzing Processes with spawncmd on page 307).
3. Enter the command:

   spawncmd {start|stop} <process>

Description of parameters:

{start|stop}
   To start or stop the specified process.
<process>
   The process you want to start or stop. The name appears in the first column of the output generated by spawncmd status.

Important
You cannot simply restart a process if it was stopped, regardless of the reason. This is especially true for the Document Service, since its processes must be started in a defined sequence. If a Document Service process was stopped, it is best to stop all the processes and then restart them in the defined sequence. Inconsistencies may also occur when you start and stop the monitor program or the Document Pipelines this way.

Example 30-1: Start the Notification Server
spawncmd start notifSrvr

30.4 Setting the Operation Mode of Archive and Storage Services

Besides the normal operation mode, three maintenance modes are available. Thus, you can restrict the access to the archive server.

Proceed as follows:

1. From the Archive and Storage Services object in the console tree, select System.
2. Click Modify Operation Mode in the action pane. Select the operation mode:

   No maintenance mode
      No restrictions to access the server.
   Documents cannot be deleted, errors are returned
      Deletion is prohibited for all archives, no matter what is defined for the archive access. Errors are returned and a message informs about deletion requests.
   Documents cannot be deleted, no errors are returned
      No errors are returned and no information about deletion requests is given.
   Use full maintenance mode
      Clients cannot access Archive and Storage Services, and thus cannot display or archive documents. Only administration and access via the Administration Client is possible.

3. Click OK.

Chapter 31

Analyzing Problems
Note: The following commands and paths for log files are not valid for
installations in cluster environments.

31.1 Spawner Log File


The Spawner log file spawner.log provides an overview of all processes running on the archive server. It is recreated at every spawner start. After a restart, check this file to make sure all the processes were started correctly. You can also review this information in the Monitor Web Client, but under certain conditions you have faster access to the information in the log file. There is no specific log level for this log file.

31.2 Analyzing Processes with spawncmd


The Spawner starts all archive processes including the Storage Manager. By the
same token, when the Spawner is shut down, the archive processes are shut down.
You can also query the status of the archive processes, and stop and restart
individual processes. This can be useful when you are performing diagnostic
analysis of the archive processes.
Note: The spawner must be running on the machine for these commands to
take effect.
Command
Under UNIX, load the Archive and Storage Services environment first: <OT config AS>/setup/profile. Under all environments, open a command line and move to the directory where the spawner resides:

• <OT install AS>\bin for Windows
• <OT install AS>/bin for UNIX

To display a list of all spawner commands, enter the command spawncmd. The commands include:

• exit
• reread
• start <service>
• status
• stop <service>
• startall
• stopall

You can execute the commands startall, stopall, exit and status in the Archive Administration, with the corresponding commands in the File > Spawner menu.
Process status
To check the status of the processes, do one of the following:

• In the Archive Administration, on the File menu, select Spawner > Status.
• Enter spawncmd status in the command line.

A brief description of some processes is listed here:

Clnt_dp: Client to monitor the Document Pipelines
Clnt_ds: Client to monitor the Document Service
admsrv: Administration Server
dscache<1...4>: Document Service cache for a read component. There may be up to four of them running.
dsrc: Document Service read component
dswc: Document Service write component
ixmonsvc: Monitor server process
notifSrvr: Notification server process
dp: Document Pipelines
jbd: STORM daemon
tomcat: Web Server
timestamp: Timestamp Server
purgefiles: Removes log files of Tomcat
doctods, docrm, ...: Various DocTools

You get a result list with the following content:

• Process name, for example, Clnt_dp
• Process status:
  R means the process is running. All processes should have this status, with the exception of chkw (checkWorms), stockist and dsstockist; and under Windows, additionally db and testport.
  T means the process was terminated. This is the normal status of the processes chkw (checkWorms), stockist, and dsstockist; and under Windows, additionally db and testport. If any other process has the status T, it indicates a possible problem. The processes chkw, testport, and db are validation processes; stockist and dsstockist are initializing processes. They are terminated automatically as soon as they have finished their task.
  S means the spawner waits for the process to synchronize.
• Process ID, start and stop time

The information provided by this command is similar to that displayed by the Monitor Web Client. Invoking the information with this command may be faster, depending on your work environment. Although the Monitor displays more information about the objects, its information is not always completely up-to-date. On the other hand, the spawner does not have detailed information about the started processes, but its information about whether the processes are running or not is always up-to-date.

You can find information about the DocTools in the Document Pipeline Info. This interface allows you to start and stop single DocTools and to resubmit documents for processing.
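
If you want to check the process status from a script, for example on a schedule, you can parse the output of spawncmd status. The following Python sketch is a minimal example; it assumes that the process name is the first whitespace-separated column and the status letter (R, T, or S) the second, so verify this against the actual output on your system before relying on it.

import subprocess

# Processes for which the status T (terminated) is normal, as described above.
NORMALLY_TERMINATED = {"chkw", "stockist", "dsstockist", "db", "testport"}

output = subprocess.run(["spawncmd", "status"], capture_output=True, text=True).stdout

for line in output.splitlines():
    fields = line.split()
    if len(fields) < 2:
        continue
    name, status = fields[0], fields[1]
    # Assumption: column 1 is the process name, column 2 the status letter.
    if status == "T" and name not in NORMALLY_TERMINATED:
        print(f"Possible problem: process {name} is terminated")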

31.3 Working with Log Files


Log files are generated by the different components of Archive and Storage Services
to report on their operations. Log files are also generated for each DocTool as well as
for the read and write components of the Document Service. The result is a wealth
of diagnostic information.

31.3.1 About Log Files

Location
All log files of Archive and Storage Services components, including the STORM, are located in the same directory:
• Windows: <OT logging>
• UNIX: <OT logging>
The log file names indicate the processes.

If you have a problem
If a problem arises, carry out the following steps:

1. Check in the Monitor Web Client in which component of Archive and Storage Services the problem has occurred.
2. Locate the corresponding log file in Explorer. The protocol is written chronologically and the last messages are at the end of the file.
   Note: The system might write several log files for a single component, or several components might be affected by a problem. To make sure you have the most recent log files, sort them by date.

Log file analysis
When analyzing log files, consider the following:

• The message class, that is, the error type, is shown at the beginning of a log entry.
• The latest messages are at the end of the file.
  Note: In jbd.log, old messages are overwritten if the file size limit is reached. In this case, check the date and time to find the latest messages.
• Messages with identical time labels normally belong to the same incident.
• The final error message denotes which action has failed. The messages before it often show the reason for the failure.
• A system component may fail due to a previous failure of another component. Check all log files that have been changed at the same or similar time. The time labels of the messages help you to track the causal relationship.
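
Because the message class is written at the beginning of each log entry, you can also scan a whole logging directory for error classes with a short script. The following Python sketch looks for the FTL, IMP, SEC, ERR and WRN message types described in the next section; the logging directory path is a placeholder for <OT logging>, and the assumption that the message class is the first token of every entry should be checked against your own log files.

import os

LOG_DIR = r"/var/opentext/logging"   # placeholder for <OT logging>; adjust to your installation
ERROR_CLASSES = ("FTL", "IMP", "SEC", "ERR", "WRN")

for name in sorted(os.listdir(LOG_DIR)):
    if not name.endswith(".log"):
        continue
    path = os.path.join(LOG_DIR, name)
    with open(path, errors="replace") as f:
        for line in f:
            # Assumption: the message class is the first token of the log entry.
            if line.startswith(ERROR_CLASSES):
                print(f"{name}: {line.rstrip()}")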

31.3.2 Setting Log Levels

To set log levels, configure the corresponding entries in the *.setup files of the component. The *.setup files are stored in <OT config AS>\setup. To configure the STORM log levels, see Log Levels and Log Files for the STORM on page 312.

Note: If log levels are changed, the component must be restarted.

31.3.3 Log Settings for Archive and Storage Services Components (Except STORM)

The log settings are configured for each component of Archive and Storage Services. The *.setup files are stored in <OT config AS>\setup, for example DS.setup for the Document Service. Default values are set during installation.

Permanent log levels
The following incidents are always written to the log files, and usually also to the Event Viewer or syslog. You cannot switch off the corresponding log levels.

• Fatal errors indicate fatal application errors that mostly lead to server crashes (message type FTL).
• Important errors (message type IMP).
• Security errors indicate security violations such as invalid signatures (message type SEC).
• Errors indicate serious application errors (message type ERR).
• Warnings indicate potential problem causes (message type WRN).

Log levels for troubleshooting
The following log levels are relevant for troubleshooting. You can change them in the Server Configuration, see Setting Log Levels on page 310.

Important
Higher log levels can generate a large amount of data and can even slow down the archive system. Reset the log levels to the default values as soon as you have solved the problem. Delete the log files only after you have stopped the spawner.


Maximum size of a log file (variable MAXLOGSIZE)
   Default: 100000 bytes. If the log file exceeds this size, it is renamed to <filename>.old and a new log file is created. If there is an old <filename>.old, it is dropped.

Log Info Messages (message type INF, variable LOG_INFO)
   Default: off. Logs read configuration entries and received commands. Most useful for troubleshooting and detecting configuration faults.

Log Database Debug Messages (message type DB, variable LOG_DB)
   Default: off. Logs all jobs concerning the database. Caution: high amount of logging information, huge log files, performance loss!

Log HTTP Data (no message type, no time label; variable LOG_HTTP); only for Document Service (persistent)
   Default: off. Traces data transmitted via HTTP.

Log Error Messages (message type ERR, variable LOG_ERROR); only for BASE package
   Default: on. Serious application errors. Do not switch off!

Log Warning Messages (message type WRN, variable LOG_WARN)
   Default: on. Conditions that cause problems. Do not switch off!

Log Debug Messages (message type DBG, variable LOG_DEBUG)
   Default: off. Debug information. Caution: high amount of logging information, huge log files, performance loss!

Log RPC Messages (message type RPC, variable LOG_RPC)
   Default: off. RPC calls.

Log Entry/Leave Messages (message type ENT, variable LOG_ENTRY); only for BASE package and Document Service (persistent)
   Default: off. Messages if a function is entered or left.

Time setting
In addition to the log levels, you can define the time label in the log file for each component. Normally, the time is given in hours:minutes:seconds. If you select Log using relative time, the time elapsed between one log entry and the next is given in milliseconds instead of the date, in addition to the normal time label. This is used for debugging and fine tuning.


31.3.4 Log Levels and Log Files for the STORM

The logging of the STORM differs from the logging of other archive components; see the STORM Configuration Guide in the Open Text Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/fetch/2001/744073/3551166/customview.html?func=ll&objId=3551166).


Glossary
Administration Client (former Archiving and Storage Administration)
Administration tool for setup and maintenance of servers, logical archives,
devices, pools, disk buffers, archive modes and security settings.
Frontend interface for customizing and administering Archive and Storage
Services.
Annotation
The set of all graphical additions assigned to individual pages of an archived
document (e.g. coloured marking). These annotations can be removed again.
They simulate hand-written comments on paper documents. There are two
groups of annotations: simple annotations (lines, arrows, highlighting etc.) and
OLE annotations (documents or parts of documents which can be copied from
other applications via the clipboard).
See also: Notes.
Archive Cache Services
See: Cache Server
Archive ID
Unique name of the logical archive.
Archive mode
Specifies the different scenarios for the scan client (such as late archiving with
barcode, preindexing).
ArchiveLink
The interface between SAP system and the archive system.
Buffer
Also known as disk buffer. It is an area on hard disk where archived
documents are temporarily stored until they are written to the final storage
media.
Burn buffer
A special burn buffer is required for ISO pools in addition to a disk buffer. The
burn buffer is required to physically write an ISO image. When the specified
amount of data has accumulated in the disk buffer, the data is prepared and transferred to the burn buffer in the special format of an ISO image. From the burn buffer, the image is transferred to the storage medium in a single, continuous, uninterruptible process referred to as burning an ISO image. The burn buffer is transparent for the administration.
Cache
Memory area which buffers frequently accessed documents.
The archive server stores frequently accessed documents in a hard disk volume
called the Document Service cache. The client stores frequently accessed
documents in the local cache on the hard disk of the client.
Cache Server
A separate machine on which documents are stored temporarily. That way, the network traffic in the WAN is reduced. On a cache server, Archive Cache Services are running.
Device
Short term for storage device in the archive server environment. A device is a
physical unit that contains at least storage media, but can also contain additional
software and/or hardware to manage the storage media. Devices are:

local hard disks

jukeboxes for optical media

virtual jukeboxes for storage systems

storage systems as a whole

Digital Signature
Digital signature means an electronic signature based upon cryptographic
methods of originator authentication, computed by using a set of rules and a set
of parameters such that the identity of the signer and the integrity of the data can
be verified. (21 CFR Part 11)
Disk buffer
See: Buffer
DocID
See: Document ID (DocID)
DocTools
Programs that perform single, discrete actions on the documents within a
Document Pipeline.


Document ID (DocID)
Unique string assigned to each document with which the archive system can
identify it and trace its location.
Document Pipeline (DP)
Mechanism that controls the transfer of documents to the Document Service at a
high security level.
Document Pipeline Info
Graphical user interface for monitoring the Document Pipeline.
Document Service (DS)
The kernel of the archive system. It receives and processes documents to be
archived and provides them at the client's request and controls writing processes
to storage media.
It consists of a read component (RC) and a write component (WC) which archives
documents.
DP
See: Document Pipeline (DP)
DPDIR
The directory in which the documents are stored that are being currently
processed by a document pipeline.
DS
See: Document Service (DS)
Hard disk volume
Used as an archive medium, it supports incremental writing as well as deletion
of documents with a strictly limited lifetime, such as paperwork of applicants
not taken on by a company. Hard disk volumes must be created and assigned a
mount path on the operating system level before they can be referred to in the
Archive Administration.
Hot Standby
High-availability archive server setup, comprising two identical archive servers
tightly connected to each other and holding the same data. If the first server fails, the second one immediately takes over, thus enabling (nearly) uninterrupted archive system operation.
ISO image
An ISO image is a container file containing documents and their file system
structure according to ISO 9660. It is written at once and fills one volume.


Job
A job is an administrative task that you schedule in the Archive Administration
to run automatically at regular intervals. It has a unique name and starts
command which executes along with any argument required by the command.
Known server
A known server is an archive server whose archives and disk buffers are known
to another archive server. Making servers known to each other provides access to
all documents archived in all known servers. Read-write access is provided to
other known servers. Read-only access is provided to replicate archives. When a
request is made to view a document that is archived on another server and the
server is known, the inquired archive server is capable of displaying the
requested document.
Log file
Files generated by the different components of Archive and Storage Services to
report on their operations providing diagnostic information.
Log level
Adjustable diagnostic level of detail on which the log files are generated.
Logical archive
Logical area on the archive server in which documents are stored. The archive
server may contain many logical archives. Each logical archive may be
configured to represent a different archiving strategy appropriate to the types of
documents archived exclusively there. An archive can consist of one or more
pools. Each pool is assigned its own exclusive set of volumes which make up the
actual storage capacity of that archive.
Media
Short term for long term storage media in the archive server environment. A
media is a physical object: optical storage media (CD, DVD, WORM, UDO), hard
disks and hard disk storage systems with or without WORM feature. Optical
storage media are single-sided or double-sided. Each side of an optical media
contains a volume.
Monitor Server (MONS)
Obtains status information about archives, pools, hard disk and database space
on the archive server. MONS is the configuration parameter name for the
Monitor Server.
Monitor Web Client
Web based administration tool for monitoring the state of the processes, storage
areas, Document Pipeline and database space of the archive server.


MONS
See: Monitor Server (MONS)
Notes
The list of all notes (textual additions) assigned to a document. An individual
item of this list should be designated as note. A note is a text that is stored
together with the document. This text has the same function as a note clipped to
a paper document.
Open Text Monitor Web Client
See: Monitor Web Client
Pool
A pool is a logical unit, a set of volumes of the same type that are written in the
same way, using the same storage concept. Pools are assigned to logical archives.
RC
See: Read Component (RC)
Read Component (RC)
Part of the Document Service that provides documents by reading them from the
archive.
Remote Standby
Archive server setup scenario including two (or more) associated archive
servers. Archived data is replicated periodically from one server to the other in
order to increase security against data loss. Moreover, network load due to
document display actions can be reduced since replicated data can be accessed
directly on the replication server.
Replication
Refers to the duplication of an archive or buffer resident on an original server on
a remote standby server. Replication is enabled when you add a known server to
the connected server and indicate that replication is to be allowed. That means,
the known server is permitted to pull data from the original server for the
purpose of replication.
Scan station
Workstation for high volume scanning on which the Enterprise Scan client is
installed and to which a scanner is connected. Incoming documents are scanned
here and then transferred to Archive and Storage Services.


Slot
In physical jukeboxes with optical media, a slot is a socket inside the jukebox
where the media are located. In virtual jukeboxes of storage systems, a slot is
virtually assigned to a volume.
Spawner
Service program which starts and terminates the processes of the archive system.
Storage Manager
Component that controls jukeboxes and manages storage subsystems.
Timestamp Server
A timestamp server signs documents by adding the time and signing the
cryptographic checksum of the document. To ensure evidence of documents, use
an external timestamp server like Timeproof or AuthentiDate. Open Text
Timestamp Server is a software that generates timestamps.
Timestamp Server Administration
Configuration tool for Open Text Timestamp Server.
Volume

• A volume is a memory area of a storage medium that contains documents. Depending on the device type, a device can contain many volumes (e.g. real and virtual jukeboxes), or is treated as one volume (e.g. storage systems without virtual jukeboxes). Volumes are attached, or better, assigned or linked logically, to pools.
• Volume is a technical collective term with different meanings in STORM and Document Service (DS). A DS volume is a virtual container of volumes with identical documents (after the complete backup is written). A STORM volume is a virtual container of all identical copies of a volume. For ISO volumes, there is no difference between DS and STORM volumes. Regarding WORM (IXW) volumes, the STORM differentiates between original and backup; they are different volumes, while DS considers original and backup together as one volume.

WC
See: Write Component (WC)
Windows Viewer
Component for displaying, occasional scanning with Twain scanners and
archiving documents. The Windows Viewer can attach annotations and notes to
the documents.


WORM
WORM means Write Once Read Multiple. An optical WORM disk has two
volumes. A WORM disk supports incremental writing. On storage systems, a
WORM flag is set to prevent changes in documents. UDO media are handled like
optical WORMs.
Write Component (WC)
Component of the Document Service that carries out all possible modifications. It is
used to archive incoming documents (store them in the buffer), modify and
delete existing documents, set, modify, and delete attributes, and manage pools
and volumes.
Write job
Scheduled administrative task which regularly writes the documents stored in a
disk buffer to appropriate storage media.


Index

A
Accounting 290
Administration
Archive and Storage Services 39
Archive Server 39
Administration Client 39
Alerts 270
ArchiSig
configuration 111
job 111
migrating document timestamps 112
renewing timestamps 112
ArchiSig timestamps 107
Archive
logical 31
Archive Access 68
Archive and Storage Services
connection to SAP 143
main components 26
starting (manually) 301
stopping (manually) 301
Archive and Storage Services components
log settings (except STORM) 310
processes 308
Archive Cache Server 173
configuring 179
Archive Cache Services 173
Configuring 179
main components 26
Archive database
MS SQL Server (Backup) 216
Oracle 216
Archive mode 149
adding and modifying 151
assigning to a 154
scan host assignment 154
scenarios 149
settings 152


Archive Server
connection to SAP 143
Archives
(See also Logical archives)
access restriction 68
configuration settings 70
retention settings 72
security 68
B
Backup
database 214
Backups 213
Cache Server 216
data on storage system 203
IXW volumes 209
MS SQL Server 216
optical media 206
Oracle 216
Storage Manager configuration 216
Blobs 70
Buffer 33
C
Cache
local 52
Cache Server 173
configuring 179
Caches 37
Certificate For Authentication 96
Certificates
importing certificate for authentication 96
importing certificate for Timestamp Verification 110
key store, export and import 103
re-encrypt key store 102
verifying 103
Checking
finalization status 187
Checksums 106
Commands
spawncmd 307


Components 29
Conditions in archive mode 153
Configuration
certificates 125
Configuring
Archive Cache Server 179
Archive Cache Services 179
caches 37
Connection to SAP 143
Container file storage 34
Content 29
Conventions
Conventions in this documentation 20
cscommand utility 217
D
Data compression 64
Database
backup 214
change password 215
Devices
attaching 58
detaching 58
storage 56
Disk buffer 33, 47
DocService
See Document Service
Document Pipeline Info 263
Document protection level 93
Document Service 281
Documents 29
encryption 101
DP error queues 283
DP Queues 283
DP space 280
DP Tools 283
DS DP Error Queues 282
DS DP Queues 282
DS DP Tools 282
DS pools 281
dsHashTree 112
dsReHashTree 113
dsReSign 112
E
Edit
policy 137
Edit Configuration 70
Email notification 270


Encryption 101
Enterprise scan
assigning archive mode 154
Error queues 283
Event Filters 265
Events 265
examples 268
Events and Notifications 265
Exporting
volumes 192
F
Fast migration 227
Feedback 19
Finalization
automatic 185
error 188
volume, manually 186
FS pool 36
creating 75
G
Groups 134
GS 36
H
HDSK pool 36
creating 74
I
Illustrations 15
Implicit user 139
import Certificate for Timestamp
Verification 110
Importing
damaged media 196
volumes 193
Installation directories 27
Intializing
automatic 61
manual 61
ISO media
backups 207
ISO pool 35
creating 75
ISO volumes
recovery 208
ixoscert.pem 100


IXW pool 35
creating 75
IXW volume
restore 211
IXW volumes
backups 209
J
Job
typs 83
Job protocol 83
Jobs 37
checking 86
configuring 83
protocol 88
Jukeboxes
attaching 58
detaching 58
L
Local cache 52
Log files
location 309
STORM 312
Log levels
where and how 310
Log settings
Archive and Storage Services except STORM 310
Logical archive 31
Logical archives 63
naming conventions 63
Lost&Found 196
M
Media
migration 227
Migration 227
fast 227
media 227
remote 227
Monitor Web Client 263, 275
add host 278
customizing 279
program window 276
refresh view 278
Starting 275


Monitoring
accounting 290
configuring notifications 265
Document Service 281
N
Name
STORM server 58, 221
Naming conventions 63
Notifications 265
configuring 269
event examples 268
event specification 265
events 265
types and settings 270
variables 272
O
Offline import 59
Open Text Administration 39
Open Text Online 19
Optical media
backups 206
removing from jukebox 206
Overview
Archive and Storage Services 26
Archive Cache Services 26
Timestamp Server 114
P
Password
database 215
Passwords 133
Lockout 134
Lost 133
Minimum length 133
Security 133
Settings 133
Unlock 134
Policies 135
Policy
checking 136
creating and modifying 137
overview 135
Pool 35
types 74
Pool types
HDSK 36


ISO 35
IXW 35
single file (FS) 36
single file (VI) 36
Problem analysis 309
Processes
important processes 308
start and stop 303
status 308
Protection levels 93
Protocol
Jobs 88
Purge Buffer job 33
putcert 99
Q
Queues
monitor display 283
R
recIO 103
Recover
IXW volume 211
Recovery 213
Cache Server 216
ISO volumes 208
Remote migration 227
Remote Standby Server 161
Restore
ISO volumes 208
IXW volume 211
Restoring
See Recovery
Retention 65
Retention settings 72
RSS
See Remote Standby Server
S
SAP as leading application
configuring connection 143
Scan
scenarios 149
Scan hosts
configuring 149
Scan station
archive mode 151


Scan stations
configuring 149
Scheduled
jobs 37
SecKeys 92
from other applications 98
from SAP 98
importing certificates 98
Security
analyzing settings 105
checksums 91, 106
encrypted document storage 91
encryption 101
importing certificate for authentication 96
importing certificate for Timestamp Verification 110
key store encryption 102
overview 91
SecKeys 92
SecKeys/Signed URL 91, 92
SSL 91, 100
timestamps 91
Timestamps 107
verifying certificate 103
Set Encryption Certificates 102
Signature renewal
configuring 111
renewing hash tree 113
Single file (FS) 36
Single file (VI) 36
Single file storage 34
Single instance 65
spawncmd 307
Spawner
commands 307
SSL 100
Standard users 135
Start
utilities 222
Starting
Archive and Storage Services (UNIX) 302
Archive and Storage Services (Windows) 301
Statistics
Storage Manager 293
Status
finalization 187
Status checks
location 131


status 127
Stopping
Archive and Storage Services (UNIX) 302
Archive and Storage Services (Windows) 301
Storage devices 56
Storage Manager
monitor display 280
Storage Manager configuration
backup 216
Storage media
checking 198
offline import 59
Storage scenarios 34
Storage system
dependency on pool type 35
Storage systems
backups 203
Storage type 34
STORM
log files 312
STORM server
name 58, 221
System key 101
T
Timestamp renewal 108
Timestamps 107
Troubleshooting
avoid problems 297
problem analysis 309
Typography 20

overview 221
Set Encryption Certificates 102
start 222
V
Variables
in notifications 272
VI pool 36
creating 75
Virus protection 92
vmclient 227
VolMig 227
Volume
finalization 185
Volumes
unavailable 62
W
Web Monitor
See Monitor Web Client
Workflow in archive mode 153
WORM
damaged 196
Write at once 35
Write files incrementally 35
Write job 33
Write through 36

U
Unavailable volumes 62
User
adding 138
new 138
User groups 134
add policy 139
add user 139
setting up 139
Users 135
setting up groups 139
standard 135
Utilities
import Certificate for Timestamp Verification 110
importing Certificate for Authentication 96
