

IBM SPSS Data Collection Base Professional 6.0.1 User’s Guide
Note: Before using this information and the product it supports, read the general information
under Notices on p. 1589.

This edition applies to IBM SPSS Data Collection Base Professional 6.0.1 and to all subsequent
releases and modifications until otherwise indicated in new editions.
Adobe product screenshot(s) reprinted with permission from Adobe Systems Incorporated.
Microsoft product screenshot(s) reprinted with permission from Microsoft Corporation.

Licensed Materials - Property of IBM
© Copyright IBM Corporation 2000, 2011

U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.
Preface
Welcome to the IBM® SPSS® Data Collection Base Professional 6.0.1 User’s Guide. This guide
provides information on using the IBM® SPSS® Data Collection Base Professional application.
For information about installing the product, see the IBM SPSS Data Collection Desktop 6.0.1
Installation Guide.
Adobe Portable Document Format (.pdf) versions of the guides are available on the IBM SPSS
Data Collection Desktop 6.0.1 DVD-ROM. Viewing and printing the documents requires Adobe
Reader. If necessary, you can download it at no cost from www.adobe.com. Use the Adobe Reader
online Help for answers to your questions regarding viewing and navigating the documents.

Notice: IBM® SPSS® Data Collection offers many powerful functions and features for use in
the business of our customers. IBM is not responsible for determining the requirements of laws
applicable to any licensee’s business, including those relating to Data Collection Program, nor that
IBM’s provision of (or any licensee’s receipt of) the Program meets the requirements of such laws.
All licensees shall comply with all laws applicable to use and access of the Program, whether such
use or access is standalone or in conjunction with any third party product or service.

About IBM Business Analytics


IBM Business Analytics software delivers complete, consistent and accurate information that
decision-makers trust to improve business performance. A comprehensive portfolio of business
intelligence, predictive analytics, financial performance and strategy management, and analytic
applications provides clear, immediate and actionable insights into current performance and the
ability to predict future outcomes. Combined with rich industry solutions, proven practices and
professional services, organizations of every size can drive the highest productivity, confidently
automate decisions and deliver better results.

As part of this portfolio, IBM SPSS Predictive Analytics software helps organizations predict
future events and proactively act upon that insight to drive better business outcomes. Commercial,
government and academic customers worldwide rely on IBM SPSS technology as a competitive
advantage in attracting, retaining and growing customers, while reducing fraud and mitigating
risk. By incorporating IBM SPSS software into their daily operations, organizations become
predictive enterprises – able to direct and automate decisions to meet business goals and achieve
measurable competitive advantage. For further information or to reach a representative visit
http://www.ibm.com/spss.

Technical support
Technical support is available to maintenance customers. Customers may contact Technical
Support for assistance in using IBM Corp. products or for installation help for one of the
supported hardware environments. To reach Technical Support, see the IBM Corp. web site
at http://www.ibm.com/support. Be prepared to identify yourself, your organization, and your
support agreement when requesting assistance.

Contents
1 Base Professional 1
Welcome to IBM SPSS Data Collection Base Professional ... 1
What’s new in IBM SPSS Data Collection Base Professional 6.0.1 ... 2
Getting Started ... 4
    Creating Your First mrScriptBasic Script ... 4
Using IBM SPSS Data Collection Base Professional ... 11
    Which File Types Can I Work With? ... 12
    The IBM SPSS Data Collection Base Professional Window ... 13
    Working with Templates ... 21
    Working with Macros ... 25
    Using the Workspace Feature ... 27
    Using the Metadata Viewer ... 28
    Using the Connection String Builder ... 31
    Using Context-Sensitive Help ... 42
    Using ScriptAssist ... 43
    Finding and Replacing Text ... 44
    Debugging ... 46
    IBM SPSS Data Collection Base Professional in Other Languages ... 53
    Using IBM SPSS Data Collection Base Professional to develop interviews ... 54
    The IBM SPSS Data Collection Base Professional menu ... 91
    IBM SPSS Data Collection Base Professional toolbars ... 93
    IBM SPSS Data Collection Base Professional keyboard shortcuts ... 99
    IBM SPSS Data Collection Base Professional options ... 101
Notes for IBM SPSS Quantum Users ... 111
    The Big Picture ... 113
Activating questionnaires ... 113
    Activation templates ... 117
IBM SPSS Data Collection Activation Console ... 195
    Activation History tab ... 196
    Filters tab ... 197
    Settings tab ... 198
Local Deployment Wizard overview ... 199
    Usage options ... 199
    Validation options ... 200
    Routing options - data entry ... 200
    Routing options - live interviewing ... 201
    Display options ... 201
    Deployment options ... 201

    Expiry date and time options ... 202
    Summary options ... 202
Activation Settings ... 203
Data Management Scripting ... 204
    Data Management Overview ... 206
    Getting Started ... 208
    Understanding the Process Flow ... 231
    Data Management Script (DMS) File ... 241
    DMS Runner ... 289
    WinDMSRun ... 300
    Transferring Data Using a DMS File ... 310
    Working with IBM SPSS Data Collection Interviewer Server Data ... 356
    Merging Case Data ... 373
    Data Cleaning ... 388
    Working with the Weight Component ... 406
    Creating New Variables ... 433
    Analyzing a Tracking Study ... 442
    Table Scripting in a Data Management Script ... 451
    Samples ... 466
    Data Management Reference ... 482
Interview Scripting ... 503
    What’s New in Interview Scripting 6.0.1 ... 504
    Getting Started ... 504
    Writing Interview Scripts ... 522
    Testing Interview Scripts ... 984
    Activating Interview Scripts ... 987
    Sample Management ... 992
    Quota Control ... 1076
    Interview Scripting Reference ... 1086
Table Scripting ... 1140
    Getting Started ... 1142
    Table Specification Syntax ... 1192
    Cell Contents ... 1247
    Hierarchical Data ... 1271
    Statistical Tests ... 1310
    Table Presentation ... 1396
    Annotations ... 1432
    Working with Profile Tables ... 1443
    Working with Metadata ... 1453
    Working with Change Tracker ... 1469
    Working with the Variables Interface ... 1475
    Exporting Tables ... 1477
    Working with Axis Expressions ... 1536
    Sample Table Scripts ... 1541
    Limits ... 1555
    Table Object Model Reference ... 1556
    Metadata Services Object Model Reference ... 1560
    QsfTom component object model reference ... 1560
Accessibility Guide ... 1560
    Keyboard Navigation ... 1561
    Accessibility for the Visually Impaired ... 1561
    Accessibility for Blind Users ... 1561
    Special Considerations ... 1561
Troubleshooting ... 1562
    IBM SPSS Data Collection Base Professional FAQs ... 1562
    Error Messages ... 1565
    Data Management Troubleshooting and FAQs ... 1565

Appendix

A Notices 1589

Index 1592

Chapter 1
Base Professional
Welcome to IBM SPSS Data Collection Base Professional
IBM® SPSS® Data Collection Base Professional is a complete set of tools that supports the
building of automated market research processes. Base Professional includes an integrated
development environment (IDE) that enables you to create, edit, run, and debug IBM® SPSS®
Data Collection scripts. In addition, Base Professional comes with the internal components that
enable you to create scripts that perform various data management tasks.

Two additional options are available for Base Professional:


■ The Tables Option includes components that enable you to create batch tables using a script.
■ The Interview Option includes components that enable you to develop and test interviews, and activate them in IBM® SPSS® Data Collection Interviewer Server.

The documentation for Base Professional includes getting started topics, step-by-step instructions,
conceptual overviews, numerous examples, information about the samples and command prompt
tools, and extensive reference material. The following table provides a summary of the major
sections in the Base Professional documentation.
What’s new in IBM SPSS Data Collection Base Professional 6.0.1: Provides a summary of the main changes in this section.
Getting Started: A brief introduction to writing scripts in Base Professional.
Using IBM SPSS Data Collection Base Professional: How to use the Base Professional integrated development environment (IDE).
Notes for IBM SPSS Quantum Users: A section designed to help Quantum users get started with Base Professional.
Data Management Scripting: Detailed documentation on using Base Professional to perform data management-related tasks. This section includes a Getting Started section, details of the DMS file, information on cleaning and transferring data, setting up weighting schemes, and creating new variables. Detailed reference material on the Data Management Object Model (DMOM) and the Weight component object model is also provided.
Interview Scripting: Describes how to use the Base Professional Interview Option to create interviews that can be activated in version 3.0 (or later) of Interviewer Server.
Table Scripting: Documentation on using the Base Professional Tables Option to create batch tables using a script.
Troubleshooting: FAQs and error messages.


What’s new in IBM SPSS Data Collection Base Professional 6.0.1


What’s new in IBM® SPSS® Data Collection Base Professional 6.0.1 is summarized under the
following headings:
■ IBM® SPSS® Data Collection Base Professional Installation
■ Base Professional IDE
■ IBM® SPSS® Data Collection Question Repository
■ Data Management
■ Interview Scripting
■ Table Scripting
■ Table Scripting Samples

Installation

x64 64-bit support. x64 64-bit editions are now provided for the IBM® SPSS® Data Collection
applications (note that IBM® SPSS® Data Collection Author Server Edition and IBM® SPSS®
Data Collection Survey Reporter Server Edition are only provided as x86 32-bit). Refer to the
appropriate Data Collection installation guide for more information.

Fix pack and hotfix information. You can now view information regarding which fix packs and
hotfixes are installed via the application’s Help menu.
Help > About Base Professional... > Details...

IDE

Data Management

Integration with IBM SPSS Collaboration and Deployment Services Repository. IBM® SPSS® Data Collection 6.0.1 provides support for storing and retrieving .mrz and .dmz packages (zip archives) to an IBM® SPSS® Collaboration and Deployment Services Repository. A package is an executable element of Data Collection.

A .dmz package contains a primary .dms file, a configuration file for the primary .dms file, and any other internal include files.

An .mrz package contains a primary .mrs file, a configuration file for the primary .mrs file, and any other internal include files.

IBM® SPSS® Collaboration and Deployment Services is used as a job scheduling and
configuration platform. User-configured script items are exposed to IBM SPSS Collaboration and
Deployment Services, but IBM SPSS Collaboration and Deployment Services will not execute
any part of a Data Collection script. User-configured items include parameters and store locations,
access permissions, and output file properties.

Base Professional supports the following integration with the IBM SPSS Collaboration and
Deployment Services Repository:
■ Script Packager component. The component provides support for generating deployable .mrz, .dmz, and .mtz packages (zip archives) for the purpose of integration with the IBM SPSS Collaboration and Deployment Services Repository. For more information, see the topic Script Packager Component Object Model on p. 501.
■ IBM SPSS Collaboration and Deployment Services comment block. The IBM SPSS Collaboration and Deployment Services comment block defines parameters for .mrs and .dms scripts. The new CaDSCommentBlock.dms sample provides an IBM SPSS Collaboration and Deployment Services comment block example in .dms format. For more information, see the topic Sample DMS Files on p. 467.
■ DMS Runner. DMS Runner provides support for the /loc:<location> option. The option allows you to specify which location is used when working with a .dmz package (zip archive). For more information, see the topic DMS Runner on p. 289.
■ mrScript Command Line Runner. The mrScript Command Line Runner provides support for the /loc:<location> option. The option allows you to specify which location is used when working with an .mrz package. A package is an executable element of Data Collection; each package contains a main script that serves as the execution entry point and a set of scripts that are included in the main script. Packages support script integration with the IBM SPSS Collaboration and Deployment Services Repository.
■ Data Collection Execution Server. Provides the web services that process the zip archive packages and associated configuration files. The server executes the scripts and returns the output variables and output files via a web service response. The server also supports IBM SPSS Collaboration and Deployment Services job step cancellation.
■ Data Collection IBM SPSS Collaboration and Deployment Services example. The Data Collection IBM SPSS Collaboration and Deployment Services example provides an IBM SPSS Data Collection integration scenario with IBM SPSS Collaboration and Deployment Services. The example is stored in the [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\Collaboration Deployment Services directory. Refer to The Data Collection Collaboration and Deployment Services example for more information.

Refer to the Introduction to IBM SPSS Collaboration and Deployment Services Repository
integration section in the IBM® SPSS® Data Collection Developer Library for more information.

Project expiry setting in Local Deployment Wizard. The Local Deployment Wizard now provides an expiry date and time step that allows you to define the project’s expiration date and time (UTC time). Defining a project expiration date and time allows interviewers to easily identify expired projects. For more information, see the topic Local Deployment Wizard overview on p. 199.

Interview Scripting

Support for reserved names and keywords in metadata. Data Collection now provides full support
for SQL and mrScript reserved names and keywords in metadata variables. In previous releases,
the use of reserved SQL keywords could cause issues when using the IBM® SPSS® Data

Collection Data Model to query data for processes such as DMOM; the use of reserved mrScript
keywords could cause syntax errors when referenced within a routing script.

Refer to the relevant topics for more information.

Auto Answer feature enhancements. The Auto Answer feature has been updated to support more
robust auto answer playback capabilities including adding, editing, changing, and removing
data sources connections, defining the number of cases, and so on. For more information, see
the topic Auto Answer dialog box on p. 66.

Census.mdd hierarchical data example. The new Census.mdd hierarchical data example is included in the Data Collection Developer Library at [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Interview\Frequently Asked Questions\Loops and Grids. The example demonstrates the use of hierarchical data when working with loops and grids. For more information, see the topic Hierarchical Data on p. 1391.

MaxRecordsInBatch property. This new CATI parameter defines the maximum number of records
to pass to the sample management script. The maximum value defaults to 25 when the property is
not defined. For more information, see the topic Scripts in the CATI Folder on p. 1054.

Support for Web browser capability detection. A new property collection that
contains the respondent’s browser capabilities has been implemented. Refer to
IInterview.Info.BrowserCapabilities for more information.

Table Scripting

QsfTom component object model. The QsfTom component converts Quanvert saved table specs
(as many as possible) to a set of TOM table specs in an MTD file. For more information, see the
topic QsfTom component object model reference on p. 1560.

Table Scripting Samples

Getting Started
This section introduces some of the features of the IBM® SPSS® Data Collection Base
Professional IDE that make it easy to write scripts. This topic contains step-by-step instructions
that walk you through creating a simple mrScriptBasic file.

Creating Your First mrScriptBasic Script


This topic is designed to introduce you to some of the features in the IBM® SPSS® Data
Collection Base Professional IDE for writing mrScriptBasic code, and is in the form of
step-by-step instructions that walk you through creating a simple mrScriptBasic file. This topic
is not designed to teach you about the mrScriptBasic language itself, and you do not need to
be an expert in mrScriptBasic to follow the steps. For tips on learning mrScriptBasic, see Mastering mrScriptBasic.

E First let’s create a new mrScriptBasic file. From the File menu, choose:
New > File

This opens the New File dialog box.

E From the list of available file types, select mrScriptBasic File, and then click Open.

E In the main Edit pane, type the following:

Dim MDM

Set MDM =

We will now use the CreateObject function to create an instance of an MDM Document object. We will insert the function using the Functions pane, which lists all of the available functions.

E Click the Functions tab to bring the Functions pane to the front, and then expand the Misc folder.

E Right-click the CreateObject function, and from the shortcut menu, choose Add Function.

This inserts the name of the function into the script.

E Now type the opening parenthesis: (

This displays the function’s syntax, which shows us that there is one parameter (the class) and
that the function returns an object.

E Type the parameter and the closing parenthesis: "MDM.Document")



Notice that Base Professional automatically highlights the parentheses so that you can see the
opening and closing pair, and colors the text to help you distinguish the different elements of your
script. For example, different colors are used for keywords, variables, and literals.
E On the next line type MDM. (including the dot).

This activates the ScriptAssist autosuggest feature, which means that a drop-down list appears
showing the object’s properties and methods.

E From the list, double-click the Open method.

This inserts the method into your script.


E Now type the opening parenthesis: (

This displays the method’s parameters. (This is called the autosignature.) The parameter that you
are on is shown in bold — we will use this parameter to specify the .mdd file to be opened.
Optional parameters are shown in square brackets.

E Type the name and location of the Short Drinks sample .mdd file. Because this is a text argument,
we must enclose it in quotation marks:
MDM.Open("[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Mdd\short_drinks.mdd")
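Taken together, the lines entered so far form the following short script:

```mrscriptbasic
' Create an MDM Document object and open the Short Drinks sample file
Dim MDM

Set MDM = CreateObject("MDM.Document")
MDM.Open("[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Mdd\short_drinks.mdd")
```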

E Now, click the Types tab to bring the Types pane to the front.

The Types pane is similar to the Visual Basic object browser. It shows the properties and methods
of all of the default objects. When you use the CreateObject function to create an instance of a
IBM® SPSS® Data Collection Data Model or IBM® SPSS® Data Collection object, the Types

pane also lists the objects in the associated type library. We created an MDM object from the
MDM type library, which is called MDMLib.

E Select MDMLib from the first drop-down list in the top left corner of the pane.

This restricts the list to the objects in the MDM type library.

There are tools on the Types pane toolbar that you can use to search and sort the list of objects
and the list of an object’s members. When you select an object on the left side of the pane, its
members are shown on the right. When you click on a property or method, the lower part of the
pane shows its syntax and other useful information (such as whether it is a read-only or read-write
property) and when you click on a constant, its value is shown. The Types pane has a shortcut
menu, like the Functions pane: right-clicking a member (such as a property or method) opens a
shortcut menu, which enables you to insert the property or method into the code.

To demonstrate working with a common Microsoft object, next we will create a Microsoft
Scripting FileSystemObject and use it to create a text file.

E Enter the following:

Dim FSO, MyTextFile

Set FSO = CreateObject("Scripting.FileSystemObject")


MyTextFile = FSO.

Even though the FileSystemObject is not an SPSS object, this activates the ScriptAssist feature
and opens a drop-down list of the object’s properties and methods. The ScriptAssist feature is
available for all suitable COM object models that are created using the CreateObject function.
Note that when the ScriptAssist feature is available for an object, the object’s type library will also
be included in the Types pane. However, the type library does not show in the Types pane until
you reference the object and type the dot (.) that activates the ScriptAssist feature.

E From the list, select the CreateTextFile method and then type the opening parenthesis: (

Again, this makes Base Professional display details of the method’s parameters.

E Enter a name, and optionally the location, for the new text file, and then use the Dim statement
to define a variable called MyVariableInstance. For example:

MyTextFile = FSO.CreateTextFile("MyVariables.txt")

Dim MyVariableInstance

We will now create a macro that we can use to insert some common code into our script.

E From the Tools menu, choose Macros:

This opens the AutoExpansion Macros dialog box.



E Click on the mrScriptBasic tab.

E In the Macro Name text box, enter a name for our new macro. For example, FE.

E In the Macro Text text box, type the following:

For Each {0} In {1}

Next

The items in braces ({0} and {1}) define parameters. When we subsequently use the macro, Base
Professional will replace these with the first and second arguments that we supply.
E Click the Add Macro toolbar button (this is the leftmost button on the toolbar).

This adds the macro to the list of mrScriptBasic macros.


E Close the AutoExpansion Macros dialog box.

Now we will use the macro in our script.


E Back in the main Edit pane, type the following, where FE refers to the name you gave the macro (make sure you use exactly the same combination of upper and lower case letters that you used when you defined its name):
FE(MyVariableInstance,MDM.Variables)

E Now press Ctrl+M.

Base Professional inserts the macro and replaces the first parameter ({0}) with
“MyVariableInstance” and the second parameter ({1}) with “MDM.Variables”. (This is a
simplistic example to demonstrate using the macro feature. In practice you would normally use
macros to insert more complex code.)

E Use the ScriptAssist feature to insert code into the loop to write the full name of the variable
instance to the text file and to add code after the loop to close the text file.

For Each MyVariableInstance In MDM.Variables


MyTextFile.WriteLine(MyVariableInstance.FullName)
Next

MyTextFile.Close()
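Putting all of the pieces together (the MDM object, the FileSystemObject, and the expanded macro), the finished script should read:

```mrscriptbasic
' Open the Short Drinks sample metadata document
Dim MDM
Set MDM = CreateObject("MDM.Document")
MDM.Open("[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Mdd\short_drinks.mdd")

' Create a text file using the Microsoft Scripting FileSystemObject
Dim FSO, MyTextFile
Set FSO = CreateObject("Scripting.FileSystemObject")
MyTextFile = FSO.CreateTextFile("MyVariables.txt")

' Write the full name of every variable instance to the text file
Dim MyVariableInstance
For Each MyVariableInstance In MDM.Variables
    MyTextFile.WriteLine(MyVariableInstance.FullName)
Next

MyTextFile.Close()
```

If any line in your file differs, correct it before running the script.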

E Now let’s run the code. You do this by pressing Ctrl+F5, selecting the Start Without Debugging tool,
or choosing Start Without Debugging from the Debugging menu.
If you are using the Save Before Execution option, Base Professional will prompt you for a name
for the script file. If necessary, enter a name (for example, MyFirstScript.mrs). Shortly afterwards
you should then see a message telling you that the script has completed successfully. If not,
you may have made an error in the script and you will need to debug it. For more information,
see the topic Debugging on p. 46.

E If your script ran successfully, open the text file in Base Professional.
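For reference, the complete script assembled in this walkthrough looks something like the sketch
below. The object-creation lines and the file paths are assumptions based on a typical setup (they
were covered in earlier steps of the chapter, which are not repeated here); adjust them to match
the names and locations you actually used.

' Create a text file to write the variable names to (path is an example)
Dim fso, MyTextFile
fso = CreateObject("Scripting.FileSystemObject")
MyTextFile = fso.CreateTextFile("MyVariables.txt", True)

' Open a Metadata Document (.mdd) file (path is an example)
Dim MDM
MDM = CreateObject("MDM.Document")
MDM.Open("example.mdd")

' Write the full name of each variable instance to the text file
Dim MyVariableInstance
For Each MyVariableInstance In MDM.Variables
    MyTextFile.WriteLine(MyVariableInstance.FullName)
Next

MyTextFile.Close()
MDM.Close()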

Base Professional

Using IBM SPSS Data Collection Base Professional


This section provides a brief introduction to working in the IBM® SPSS® Data Collection Base
Professional integrated development environment (IDE). The following list summarizes the
topics in this section.
Which File Types Can I Work With?
    Explains which types of files you can work with in the Base Professional IDE.
The IBM SPSS Data Collection Base Professional Window
    Introduces the Base Professional window, and describes how you can customize the window
    layout to suit your needs.
Working with Templates
    Explains how to base a new file on a template and how to create new templates and edit
    existing ones.
Working with Macros
    Explains how to use, create, and edit macros.
Using the Workspace Feature
    Information about the workspace feature, which is particularly useful when you are working
    on a number of linked files.
Using the Metadata Viewer
    Introduces you to the Metadata Viewer, which enables you to browse the objects in a Metadata
    Document (.mdd) file. This is useful when you need to look up names of variables and
    categories for use in a filter or table.
Using the Connection String Builder
    Introduces you to the Connection String Builder, which makes it easy to insert a connection
    string into a script.
Using Context-Sensitive Help
    Describes how to access specific help for any element of your script.
Using ScriptAssist
    Describes the ScriptAssist feature in Base Professional, which can help you to write scripts
    without having to remember the names of an object’s properties and methods.
Finding and Replacing Text
    Describes some of the advanced options for finding and replacing text in your scripts,
    including how to use regular expressions to find patterns of text.
Debugging
    Introduces you to the features of the Base Professional IDE that help you debug mrScriptBasic
    files.
IBM SPSS Data Collection Base Professional in Other Languages
    Describes how to display Base Professional in a language other than English.
Using IBM SPSS Data Collection Base Professional to develop interviews
    Describes the features of Base Professional that can help you to develop, test, and deploy
    interviews.
Working with Advanced Editing Options
    Describes the Base Professional advanced editing options.
IBM SPSS Data Collection Base Professional toolbars
    A handy list of all of the tools on the Base Professional toolbars and their keyboard shortcuts.
IBM SPSS Data Collection Base Professional keyboard shortcuts
    A handy list of the keyboard shortcuts you can use in Base Professional.
IBM SPSS Data Collection Base Professional options
    Explains the options that you can use to customize the Base Professional IDE.

Which File Types Can I Work With?


IBM® SPSS® Data Collection Base Professional has been designed for working with IBM®
SPSS® Data Collection script files of the following types:
„ Data Management Script (.dms) files. These scripts are used for performing data management
tasks, such as cleaning and transferring data, creating derived variables for use during
analysis, and setting up weighting schemes. A data management script has two or more
different sections, which have different coding rules and use different technologies. For
example, the InputDataSource section uses property definition and SQL syntax, whereas
the Metadata section is written in mrScriptMetadata, and the Event sections are written in
mrScriptBasic. A data management script is particularly useful when you want to clean and
transfer data and create derived variables, because it handles the connections to the input and
output data sources, the merging of the metadata, and gives you scriptable access to the case
data (in the OnNextCase Event section). You can run your data management scripts from
the Base Professional IDE or by using the DMS Runner command prompt utility. For more
information, see the topic Data Management Scripting on p. 204.
„ mrScriptBasic (.mrs) files. mrScriptBasic is a programming language that enables scriptable
access to Data Collection components. You would typically use a standalone mrScriptBasic
script to perform tasks that do not involve transforming data, such as creating reports, topline
tables and charts. Sometimes you might develop a mrScriptBasic script for use as an Include
file in one of the Event sections of a data management script. You can run and debug your
mrScriptBasic scripts from the Base Professional IDE or run them by using the mrScript
Command Line Runner (mrScriptCL.exe).
„ Interview Script (.mdd) files. These scripts are used to create interviews that can be activated
in version 3.0 (or later) of IBM® SPSS® Data Collection Interviewer Server. Interview
scripts have a Metadata section, and one or more Routing sections. The Metadata section
is used to define the questions that will be asked during the interview and is written in
mrScriptMetadata. A Routing section is written in mrScriptBasic and defines which of the
questions will be asked during an interview, and in what order they will be asked. You can
have individual routing sections for different interview environments, for example, “Web”
and “Paper”. For more information, see the topic Interview Scripting on p. 503.
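To make the section structure described above concrete, here are two minimal sketches. They are
illustrative skeletons only: the file names, connection properties, question name, and question
text are placeholders, not part of any shipped sample.

A data management script (.dms) with an input section, an Event section, and an output section:

InputDataSource(Input)
    ConnectionString = "Provider=mrOleDB.Provider.2; Data Source=mrDataFileDsc; Location=example.ddf; Initial Catalog=example.mdd"
    SelectQuery = "SELECT * FROM VDATA"
End InputDataSource

Event(OnNextCase, "Clean the case data")
    ' mrScriptBasic code with access to the current case goes here
End Event

OutputDataSource(Output)
    ConnectionString = "Provider=mrOleDB.Provider.2; Data Source=mrDataFileDsc; Location=example_output.ddf"
    MetaDataOutputName = "example_output.mdd"
End OutputDataSource

An interview script (.mdd) with a Metadata section and a single Routing section:

Metadata(en-US, Question, Label)
    LikesMuseums "Do you enjoy visiting museums?" categorical [1..1]
    {
        Yes, No
    };
End Metadata

Routing(Web)
    LikesMuseums.Ask()
End Routing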

In addition to Data Collection scripts, you can also use Base Professional to create and edit
the following types of files:
„ Text (.txt) files
„ HTML files
„ XML files
„ Rich text format (.rtf) files

You can also open other types of text files, such as log files.

If you have IBM SPSS Data Collection Survey Reporter Professional, you can use Base
Professional to create an .mtd file that can be opened in IBM® SPSS® Data Collection Survey
Tabulation. However, you do not create or edit .mtd files directly. Instead you create them using a
script. For more information, see the topic Creating a Simple Table of Age by Gender on p. 1144.
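As a brief illustration of the script-based approach, a mrScriptBasic sketch along the following
lines creates a table and saves it as an .mtd file using the Table Object Model. The file names are
placeholders, and the table specification assumes a study that contains age and gender variables;
see the topic referenced above for a full worked example.

Dim TableDoc
TableDoc = CreateObject("TOM.Document")
' Load the metadata that the tables will be based on (path is a placeholder)
TableDoc.DataSet.Load("example.mdd")
' Define a crosstab of age by gender, populate it, and save the result
TableDoc.Tables.AddNew("Table1", "age * gender", "Age by Gender")
TableDoc.Populate()
TableDoc.Save("example.mtd")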

The IBM SPSS Data Collection Base Professional Window


The IBM® SPSS® Data Collection Base Professional window has a number of different panes and
toolbars. You can use standard Windows techniques to rearrange the various panes and toolbars.
For example, you can use the title bar at the top of each pane to drag them into different positions.
When a pane is “floating”, you can double-click its title bar to return it to the position in which it
was last docked. For more information, see the topic Changing the Layout of the Window on p. 16.

The picture below briefly describes the layout of the Base Professional window.

If you open an interview script (.mdd) file, the layout includes some additional features which
are described in the picture below.

Figure 1-1

The individual panes are described in more detail below.
Edit
    This is the main part of the desktop, where you edit your files. If you open an interview script
    (.mdd) file, the Edit pane is separated into a Metadata section and one or more Routing
    sections. By default, the individual sections can be selected by clicking on the tabs at the
    bottom of the Edit pane. However, you can change the way that the metadata and routing
    sections are displayed—see Viewing and navigating an interview script for more information.
    By default, Base Professional also opens a metadata viewer automatically whenever you
    open an interview script.
Breakpoints
    Lists the breakpoints that have been set in a script. When you are debugging, you use
    breakpoints to indicate points in your code at which you want to suspend the running of the
    script—for example, so that you can then step through the following lines.
Output
    Displays status and other information when you run a data management script (.dms file).
Expressions
    Use this pane to evaluate an expression or inspect the value of an object property when you
    are debugging a script. You can also use this pane to change the value of a variable. For more
    information, see the topic Examining the Values of Variables on p. 49.
Find
    Use to search for text.
Replace
    Use to search for text and replace it with different text.
Browser
    Use this pane to interact with the interview when you run an interview script (.mdd) file.
Workspace
    Lists the files in the current workspace. The workspace feature is particularly useful when
    you are working on a number of linked files—for example, a data management script (.dms
    file) that has a number of associated Include files. You can add new and existing files to the
    workspace by right-clicking in the Workspace pane and choosing Add New Item to Workspace
    and Add Existing Item to Workspace, respectively. For more information, see the topic Using
    the Workspace Feature on p. 27.
Metadata
    Use to browse the questions, variables, and categories in a metadata document (.mdd) file.
    This is useful when you need to refer to individual variables and categories in your script. For
    more information, see the topic Using the Metadata Viewer on p. 28.
Types
    Similar to the Visual Basic Object Browser, this shows the properties and methods of all of
    the default objects and details of the associated enumerated constants. Similar information is
    shown whenever possible for objects created in the script. When you click on an object, its
    properties and methods are listed. To insert a property or method into your code, right-click it,
    and from the shortcut menu, choose Add member.
Functions
    Lists all of the functions in the IBM SPSS Data Collection Function Library, showing brief
    details of their parameters. To insert a function into your code, right-click it, and from the
    shortcut menu, choose Add Function. For detailed information about the functions, see the
    Function Library documentation.
Locals
    Shows the values of the variables in your script when you are debugging it. For more
    information, see the topic Examining the Values of Variables on p. 49.
Auto Answer
    Shows a list of the questions that have been answered when you run an interview script
    (.mdd) file in Auto Answer mode.
Repository
    Shows all currently defined Question Repositories. You can browse topics and survey assets,
    and add asset metadata and routing information to the working .mdd file.
Help
    Shows context-sensitive help topics. To access context-sensitive help, select an element of
    your script and press F1. For more information, see the topic Using Context-Sensitive Help
    on p. 42.

Changing the Layout of the Window

The IBM® SPSS® Data Collection Base Professional window has a number of different panes.
The panes can be docked at any of the four edges of the main window or they can be floating
above the main window.

You can use standard Windows techniques to rearrange the various panes. For example:
„ You can use the title bar at the top of a pane or a group of panes to drag it into a different
position.
„ When a pane or group of panes is floating, you can double-click the title bar to return it
to the position in which it was last docked.
„ When a pane or group of panes is docked, you can double-click the title bar to return it to the
position in which it was last floating.
„ If you want to dock a floating pane or group of panes, click the title bar and drag it to the edge
of the window where you want it to be docked. When you move the mouse pointer over
a docking zone, Base Professional displays a gray outline that indicates the new docked
position. Note that it is the mouse pointer that is the trigger for this behavior and not the
edge of the pane you are dragging.
„ To move one pane out of a group of panes, click the pane’s tab and drag it into the desired
position.
„ To move a floating pane into a group of panes, drag the floating pane’s title bar to the title
bar of the group.
„ You can dock a pane below another pane or a group of panes. This is just like docking at an
edge of the main window, except that you drag the pane you want to dock to the bottom of the
other pane. When you move the mouse pointer over the docking zone at the bottom of the
other pane, Base Professional displays the gray outline indicating the new docked position.

Restoring the Default Layout

You can restore the layout that existed when Base Professional was first installed on your
computer by deleting the three .bin files from the C:\Documents and Settings\<your Windows
user name>\Application Data\IBM\SPSS\DataCollection\6\Base Professional folder. Do not
delete any other files.

Creating Layouts for Different Tasks

When you run or debug a script, or run an interview script (.mdd) file using Auto Answer, IBM®
SPSS® Data Collection Base Professional automatically restores the window layout you were
using when you last performed the same task. When the script has finished executing, Base
Professional restores your original window layout. You can use this feature of Base Professional
to create individual window layouts for these different tasks.

For example, if you press Ctrl+F5 to run a data management script (.dms file), you might then
want to click on the Output pane tab and increase the size of the Output pane so that you could
follow the progress of the script more easily. When the script finishes running, Base Professional
will automatically decrease the size of the Output pane to the size it was before you ran the script
(and if the Output Pane was not visible to start with, it will be moved back behind another pane). If
you now run the data management script again by pressing Ctrl+F5, you will find that the Output
pane automatically appears and increases to the size that you set it to when you first ran the script.

In a similar way, if you are working on an interview script you might want to reveal the Browser
pane and increase its size while you are debugging the script. When you are editing the script, you
might want to increase the size of the Edit pane and have the Find pane showing. The benefit
of this feature is that you only need to make those layout changes once, and not every time you
start and stop debugging.

Although this feature of Base Professional can be a little confusing at first, you might find it
useful once you have become more familiar with how it works. The important thing to remember
is to press Ctrl+F5, F5 or F6 first, and only then arrange the window layout to how you want it
to appear. In addition, wait for a script to finish executing before arranging your window layout
for editing tasks.

Viewing Large or Multiple Files

By default, the Edit pane displays a single view of one file, referred to as the current file. If you
wish, you can arrange the Edit pane so that you can view two different parts of the current file
simultaneously, or view multiple files simultaneously. You can also “bookmark” lines in your
file so that you can return to them easily without having to scroll through the file or use the
find command.

Viewing Different Parts of a File Simultaneously

When you are working on a large file, it is often useful to be able to browse and edit different parts
of the file at the same time, for example, if you want to copy text to another part of the file. You can
do this by choosing Split from the Window menu. This will split the Edit pane into two sections
that provide two separate views of the current file. Each section can then scroll independently.

The following picture shows an example of using this feature.



Figure 1-2

To change the relative sizes of the two sections, drag the split bar to the desired position. To return
to a single Edit pane, choose Remove Split from the Window menu.

Viewing Multiple Files Simultaneously

By default, IBM® SPSS® Data Collection Base Professional arranges multiple files in the Edit
pane as a single tab group. This means that only one file can be viewed at a time, but that you
can switch to any other file by clicking on its tab. If you want to view more than one file at the
same time, you can do so by creating additional tab groups. Each tab group will then appear
in its own section of the Edit pane.

To create an additional tab group, right-click on the tab of the file that you would like to be able to
view alongside the current file. From the shortcut menu, choose either New Horizontal Tab Group
or New Vertical Tab Group, depending on whether you want the files arranged horizontally or
vertically in the Edit pane.

The following picture shows an example of creating a new horizontal tab group.

Figure 1-3

If you have more than two files open, you can move a file between groups by right-clicking on the
file’s tab, and from the shortcut menu, choosing Move to Next Tab Group or Move to Previous Tab
Group. You can also create more tab groups by choosing the relevant option from the shortcut
menu.

To change the relative sizes of tab groups, drag the split bar to the desired position. To return to a
single tab group, use the Move to Next Tab Group or Move to Previous Tab Group choices in the
shortcut menu to move all the files into a single group.

Using Bookmarks

You can use bookmarks to mark one or more lines in your file so that you can quickly find those
lines again. The following keyboard shortcuts create and remove bookmarks.
Ctrl+Shift+B: Insert a bookmark on the line where the cursor is positioned. If the line where the
cursor is positioned already has a bookmark, remove the bookmark.
Ctrl+Shift+N: Move to the next bookmark in the current file.
Ctrl+Shift+P: Move to the previous bookmark in the current file.
Ctrl+Shift+A: Remove all bookmarks from the current file.

You can also create and use bookmarks using the bookmark toolbar buttons. For more information,
see the topic IBM SPSS Data Collection Base Professional toolbars on p. 93.

You can also use Base Professional’s Find feature to insert a bookmark on every line that contains
a text string. To do this, open the Find pane, enter a search string and click Mark All. A bookmark
is inserted on every line that contains the string. Note that if a line already contained a bookmark,
the bookmark will be removed.

Working with Advanced Editing Options

IBM® SPSS® Data Collection Base Professional provides a number of advanced editing features
that are available from the Edit menu:
Edit > Advanced

Increase Line Indent: Increases the indentation for the selected lines.
Decrease Line Indent: Decreases the indentation for the selected lines.
Comment Selection: Effectively comments out script by adding an apostrophe character to the
beginning of each selected line.
Uncomment Selection: Removes the comment apostrophe characters from each selected line.
Tabify Selection: Converts selected spaces into tabs. The number of spaces per tab is defined
in the Tab Indent option in the IBM SPSS Data Collection Base Professional options dialog.
Untabify Selection: Converts selected tabs into spaces. The number of spaces per tab is defined
in the Tab Indent option in the IBM SPSS Data Collection Base Professional options dialog.

Working with Templates


IBM® SPSS® Data Collection Base Professional comes with a number of templates that you can
base new scripts on. The templates are organized into a number of folders. It is easy to amend the
templates and add new templates and template folders.
E To create a new script based on a template:

1. From the File menu, choose:


New > From Template

This opens the New File From Template dialog box. The left side of the dialog box lists the
available template folders.
2. On the left side of the dialog box, select the required folder. The right side of the dialog box now
displays the templates that are contained in the selected folder.
3. Select the template you want to use.
4. Click Open.
E To create a new template:

1. Create the file on which you want to base your template in the normal way.

2. From the File menu, choose Save As.

3. Browse to the Templates folder in the folder where Base Professional was installed. In
the default installation, this is [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base
Professional\Templates.

4. Browse to the required template folder within the main Templates folder or click the Create
New Folder tool to create a new template folder.

5. In the File name box, type a name for the new template, and then click Save.

E To edit an existing template:

1. From the File menu, choose:


Open > File

2. Browse to the Templates folder in the folder where Base Professional was installed. In
the default installation, this is [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base
Professional\Templates.

3. Browse to the required template folder within the main Templates folder.

4. Select the template you want to edit and then click Open.

5. Edit and save the file in the normal way.

Project Templates

You can create a project template to preserve all of the files and settings for a specific IBM®
SPSS® Data Collection project that can be used again in the future. Project templates can include
survey files (.mdd), quota, sample, and layout templates. A project template can also include the
activation file, which stores all of the project activation settings, configured for deployment
to an interviewing player.

Project templates help you work more efficiently by minimizing repetition while leveraging
projects that have already been optimized and proven. You may currently have a project that is
utilized numerous times (for example, as a tracker), and have taken the time to tune the document
and modify the numerous activation settings that will be applied when the project is deployed to
a server (and/or locally). The project may also have consistent quota totals. A project template
would enable you to perform basic construction and configuration of a project once; the project
template would then be available for redeployment as needed.

When a project template is configured, and selected to be used for a new project in IBM® SPSS®
Data Collection Author or IBM® SPSS® Data Collection Base Professional, the .mdd file that is
stored within the template populates the authoring tool you are working in. Once the Activation
dialog is opened, the template’s activation settings are displayed, and Quota and Participants
information (within the Activation interface) are updated based on the template’s stored quota
and sample files. When activated, all of the files and settings are carried over to the server or
cluster to which the project is deployed.

Project templates can also be updated. A survey may require additional questions or categories,
different call times may be required to satisfy field constraints, and so on. You can save these
changes back to the original template, or create a new template if you have typical variations.

Creating and Saving Project Templates

IBM® SPSS® Data Collection Base Professional comes with a number of templates that you can
base new questionnaires on. The templates (stored as .ptz files) are organized into a number of
folders. It is easy to amend the templates and add new templates and template folders.

Creating a new questionnaire from a template

E From the menu, choose:


File > New > Project...

or press Alt+F, N, P.

This opens the Select Project Template dialog box. The left side of the dialog box lists the
available template folders.

E On the left side of the dialog box, select the required folder. The right side of the dialog box now
displays the templates that are contained in the selected folder.

E Select the template you want to use.

E Click Select.

Saving a questionnaire as a project template

E From the menu, choose:


File > Save As

or press Alt+F, A.

E Use the Save As dialog box to browse to the folder where you want to save the questionnaire as a
template.

Important: When the source .mdd file specifies a TemplateLocation, all files that exist in the
specified template location will be included in the template .ptz file. For example, if the source
.mdd file specifies your desktop as the TemplateLocation, all files on your desktop will be
packaged into the resulting project template .ptz file. As such, you must ensure that you select
an appropriate TemplateLocation.

E Select the file extension .ptz.

E Enter a name for the file and choose Save.

Note: Remember that depending on the folder location to which you saved the template, you may
need to copy or move the saved file to another location to be able to select it in Base Professional.

Editing an existing project template

E From the menu, choose:


File > Open > File...

or press Alt+O, F.

E In the Open dialog box select the file extension .ptz and browse to the folder where your templates
are held.

E Select the template you want to edit and then click Open.

E Edit and save the file in the normal way.

Configure Project Templates dialog

Use the Configure Project Templates... dialog box to set defaults for the use of previously created
templates in IBM® SPSS® Data Collection Base Professional.

Fields on the Configure Project Templates dialog

Use local location: When your templates are stored locally, select this option and enter the details
of the template locations.

Project template folder: This is the location where all your project templates will be stored.
By default, the templates will be stored in C:\Documents and Settings\All Users\Application
Data\IBM\SPSS\DataCollection\6\Project Templates. If required, you can change the location to
point to your own files. To point to a different location for the templates, click Browse and then
use the Browse for Folder dialog box to select the required folder.

Important: When the source .mdd file specifies a TemplateLocation, all files that exist in the
specified template location will be included in the template .ptz file. For example, if the source
.mdd file specifies your desktop as the TemplateLocation, all files on your desktop will be
packaged into the resulting project template .ptz file. As such, you must ensure that you select
an appropriate TemplateLocation.

Default project template: For more information, see the topic Select Project Template dialog on
p. 25.

Use Question Repository: If your project templates are stored in IBM® SPSS® Data Collection
Question Repository, select this option and enter the details of the template locations.

Repository: Select the repository in which the templates are stored and connect to it.

Project template root topic: Select the root topic from the repository to identify the project
templates’ location. To point to a different root topic, click Browse and then use the Browse for
Folder dialog box to select the required topic.

Default project template: For more information, see the topic Select Project Template dialog on
p. 25.

Working with unzipped project templates

To simplify the testing of project templates during development, unzipped templates will also be
supported. To deploy an unzipped template, you can simply copy or save the project files to the
correct project template location (local or server). In order to correctly recognize an unzipped
template, and to correctly distinguish it from a template folder and non-template files, the template
files must be correctly structured. More specifically, the following file structures for unzipped
templates will be supported:
MyProject
    [MyProject.mdd]
    Layout.htm
    MyProject.mqd
    MiscFiles
        MyProjectSample.csv
        MySMScript.mrs
        OtherFileCopiedByUser.xyz

The unzipped project files must reside in a folder that matches the project name. This ensures that
the project folder can be distinguished from a normal template folder. Any folder that contains
files, but does not contain template archives (.zip files) or Project_files, will be considered an
unzipped template. In this case, all files in the project folder are considered part of the template.

Select Project Template dialog

The Select Project Template dialog enables you to select the template to use when creating
a new project.

Template location: Displays the locations from which you can select templates. For example, this
may be on your machine, or from a server, but not both at the same time. Select the required
location to display the available templates in the Templates area.

Templates: The templates that you can use from the selected location are displayed. Click on the
one you want and click Select to open the new project.

Working with Macros


IBM® SPSS® Data Collection Base Professional comes with a number of macros to help you
build your scripts. There are three types of macros: mrScriptBasic, Data Management Script,
and mrScriptMetadata. It is easy to add new macros and edit existing macros. For step-by-step
instructions on creating a macro and inserting it into your script, see Creating Your First
mrScriptBasic Script.

E To view, edit and add macros



1. Open the AutoExpansion Macros dialog box by choosing Macros from the Tools menu.

2. Then select the type of macro that you want to work with by clicking on one of the three tabs
called mrScriptBasic, mrDataManagementScript or mrMetadataScript.

The following list describes the tools on the toolbar in the AutoExpansion Macros dialog box.
Add Macro: Adds the macro to the list.
Delete Macro: Deletes the selected macro.
Edit Macro: Edits the selected macro. This copies the macro’s code into the code text box, so
that you can edit it.
Update Macro: Updates the selected macro with the code defined in the code text box.

E To insert a macro into your script:

1. In the main Edit window, place the cursor at the position in which you want to insert the macro.

2. Type the name of the macro followed by Ctrl+M. If the macro has parameters, enter the arguments
after the macro name, enclosed in parentheses and separated by commas.

For example, to insert the INP data management macro without specifying any arguments, type
INP, then without leaving a space, press Ctrl+M. To insert the FN1 mrScriptBasic macro using i
and j as the arguments, type FN1(i,j), then without leaving a space press Ctrl+M.

Note that macro names are case-sensitive and you must not insert a space character between
the arguments.
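For instance, suppose you had defined a hypothetical macro named FN1 with the following
macro text:

Function {0}({1})

End Function

Typing FN1(i,j) and then pressing Ctrl+M would expand it to:

Function i(j)

End Function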

E To create a new macro:

1. Open the AutoExpansion Macros dialog box, by choosing Macros from the Tools menu.

2. Click on the relevant tab for the type of your macro, either mrScriptBasic, mrDataManagementScript
or mrMetadataScript.
3. In the Macro Name text box, enter a name for the new macro.
4. In the Macro Text text box, type the code that you want the macro to insert. You can define
parameters by inserting numbers in braces. The parameters must be numbered in ascending order,
starting with 0 (for example, {0} and {1}). When you subsequently use the macro, you can
supply arguments to replace the parameters.
5. Click the Add Macro toolbar button.
E To edit an existing macro:

1. Open the AutoExpansion Macros dialog box, by choosing Macros from the Tools menu.
2. Click on the relevant tab for the type of macro, either mrScriptBasic, mrDataManagementScript
or mrMetadataScript.
3. Select the macro you want to edit in the list of macros.
4. Click the Edit Macro toolbar button. This displays the macro’s name and code in the text boxes at
the top of the dialog box.
5. Make the required changes.
6. Click the Update Macro toolbar button.

Using the Workspace Feature


A workspace is a container for the scripts and files that you are developing. Using a workspace
is particularly useful when you are working on a number of linked files—for example, a data
management script (.dms file) that has a number of associated Include files. However, using this
feature is optional. Although IBM® SPSS® Data Collection Base Professional automatically
creates an empty workspace when you open Base Professional, you can simply ignore this if
you don’t want to use the workspace feature.

When using this feature, you can begin by creating a workspace and adding files to it or by
creating files and then adding them to a workspace. The Workspace pane displays the files in
the workspace and makes it easy to switch between the files (you simply double-click a file to
open it in the Edit pane). You can add new and existing files to the workspace by right-clicking
the workspace folder in the Workspace pane and choosing Add New Item to Workspace and Add
Existing Item to Workspace, respectively. You can also add the current file to the workspace by
choosing Add Current Item to Workspace from the File menu, and you can remove a file from the
workspace by right-clicking it in the Workspace pane, and choosing Exclude from workspace.

When you save a workspace, it is saved as a file with an .sws filename extension. You can
subsequently open the workspace by double-clicking this file in Windows Explorer. Whenever
possible, the workspace stores the locations of the files relatively (provided they have a common
root folder). This makes it easy to share files with colleagues. For example, if you zip up a folder
containing a workspace and its associated files and send them to a colleague, the file locations
specified in the workspace file should be valid, even if he or she unpacks the files into a different
folder.

To Create and Use a Workspace


To illustrate how the workspace feature works, we will make a workspace for the
IncludeExample.dms sample. This sample uses two include files—Include1.dms and Include2.dms.

1. Start a new workspace. You do this by choosing New > Workspace from the File menu or pressing
Ctrl+W.

2. Display the Workspace pane by clicking on the Workspace tab or pressing Alt+0 twice.

3. Now add the IncludeExample.dms file to the workspace. You can do this by right-clicking in the
Workspace pane and from the shortcut menu, choosing Add Existing Item to Workspace or by
choosing Add Existing Item to Workspace from the main File menu. Then in the Open dialog box,
browse to the location where the data management script samples are installed (typically this is
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS) and
select the IncludeExample.dms sample file.

4. Now we will use different techniques to add the include files to the workspace.
First choose Open from the main File menu and in the Open dialog box, browse
to the location where the data management script sample include files are installed
(typically this is [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Data
Management\DMS\Include), select the Include1.dms and Include2.dms samples, and click OK.
This opens the two files in Base Professional.

5. View the Include1.dms sample in the Edit window, by clicking the Include1.dms tab to bring it to
the front.

6. From the main File menu, choose Add Current Item to Workspace.

7. Finally, save the workspace. You can do this by choosing Save Workspace from the main File
menu, or right-clicking in the Workspace pane and choosing Save. In the Save Workspace
As dialog box, give the workspace a name, such as IncludeExample, and then click OK. The
workspace file will be saved with a .sws extension, for example, IncludeExample.sws.

For more information about using Include files with data management scripts, see Using Include
Files in the DMS file.
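
Such a script references its Include files through the #include preprocessor directive. A minimal
sketch follows; the file names match the samples above, but the relative paths are an assumption
about where you keep the files:

```
' Top of a data management script (.dms) file
#include "Include\Include1.dms"
#include "Include\Include2.dms"
```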

Using the Metadata Viewer


When you are working in IBM® SPSS® Data Collection Base Professional, you can use the
Metadata pane to view the objects in a Metadata Document (.mdd) file. This is useful when you
need to look up names of variables and categories for use in a filter or table.

If you have installed Base Professional with the Interview option, each interview script will
open with its own individual metadata viewer attached to the side of the script. See Using the
interview metadata viewer for more information, which also contains advice on using the viewer
to help you write interview scripts.

You display the Metadata pane by choosing Metadata from the View menu or pressing Alt+7
twice. You can open any Metadata Document (.mdd) file. You do this by selecting the Open
Metadata Document toolbar button and then selecting the file in the Open Metadata dialog box.
If the .mdd file contains multiple versions, the latest version is opened initially. However, you
can change the version by right-clicking in the Metadata pane, choosing Change Version from the
shortcut menu, and then selecting the required version from the drop-down list. Note that the file
is always opened in read-only mode and so any changes you make will not be saved.

When you are using the Metadata pane, the Properties pane displays the properties of the object
selected in the Metadata pane. For example, when you select a variable or element in the Metadata
pane, you can view its Label property in the Properties pane. Initially the labels are displayed in
the default language, but you can change the language by right-clicking in the Metadata pane,
choosing Change Language from the shortcut menu, and then selecting the required language
from the drop-down list.

The toolbar in the Metadata pane provides the following buttons:
„ Create a new Metadata Document (.mdd) file.
„ Open a Metadata Document (.mdd) file.
„ Save the metadata to a Metadata Document (.mdd) file.
„ Show or hide the Properties pane.
„ View. Allows you to show or hide the various types of objects that the metadata viewer
can display.

E To view the full names of the variables you can use in a filter in a DMS file:

1. If you are using an MDSC to read the input metadata, you will need to create an .mdd file before
you can open it in the Metadata pane. You can do this in MDM Explorer. For more information,
see the topic Creating an .mdd File From Proprietary Metadata on p. 314.

2. Open the .mdd file in the Metadata pane.

3. Double-click the Document’s Variables folder. This lists all the variables you can use in a filter in
a DMS file, showing for each one its full name and an icon that indicates its type. The full names
are the names you need to use in the select query in your DMS file.

When you select a variable in the Metadata pane, its properties are displayed in the Properties
pane. If you scroll through the properties, you will see the variable’s FullName property. It is easy
to copy the full name into your script. You do this by selecting the full name text, right-clicking
and choosing Copy from the shortcut menu. If you then move the cursor back into the main Edit
window, you can paste the full name into the script.

E To view category full names for use in a filter in a DMS file:

1. Select the required variable from the list in the Document’s Variables folder.

2. Double-click the variables’s Categories folder. This lists all the categories for that variable. You
can copy a category’s full name into your script, in the same way that you can copy a variable’s
full name.
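
Putting the two together, a copied variable name and category name can be used directly in the
filter of a DMS select query. The names below (gender and {female}) are hypothetical examples
rather than names from a specific sample:

```
' Fragment of an InputDataSource section in a DMS file
SelectQuery = "SELECT * FROM VDATA WHERE gender = {female}"
```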

For information about using the Metadata Viewer to look up variable names for use when
scripting tables, see:
„ Using the Flat (VDATA) View
„ Using the Hierarchical (HDATA) View

Using the Connection String Builder

The Connection String Builder makes it easy to insert a connection string into a script. You
typically use the Connection String Builder to specify the ConnectionString parameter in the
InputDataSource and OutputDataSource sections of a DMS file. The ConnectionString parameter
specifies the name, type, and location of the input and output data. If you are new to data
management scripting, see 3. Transferring Different Types of Data in the Getting Started section
for step-by-step instructions on using the Connection String Builder.
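
For illustration, an InputDataSource section of the kind the builder generates might look like the
following sketch. The provider and DSC names are the standard Data Collection ones; the file
locations are hypothetical:

```
InputDataSource(Input, "My input data source")
    ConnectionString = "Provider=mrOleDB.Provider.2; _
        Data Source=mrDataFileDsc; _
        Location=C:\Data\Example.ddf; _
        Initial Catalog=C:\Data\Example.mdd"
    SelectQuery = "SELECT * FROM VDATA"
End InputDataSource
```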

The Connection String Builder uses the Data Link Properties dialog box to build the connection
string. This dialog box has four tabs:

1. Provider. This is automatically set up to use the IBM SPSS Data Collection OLE DB Provider,
but you can select a different provider when necessary.

2. Connection. This is where you define the name, type, and location of the data.

3. Advanced. This is where you define additional connection properties, such as the validation
options.

4. All. This lists all of the connection properties that you can set in a connection string.

E To use the Connection String Builder:

1. In the main Edit window, place the cursor at the position in which you want to insert the
connection string.

2. From the Tools menu, choose Connection String Builder.

3. Use the features of the Data Link Properties dialog box to specify the details of the data to which
you want to connect.

4. Click OK.

Data Link Properties dialog box

The Data Link Properties dialog box has the following tabs:
„ Provider - provides options for selecting the appropriate OLE DB provider for the type of
data you want to access.
„ Connection - provides access to the Metadata Properties and Metadata Versions dialog boxes.
„ Advanced - provides options for defining additional connection options.
„ All - provides options for editing the initialization properties that are available for the chosen
Provider.

Data Link Properties: Provider

You use the Provider tab in the Data Link Properties dialog box to select the provider you want
to use.

Selecting the appropriate OLE DB Provider


„ To connect to the IBM® SPSS® Data Collection Data Model, select IBM® SPSS® Data
Collection DM-n OLE DB Provider (where n is the version number) from the list of OLE DB
Providers.
„ To connect to another provider, select the appropriate entry from the list of OLE DB Providers
(for example to select a Microsoft SQL Server provider, select Microsoft OLE DB Provider for
SQL Server from the list).

Refer to the appropriate database provider documentation for information on configuring
connection strings.

Data Link Properties: Connection

You use the Connection tab in the Data Link Properties dialog box to define the name, location,
and type of the data to which you want to connect. When you select IBM® SPSS® Data
Collection DM-n OLE DB Provider (where n is the version number) on the Provider tab, a Data
Collection-specific Connection tab is displayed.

Metadata Type. Defines the type of metadata. The drop-down list shows the types of metadata for
which you have a metadata source component (MDSC). The default options are:
„ None. Choose this option if you want to connect to case data only.
„ Data Collection Metadata Document. Selects metadata that is in the standard IBM® SPSS®
Data Collection Data Model format, which is a questionnaire definition (.mdd) file.
„ ADO Database. Selects metadata that is in an ActiveX Data Objects (ADO) data source.
„ Data Collection Log File. Selects metadata in a standard Data Collection log file.
„ Data Collection Participation Database. Selects metadata that is in an IBM® SPSS® Data
Collection Interviewer Server Administration project’s Sample and HistoryTable tables.
„ Data Collection Scripting File. Selects metadata that is in a mrScriptMetadata file.
„ In2data Database. Selects metadata that is in an In2data database (.i2d) file.
„ Quancept Definitions File (QDI). Selects metadata in an IBM® SPSS® Quancept™ .qdi file
using the QDI/DRS DSC.
„ Quancept Script File. Writes the metadata in an MDM document to a Quancept script (.qqc) file.
„ Quantum Specification. Writes the metadata in an MDM document to an IBM® SPSS®
Quantum™ specification.
„ Quanvert Database. Selects metadata that is in an IBM® SPSS® Quanvert™ database.
„ Routing Script File. Writes the routing section of an MDM document to a script that defines the
routing required for interviewing.
„ SPSS Statistics File (SAV). Selects metadata that is in an IBM® SPSS® Statistics .sav file.
„ Surveycraft File. Selects metadata that is in an IBM® SPSS® Surveycraft™ Validated
Questionnaire (.vq) file.
Metadata Location. The name and location of the metadata. The way you specify this depends on
the type of metadata that you selected in the previous drop-down list:
„ Data Collection Metadata Document. The name and location of the .mdd file.
„ ADO Database. The name and location of a .adoinfo file, which is an XML file that specifies the
connection string for the target data source and the name of the target table in that data source.
„ Data Collection Log File. The name and location of the log file. Typically log files have a
.tmp filename extension. However, some log files may have another filename extension. If
necessary, you can rename the file so that it has a .tmp filename extension.
„ Data Collection Participation Database. The name and location of a Participants Report
Document (.prd) file, which is an XML file that specifies the connection string and the names
of the table and columns to be used.
„ Data Collection Scripting File. The name and location of the mrScriptMetadata file. Typically
these files have an .mdd or .dms filename extension.
„ In2data Database. The name and location of the .i2d file.
„ Quancept Definitions File (QDI). The name and location of the .qdi file.
„ Quancept Script File. The name and location of the .qqc file.
„ Quantum Specification. The location of the Quantum specification files.
„ Quanvert Database. The name and location of the qvinfo or .pkd file.
„ Routing Script File. The name and location of the routing script file.
„ SPSS Statistics File (SAV). The name and location of the .sav file.
„ Surveycraft File. The name and location of the .vq file.

Click Browse to select the file in the Open dialog box.


Open Metadata Read/Write. By default, the metadata is opened in read-only mode. Select this
option if you want to be able to write to it. When you open some types of data (for example, a
Quanvert database) the metadata is always opened in read-only mode.
Properties. Click this button to open the Metadata Properties dialog
box, in which you can specify the versions, language, context, and label type to use. For more
information, see the topic Data Link Properties: Metadata Properties on p. 73.
Case Data Type. Defines the type of case data. The drop-down list shows all of the types of case
data for which you have a case data source component (CDSC). The default options are:
„ ADO Database. Reads case data from an ActiveX Data Objects (ADO) data source.
„ Delimited Text File (Excel). Writes case data in tab-delimited format to a .csv file.
„ Data Collection Database (MS SQL Server). Reads and writes case data in a Data Collection
relational database in SQL Server. This option can be used to read data collected using IBM®
SPSS® Data Collection Interviewer Server.
„ Data Collection Log File. Selects the Log DSC, which enables you to read Data Collection
log files.
„ Data Collection XML Data File. Reads and writes case data in an XML file. Typically, you use
this option when you want to transfer case data to another location.
„ In2data Database. Reads case data from an In2data Database (.i2d) file.
„ Quancept Data File (DRS). Reads case data in a Quancept .drs, .drz, or .dru file using the
QDI/DRS DSC.
„ Quantum Data File (DAT). Selects the Quantum DSC, which reads and writes case data in a
Quantum-format ASCII file.
„ Quanvert Database. Selects the Quanvert DSC, which reads data in a Quanvert database.
„ SPSS Statistics File (SAV). Reads and writes case data in an SPSS Statistics .sav file.
„ Surveycraft File. Reads case data from a Surveycraft data file.

Tip: If you have specified a Metadata Type and a Metadata Location, and the default data source
in your metadata refers to the case data that you want to connect to, you don’t need to specify
a Case Data Type or a Case Data Location.
Case Data Location. The name and location of the case data. The way you specify this depends on
the type of case data that you selected in the previous drop-down list:
„ ADO Database. The OLE DB connection string for the ADO data source. To build this string,
click Browse, which opens a second Data Link Properties dialog box in which you can choose
the options for your data source. For example, to connect to a Microsoft Access database or a
Microsoft Excel file, select Microsoft OLE DB Provider for ODBC Drivers in the Provider tab and
click the Build button in the Connection tab to build a connection string that uses the Machine
Data Source called “MS Access Database” or “Excel Files” as appropriate. If your data source
is a Microsoft SQL Server database that is not a Data Collection relational database, select
Microsoft OLE DB Provider for SQL Server in the Provider tab and enter the server name and
database name in the Connection tab. Then click OK to close the second Data Link Properties
dialog box and return to the Connection tab of the first Data Link Properties dialog box.
„ Delimited Text File (Excel). The name and location of the .csv file.
„ Data Collection Database (MS SQL Server). This must be an OLE DB connection string.
„ Data Collection Log File. The name and location of the log file. Typically log files have a
.tmp filename extension. However, some log files may have another filename extension. If
necessary, you can rename the file so that it has a .tmp filename extension.
„ Data Collection XML Data File. The name and location of the .xml file.
„ In2data Database. The name and location of the .i2d file.
„ Quancept Data File (DRS). The name and location of the .drs, .drz, or .dru file.
„ Quantum Data File (DAT). The name and location of the .dat file. If a .dau file is created, it will
have the same name, but with the file name extension of .dau.
„ Quanvert Database. The name and location of the qvinfo or .pkd file.
„ SPSS Statistics File (SAV). The name and location of the .sav file.
„ Surveycraft File. The name and location of the Surveycraft Validated Questionnaire (.vq) file.
The Surveycraft .qdt file, which contains the actual case data, must be in the same folder
as the .vq file.
Click Browse if you want to browse to the location of the case data in a dialog box.

Case Data Project. This text box should be blank, unless you are connecting to one of the
following case data types:
„ ADO Database. If you are connecting to a Microsoft SQL Server database (that is not a Data
Collection relational database) or a Microsoft Access database, enter the name of the database
table that you want to use. If you are connecting to a Microsoft Excel file, enter the name of
the worksheet that you want to use, for example, Sheet1. Depending on the version of Excel
installed, you may have to add a dollar sign ($) after the worksheet name for the connection to
be successful, for example, Sheet1$.
„ Data Collection Database (MS SQL Server). Enter the name of the project that you want to use.
Test Connection. Click this button to test the connection and verify whether you have entered all
information correctly.

Data Link Properties: Metadata Properties

You use the Metadata Properties dialog box to define the version, language, context, and label type
that you want to use when you connect to a questionnaire definition (.mdd) file (also known as
an IBM® SPSS® Data Collection Metadata Document file). You open this dialog box by clicking
the Properties button in the Metadata section on the Connection tab in the Data Link Properties
dialog box.

Version. Select the version or versions that you want to use. Questionnaire definition (.mdd) files
typically contain versions, which record any changes to the content of the questionnaire. Typically, when
the questionnaire changes (for example, a question or category is added or deleted) a new version
is created and when the changes are complete, the version is locked. The drop-down list box
displays all of the available versions plus three additional options:
„ All versions. Select this option if you want to use a combination (superset) of all of the
available versions. (This is sometimes called a superversion). When there is a conflict
between the versions, the most recent versions generally take precedence over the older
versions. For example, if a category label differs in any of the versions, the text in the latest
version will be used. However the order of questions and categories is always taken from the
most recent version and there is special handling of changes to loop definition ranges and the
minimum and maximum values of variables, similar to that described for the IBM® SPSS®
Data Collection Metadata Model Version Utility. Use the Multiple Versions option if
you want to change the order of precedence.
„ Multiple versions. Select this option if you want to use a combination (superset) of two or
more specific versions. For more information, see the topic Data Link Properties: Metadata
Versions on p. 74.
„ Latest version. Select this option if you want to use the most recent version.

Using a combination of some or all of the versions is useful when, for example, you want to export
case data for more than one version and there have been changes to the variable and category
definitions that mean that case data collected with one version is not valid in another version.
Selecting all of the versions for which you want to export the case data means that generally
you can export the case data collected with the different versions at the same time without
encountering validity errors due to the differences between the versions. However, depending on
the version changes, some validity errors may still be encountered.

Language. Select the language you want to use. You can change the language only if
there is more than one language defined.

Context. Select the user context you want to use. The user context controls which texts
are displayed. For example, select Question to display question texts, or Analysis to display shorter
texts suitable for displaying when analyzing the data.

Label Type. Select the label type you want to use. You should generally select the
Label option.

Data Link Properties: Metadata Versions

You use the Metadata Versions dialog box when you want to select two or more versions of the
questionnaire definition (.mdd) file. You open this dialog box by selecting Multiple Versions in the
Version drop-down list box in the Metadata Properties dialog box.

Versions. The Metadata Versions dialog box lists all of the versions that are available. Click Select
All to select all of the versions. Click Clear All to deselect all of the versions and then select the
versions you want individually. For each version, the following information is shown:
„ Version. The version name. Version names are made up of a combination of the major
version and minor version numbers in the form Major#:Minor#, where Major# is the number
of the major version and Minor# is the number of the minor version. Changes in the major
version number indicate that the structure of the case data has changed (for example, variables
or categories have been added or deleted) whereas changes in the minor version number
indicate that the changes affect the metadata only (for example, a question text has been
changed). Version names are created automatically when a version is locked. A version that
has not been locked is always called LATEST.
„ Created by. The ID of the user who created the version.
„ Created Date. This shows the date and time at which the version was locked.
„ Description. When present, this is a text that gives information about the version.

The order in which you select the versions controls the order of precedence that will generally be
used when there is a conflict between the versions. For example, if a category label differs in the
versions you select, the text in the version with the higher precedence will be used. However the
order of questions and categories is always taken from the most recent version and there is special
handling of changes to loop definition ranges and the minimum and maximum values of variables,
similar to that described for the IBM® SPSS® Data Collection Metadata Model Version Utility.

If you want the most recent version to take precedence, start selecting the versions at the top
and work down the list. If you want the oldest version to take precedence, start at the bottom
and work up the list.

Tip: You can select individual or multiple versions by pressing Ctrl or Shift while you click,
provided the mouse is in the Description or Created Date column. You can then click in the
Version column to select or deselect the check boxes for all of the versions that you have selected.
You may find this useful when you are working in a file that has many versions.

Selected Versions. Displays an expression that represents the selection you have chosen. You can
optionally select the versions you want to use by typing an expression directly into this text box.
The order of precedence is taken from the order in which versions are specified, with the rightmost
versions taking precedence over the leftmost.
Syntax Description
.. Specifies all versions
v1, v2, v3, v4 Specifies individual versions
v1..v2 Specifies an inclusive range of versions
^v1..v2 Excludes a range of versions
Specifies the most recent version.

You can specify a combination of individual versions, and ranges to include or exclude. For
example, the following specifies version 3:2 and all versions from 4:5 to 7:3 with the exception
of versions 7 through 7:2:

3:2, 4:5..7:3, ^7..7:2
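
In a hand-written connection string, the same expression is supplied through the version
initialization property. The sketch below assumes the property name MR Init MDM Version and
the braces around the expression; check the Connection Properties topic in the DDL for the
authoritative spelling:

```
MR Init MDM Version={3:2, 4:5..7:3, ^7..7:2}
```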

Data Link Properties: Advanced

You use the Advanced tab in the Data Link Properties dialog box to define additional connection
options. When you select IBM® SPSS® Data Collection DM-n OLE DB Provider (where n is the
version number) on the Provider tab, a Data Collection-specific Advanced tab is displayed.

If existing Data Source has a different location. The
Data Model uses the DataSource object to store details about case data that is associated with an
MDM Document (.mdd file). This option specifies what should happen if there is no DataSource
object in the MDM Document with the same case data type whose location matches the case
data location specified on the Connection tab:
„ Use the Data Source (except for location). This is the default behavior. Select this option if
you want to use the first DataSource object of the same type that is encountered and do not
want to store the new case data location in it.
„ Use the Data Source and store the new location. Select this option if you want to use the first
DataSource object of the same type that is encountered and store the new case data location
in it.
„ Create a new Data Source. Select this option if you want to create a new DataSource object.
This is useful when, for example, you do not want to reuse the variable names that were used
in a previous export to SPSS .sav.
„ Raise an Error. Select this option if you want the connection to fail.

For more information, see the IBM® SPSS® Data Collection Developer Library.

Preserve source column definitions. Select this option if you want the native objects in the
underlying database to be exposed directly as Data Model variables without any interpretation.
For example, if you select this option, a multiple dichotomy set in a .sav file would be represented
as several long or text variables instead of one categorical variable.

Reading categorical data. Specifies whether to display the categories of categorical variables as
numeric values or names.

Writing data. Specifies whether the CDSC deletes the output data, if it exists, before writing
new data. The options are as follows:
„ Append to existing data. This is the default behavior. Select this option if you want to append
to the existing data if it exists.
„ Replace existing data. Select this option if you want to delete the existing data and schema.
This will allow data to be created with a different schema.
„ Replace existing data but preserve schema. Select this option if you want to delete the existing
data, but preserve the existing schema if possible. Note that for some CDSCs, such as SPSS
SAV and Delimited Text, the schema will not be preserved because deleting the data results
in the loss of the schema.

Perform data validation. Select this option if you want case data to be validated before it is written.
Deselect it if you do not want any validity checks to be performed on case data before it is written.

Allow dirty data. Select this option if you have chosen data validation and you want to run in
dirty mode. This means that data is accepted even if it has some inconsistencies. Deselect this
option to run in clean mode, which means that data is rejected if it contains any inconsistencies
(for example, if more than one response has been selected in answer to a single response question).
The validation that is performed varies according to the CDSC that is selected.

User name. If required, enter your User ID.

Password. If required, enter your password.

Data Link Properties: All

You can use the All tab in the Data Link Properties dialog box to edit all of the initialization
properties that are available for the Provider. However, generally you define the values for the
properties on the Connection and Advanced tabs.

The All tab lists all of the initialization properties and you can edit the values by selecting
a property and clicking Edit Value.
For detailed information about the connection properties, see Connection Properties in the IBM®
SPSS® Data Collection Data Model section of the DDL.

Using Context-Sensitive Help

If you have the IBM® SPSS® Data Collection Developer Library (DDL) installed on your
computer, you can access specific help for any element of your IBM® SPSS® Data Collection
Base Professional script. To do this, position the cursor in the element in the Base Professional
Edit pane (or select the element by double clicking on it) and press the F1 key. The most relevant
help topic in the DDL will then appear.

Specific help topics are included for all mrScriptBasic and mrScriptMetadata keywords, data
management script keywords, Data Model functions, preprocessor directives, and IBM® SPSS®
Data Collection object models. If a variable in your script references a Data Collection object,
placing the cursor in the variable name and pressing F1 will open the help topic for the object
type. If Base Professional cannot find a specific topic, for example, because a variable does not
reference an object, the overview topic for the type of script that you are editing will appear
instead. If the displayed help topic does not provide you with the information you need, you can
use the help system’s Contents, Index, and Search features.

To copy text from a help topic to the Windows clipboard, select the text, right click on it, and
choose Copy from the shortcut menu.

Using ScriptAssist
ScriptAssist is an automatic feature of IBM® SPSS® Data Collection Base Professional that can
help you to write scripts more quickly. Whenever you type a dot (.) after the name of a variable,
ScriptAssist displays a list of the members (properties and methods) or global functions that are
valid for that variable. Using the mouse or keyboard, you can select one of the entries in the
suggestion list, and it will be pasted into your script. This feature makes it easy to include objects
in your scripts without having to remember the names of all the valid properties and methods.

When the ScriptAssist suggestion list appears, you can select an item from the list by
double-clicking on the item, or by highlighting the item using the arrow keys and typing a dot (.).

If your variable is a question in the routing section of an interview script, the suggestion list might
show category names or sub questions, depending on the definition of the question.
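For example, in the routing section of an interview script, typing a dot after a question name displays the members that are valid for that question. In the following sketch, the question names satisfaction and followup and the category verySatisfied are assumed, not taken from any particular script:

```mrScriptBasic
' Typing "satisfaction." would display members such as Ask,
' Response, and Categories in the suggestion list.
satisfaction.Ask()
If satisfaction.Response.Value = {verySatisfied} Then
    followup.Ask()
End If
```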

Items in the Suggestion List

The following table lists the different types of items that can appear in a suggestion list. The items
that actually appear will depend on the type of script and the variable name that has just been typed.
Each type of item is identified by its own icon:
„ Property of an object.
„ Default property of an object.
„ Method of an object, or a function.
„ Enumerated type.
„ Question, or built-in variable.
„ Category in a categorical question.

You can specify which items you would like to see in the suggestion list, or even stop the
suggestion list from appearing. For more information, see “ScriptAssist Options” in IBM SPSS
Data Collection Base Professional options.

Activating ScriptAssist Manually

You can use the following keyboard shortcuts to force the ScriptAssist suggestion list to appear.
The contents of the suggestion list will depend on the type of script, and the position of the
cursor when the keyboard shortcut is pressed.
Ctrl+Space. Displays a global list of all IBM® SPSS® Data Collection functions, built-in
variables, and enumerators in Data Collection Type libraries. If pressed in the routing section
of an interview script, the list also includes all questions defined in the metadata section.
Available anywhere in a mrScriptBasic script (.mrs) file, in the routing section of an interview
script (.mdd) file, or in an event section of a data management script (.dms) file.

Ctrl+Q. Displays a list of all the questions defined in the metadata section. If the cursor is
positioned after the name of a question, Ctrl+Q displays a list of all the sub questions for that
question. Available in the routing section of an interview script.

Ctrl+R. Displays a list of all the categories for a question. Available in the routing section of an
interview script, with the cursor positioned after the name of a categorical question.

Finding and Replacing Text

You use IBM® SPSS® Data Collection Base Professional’s Find and Replace panes to find and
replace text in your scripts. You can open these panes from the Base Professional Edit menu or by
pressing Ctrl+F to open the Find pane and Ctrl+H to open the Replace pane.
Several advanced options exist to help you find exactly what you are looking for and these are
described below. You can also use regular expressions to search for patterns of text.

Searching Multiple Scripts

To search for a text string in all your open scripts, open the Find pane, enter a search string, and
select All open documents before clicking Find Next.

Listing all Occurrences of a Text String

To display a list of the lines in your script that contain the text you are looking for, open the Find
pane, enter your search string and click Find All. The Find Results pane opens and displays a list of
the lines that contain one or more occurrences of the string. Double-clicking any line
(or selecting the line and clicking Goto Current) displays the corresponding line in the Edit pane
and highlights the first occurrence of the string. To display each line in turn, click Goto Next.
If you select All open documents before clicking Find All, the search results will include all
your open scripts. The “File” and “Location” columns in the Find Results pane show you in
which script the string was found.

Bookmarking all Occurrences of a Text String

To add a bookmark to every line that contains the text you are looking for, enter a search string,
and click Mark All. A bookmark is inserted on every line that contains the string. Note that if a
line already contained a bookmark, the bookmark will be removed. To add bookmarks to all your
open scripts, select All open documents before clicking Mark All.
For more information about bookmarks, see “Using Bookmarks” in Viewing Large or Multiple
Files.

Using Regular Expressions to Find Text

You can use regular expressions to search for text in your script. Regular expressions are a flexible
and powerful notation for finding patterns of text. To use regular expressions, open either the Find
pane or the Replace pane and select the Use check box. When you next click Find Next, your
search string will be evaluated as a regular expression.
When a regular expression contains ordinary characters, the text being searched must match
those characters. However, regular expressions also use a number of characters that have a
special meaning. The following table provides a summary of the most common special characters
used in regular expressions. The special characters are shown in bold and are case sensitive—for
example, \U is not the same as \u.
Regular Expression Description
. Any character (including newline).
[abcn-z] Any of the characters a, b, c, n, o, p, ..., z.
[^abcn-z] Any characters except a, b, c, n, o, p, ..., z.
\w Any alphanumeric character (including accents) or
underscore (_).
\l Any lower-case character (including accents).
\u Any upper-case character (including accents).
\d Any numeric character.
\s A whitespace character.
^xxx xxx at the beginning of a line.
xxx\r$ xxx at the end of a line. In IBM® SPSS® Data
Collection Base Professional, you must use \r$ to
find text at the end of a line instead of the more
typical $.
xxx|yyy Either xxx or yyy.
(xxx) Grouping (subexpression).
x* Zero or more occurrences of x.
x+ One or more occurrences of x.
x? Zero or one occurrences of x.
(xxx){m} Exactly m occurrences of xxx.
(xxx){m,n} At least m and at most n occurrences of xxx.
\ The escape character that you use to match
characters that have a special meaning in regular
expressions, such as the following characters , . ? {
} [ ] ( ) $ ^ *. For example, to match the { character,
you would specify \{.

Examples

Example Matches
abcd The character sequence abcd anywhere in a line.
^abcd The character sequence abcd at the beginning of
a line.
^\s*abcd The character sequence abcd at the beginning of a
line after zero or more spaces.
abcd\r$ The character sequence abcd at the end of a line.
\.txt\r$ The character sequence .txt at the end of a line.
[^A-Z]+ Any character that is not an uppercase English letter.
[0-9]+\r$ Any digits in the range 0-9 that appear at the end
of a line.
^\$ A dollar sign at the beginning of a line.
\[\] The [ character and the ] character.

For a more detailed description of the regular expression syntax, see
http://www.boost.org/libs/regex/doc/syntax.html

Debugging
IBM® SPSS® Data Collection Base Professional has a number of features to help you identify
and correct bugs (errors) in your mrScriptBasic code. You can debug mrScriptBasic script
(.mrs) files, the routing sections of interview script (.mdd) files, and the Event sections of data
management script (.dms) files. Other sections in a .dms file are not suitable for debugging because
they are properties sections and do not contain mrScriptBasic code.

Bugs fall into two main categories:

Syntax errors. These are improperly formed statements that don’t conform to the rules of the
scripting language. Syntax errors include spelling and typographical errors, and are generally
caught when parsing the script before executing it. For more information, see the topic Syntax
Errors on p. 46.

Semantic errors. These occur when your script’s syntax is correct, but the semantics or meaning
are not what you intended. Semantic errors might not be caught during the parsing stage, but will
cause the script to execute improperly. Semantic errors might cause the script to crash, hang, or
produce an unintended result when executed.

When your script is not producing the results that you expect, it usually means that you have made
a semantic error. This type of error is less easy to track down than a simple syntax error. However,
Base Professional provides a number of features to help you:
„ Stepping Through the Code
„ Setting Breakpoints
„ Examining the Values of Variables
„ Showing Line Numbers

Syntax Errors

IBM® SPSS® Data Collection Base Professional can detect some syntax errors when parsing
the script before executing it. Examples of these errors are misspelling a keyword or the name of
a variable.

The following code includes two syntax errors: a spelling error on line 3 (the final “n” has been
omitted from the name of the Position variable) and a missing “Then” keyword in the If
statement on line 5.

Dim Position

Positio = Find("ABDCDEFGHIJKLM", "H")

If Position > 5
Position = Position + 3
End If

If you attempt to run this code, Base Professional will detect and underline the errors during the
initial parsing stage and will stop before executing the code, displaying a message describing the
problem below the error. If the error is due to a specific word then that word will be underlined,
otherwise the entire line will be underlined.

When you press Esc to close the error message, the underlining remains, making it easy to
subsequently identify the exact position of the errors.

Note that you cannot edit your script while you are in debugging mode—you need to stop the
debugging session first. Do this by pressing Ctrl+F5 or choosing Stop from the Debug menu. If
there has been an error, debugging will stop automatically, putting you instantly in edit mode.

Some syntax errors cannot be detected until the code is actually executed. An example of this
type of error is spelling a function name incorrectly. For example, suppose we correct the above
example and then deliberately misspell the Find function as “Fnd”.

Dim Position

Position = Fnd("ABDCDEFGHIJKLM", "H")

If Position > 5 Then
    Position = Position + 3
End If

When we attempt to run the code, Base Professional finds that there is no function called “Fnd”,
so it underlines it and displays an execute error message similar to the parser error message we
saw before. However, the text of the message indicates that it is an execute error.

Because Base Professional pinpoints the exact position of any syntax errors, identifying and
correcting the problems is usually fairly straightforward. Identifying and correcting semantic
errors in your script is generally more challenging.
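For comparison, here is the example script with both the missing “Then” keyword and the misspelled function name corrected, which parses and executes without errors:

```mrScriptBasic
Dim Position

Position = Find("ABDCDEFGHIJKLM", "H")

If Position > 5 Then
    Position = Position + 3
End If
```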

Stepping Through the Code

Stepping through code means executing the code one line at a time. Stepping allows you to see
the exact sequence of execution and the value of the various variables at each step. This can be
invaluable when your script is not giving the results that you expect.

When you step through a script and reach an Include file, the Include file is opened so that you
can step through it as well.
E To step through the code one line at a time, press F10 or choose Single Step from the Debug menu.

IBM® SPSS® Data Collection Base Professional then executes the current line of code and
moves to the next line, which it highlights.
E To execute the next line of code, press F10 again or choose Single Step from the Debug menu.

Note that if you are stepping through an OnNextCase Event section in a DMS file, you will step
through each case, which could be a lengthy process if there are many cases.
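For example, stepping into an OnNextCase Event section like the following sketch halts once for every case in the job. The age and agegroup variables are assumed here, not taken from any particular script:

```mrScriptBasic
Event(OnNextCase, "Per-case processing")
    ' Single-stepping here executes once for each case read
    ' from the input data source.
    If age > 65 Then
        agegroup = {senior}
    End If
End Event
```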

Setting Breakpoints

When you do not want to step through an entire script line by line, you can set breakpoints at the
points in the code at which you want the execution to stop, so that you can step through the
following lines. This is particularly useful when you have a rough idea where the problem lies.
E There are several ways to set a breakpoint. Move the cursor to the line on which you want to
set the breakpoint, and then:
„ Choose Toggle Breakpoints from the Debug menu.

-or-
„ Press Ctrl+B or F9.

-or-
„ Click the Edit pane to the left of the line number.

IBM® SPSS® Data Collection Base Professional then highlights the line in red and adds the
breakpoint to the list in the Breakpoints pane.

E To run the code up to the next breakpoint, press F5 or choose Start or Continue from the Debug
menu.
Base Professional executes the code up to the breakpoint and then it stops, so that you can examine
the values of any variables and object properties and step through the following lines.

E To clear a breakpoint, move the cursor to the breakpoint, and then:


„ Choose Toggle Breakpoints from the Debug menu.

-or-
„ Press Ctrl+B or F9.

-or-
„ Click the Edit pane to the left of the line number.

E To clear all breakpoints, press Ctrl+Shift+F9 or choose Clear All Breakpoints from the Debug menu.

Examining the Values of Variables

When you are debugging a script and its execution is halted (for example, at a breakpoint or when
you are stepping through the code line by line), you can move your mouse over any variable to get
IBM® SPSS® Data Collection Base Professional to display its current value.

Checking the Syntax of Functions

If you have used any of the IBM® SPSS® Data Collection functions, you can move your mouse
over them to get Base Professional to display the correct syntax. This is particularly useful when,
for example, functions have several parameters and you want to check that you have specified
them in the correct order.

Using the Locals Pane to Examine Values

When you are debugging a script, Base Professional displays in the Locals pane the current
value of all of the variables in the current scope.

In addition, Base Professional shows in the Locals pane the current value of the properties of
the objects in the current scope. You can expand and inspect objects by clicking the plus icon
next to the object, but note that an object can only be expanded while the script is halted during
debugging, not when the script has completed.

You can also use the Locals pane to check the value of the Err object’s properties. This is
particularly useful when you are using error handling to prevent the script from failing and an
error occurs.
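For example, when error handling keeps a script running past an error, you can halt on the line after the error and read the Err object’s Number and Description properties in the Locals pane. The following is a minimal sketch:

```mrScriptBasic
Dim Total, Count, Average

Total = 10
Count = 0

On Error Resume Next       ' continue running after errors
Average = Total / Count    ' division by zero raises an error
' Execution continues here; Err.Number and Err.Description now
' describe the error and can be inspected in the Locals pane.
On Error Goto 0            ' restore default error handling
```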

Using the Expressions Pane

You can use the Expressions pane to evaluate an expression or to inspect the value of an object’s
properties. There are two ways of doing this:
„ Type the expression in the text box and click Evaluate.
„ Type a question mark, then type the expression and press Enter. For example, type ?a and
press Enter to display the value of the a variable.

You can also re-evaluate an expression that you previously evaluated by selecting the text of the
expression in the text box and clicking Evaluate.

For example, you can use the Sqrt function in the Expressions pane to evaluate the square root
of the a variable. Note that sometimes you might need to resize the pane in order to see the
output, for example, after evaluating the value of an MDM property.

You can also use the Expressions pane to change the value of a variable. For example, if you type
a = 15 into the text box and press Enter (or alternatively, click Execute), Base Professional will
set the value of the a variable to 15.

The Expressions pane can also be used to declare variables, which you can then use to store the
current value of another variable when you are debugging a script. For example, type dim temp
in the text box and press Enter to declare a variable called temp. Then assign the value of a to
temp by typing temp = a and pressing Enter. To restore the original value of a at any point, type
a = temp and press Enter.

Showing Line Numbers

IBM® SPSS® Data Collection Base Professional can optionally show line numbers on the left
side of the Edit pane. This is particularly useful when you use mrScript Command Line Runner to
run your mrScriptBasic files, because it reports the line number in the error messages.

E To show line numbers, choose Options from the Tools menu.

E Click on Show Line Numbers, and then select True from the drop-down list.

E Click OK.

IBM SPSS Data Collection Base Professional in Other Languages


You can display the application in a language other than English. You can change the language at
any time by following the appropriate instructions below for your computer’s operating system.
Close the application before making these changes. You can change the language back to English
at any time, or even switch back and forth between various languages.

To Display the application in a Different Language

On a Windows XP Professional or Windows Server computer:

1. Refer to the Microsoft article Set up Windows XP for multiple languages
(http://www.microsoft.com/windowsxp/using/setup/winxp/yourlanguage.mspx).

2. Delete all files (for example, Default_DockingLayout.xml and Options.xml) from the following
folders:
„ For IBM® SPSS® Data Collection Author: C:\Documents and Settings\<Windows user
name>\Application Data\SPSSInc\IBM\SPSS\DataCollection\6\Author\
„ For IBM® SPSS® Data Collection Survey Reporter: C:\Documents and Settings\<Windows
user name>\Application Data\SPSSInc\IBM\SPSS\DataCollection\6\Survey Reporter\

3. Modify the appropriate application configuration file to include the desired culture value:
„ For Author: [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Author\Author.exe.config
„ For Survey Reporter: [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Survey
Reporter\Reporter.exe.config
„ For IBM® SPSS® Data Collection Base Professional:
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base Professional\mrstudio.exe.config

For example, add the following to change the language to French:
<appSettings>
<add key="Culture" value="fr-FR" />
</appSettings>

Language            Culture Value
Chinese             zh-CN
English             en-US, or blank
French              fr-FR
German              de-DE
Italian             it-IT
Japanese            ja-JP
Spanish             es-ES

On a Windows Vista Ultimate Edition computer:

1. In the Windows Control Panel, open Regional and Language Options.

2. Click the Keyboard and Languages tab. In Display languages, click Install/uninstall languages…

3. Click How can I install additional languages? for information on installing additional languages (for
example Japanese or Simplified Chinese) in Windows Vista.

4. Follow the instructions for installing other display languages.

5. Follow the above steps 2 and 3 in “On a Windows XP Professional or Windows Server
computer”.

Using IBM SPSS Data Collection Base Professional to develop interviews


This section describes the features of the IBM® SPSS® Data Collection Base Professional
integrated development environment (IDE) that can help you with the development and testing of
interviews that will be activated in version 6.0.1 of IBM® SPSS® Data Collection Interviewer
Server. Note that many of the features described are only accessible after you have installed the
Base Professional Interview Option.
The IBM® SPSS® Data Collection Developer Library also includes a detailed description of
the syntax that is used for interview scripting, and a tutorial that demonstrates how to write an
interview script. For more information, see the topic Interview Scripting on p. 503.

Which file type should I use?

You create interview scripts as .mdd files. This is the standard file type used by IBM® SPSS®
Data Collection products to store metadata (that is, the questions, other variables, and routing
information for a survey). In different contexts, a .mdd file might also be known as a metadata
document file or a questionnaire definition file.

Creating an interview script (.mdd) file

E From the File menu, choose:


New > File

This opens the New File dialog box.

E From the list of available file types, select Metadata File, and click Open.

Importing metadata from other file types

You can create an interview script (.mdd) file by importing metadata from any proprietary data
format for which a read-enabled Metadata Source Component (MDSC) is available (refer to
the Available DSCs topic in the IBM® SPSS® Data Collection Developer Library for more
information.). For example, you can import metadata from a .sav file. To import metadata from a
proprietary data format, choose Import Metadata from the IBM® SPSS® Data Collection Base
Professional File menu.

You can also export the metadata in a .mdd file to any proprietary data format for which a
write-enabled MDSC is available. To export metadata, choose Export Metadata from the Base
Professional File menu.

Working with IVS files

A .ivs file contains only the script portion of an interview script (.mdd) file. Unlike a .mdd file, a
.ivs file contains only plain text and can therefore be opened and edited in a text editor such as
Notepad. You might want to create .ivs files so that you can store them in a version management
system and easily see the differences between versions of your scripts. To create a .ivs file, open
the .mdd file in Base Professional and choose Export Metadata from the File menu. Note that
you can also use the IBM® SPSS® Data Collection Metadata Model - Compare accessory to
compare .mdd files. Refer to the MDM Compare topic in the Data Collection Developer Library
for more information.
You can also import a .ivs file into Base Professional and save it as a .mdd file. To do this,
choose Import Metadata from the Base Professional File menu. However, a .ivs file is not a
replacement for a .mdd file as it contains only a subset of the information stored in a .mdd file. For
example, .ivs files do not store translations or custom properties. For that reason, do not import
a .ivs file and save it with the same name as the .mdd file that you originally exported it from,
because you will lose any additional information that was stored in the .mdd file.
If you attempt to save a .mdd file that contains syntax errors in the metadata section, Base
Professional instead saves the script portion of your interview script as a .tmp.ivs file, as these
types of errors cannot be saved in an .mdd file. The .tmp.ivs file is saved in the same location as
your .mdd file. You can recover your latest changes by importing the .tmp.ivs file into Base
Professional, but remember that the .tmp.ivs file might not contain all the information stored
in your original .mdd file.

How a metadata script is displayed in IBM SPSS Data Collection Base Professional

When you close and reopen an interview script you might find that the script in the metadata
section does not look the same as when you closed it. This is because the IBM® SPSS® Data
Collection Data Model component used to display mrScriptMetadata (which is the language used
in the metadata section) determines how the syntax of the language is displayed. This is designed
to make mrScriptMetadata easier to read, for example, by defining a single question over several
lines rather than on one long line. Other changes that you might notice are:
„ Defined lists (also known as shared lists) are moved to the top of the script so that they appear
before any questions that use them.
„ Page fields are moved to the bottom of the script so that they appear after any questions
that they reference.
„ A label is added to any question or category that does not have one. The value of the label
is made the same as the question or category name. This makes it easier for you to enter
your own label texts at a later date.
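For example, a metadata section that originally defined a question before the shared list it uses might be displayed like the following sketch when the script is reopened, with the list moved to the top and a label generated for a category that did not have one. The names used here are assumed:

```mrScriptMetadata
BrandList "BrandList" define
{
    BrandA "Brand A",
    BrandB "BrandB"    ' label generated from the category name
};

FavoriteBrand "Which brand do you prefer?"
    categorical [1..1]
    {use BrandList};
```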

You can continue to amend your metadata script, or write any new interview scripts, in any
style you prefer, as the instructions that determine how the syntax is displayed are only applied
when the interview script is opened.

Working with multiple contexts

Contexts (or “user contexts”) define different uses for metadata. When you create an interview
script (.mdd) file in IBM® SPSS® Data Collection Base Professional, Question and Analysis
contexts are added to the metadata. The Question context is intended to store texts and custom
properties that are to be used in the data collection phase of a survey, and the Analysis context is
designed for texts and properties that are to be used when analyzing the collected case data. For
example, a question text in the Question context might say “What is your name?”, whereas the
text for the same question in the Analysis context might just say “Participant’s name”.
When you create an interview script file, Base Professional makes the Question context the
default (or “base”) context. Therefore, when you add question and category labels to the metadata
section of a new interview script, you are actually defining these texts in the Question context.
To define labels and custom properties in other contexts, select Contexts from the Base
Professional View menu, and select the context or contexts that you want to view. For each
context that you choose, Base Professional adds an “area-code” definition (which defines a
combination of language, context, and label type) near the top of the interview script’s metadata
section, like that in the following example:
ANALYSIS lcl(en-US, Analysis, Label);

The name of the area-code, known as the “area name”, is also added to the label of every
question and category in the metadata section, as highlighted in the following example:
name "What is your name?" ANALYSIS: - text;

To set the label text in the alternative context, simply replace the dash with the text, for example:
name "What is your name?" ANALYSIS: "Participant's name" text;

To set a custom property in the alternative context, replace the dash with syntax like that in the
following example, which sets the Max property in the SAV context:
name "What is your name?" SAV: [ Max = 30 ] text;

If you want to set both a label text and a custom property in an alternative context, or you want
to set label texts and custom properties in multiple alternative contexts, make sure that you add the
area name before every label text or custom property. For example:
name "What is your name?"
ANALYSIS: "Participant's name"
SAV: "Name"
SAV: [ Max = 30 ]
text;

To remove an existing text or custom property from an alternative context, replace the text
string (including the quotation marks) or property setting (including the brackets) with a dash, but
do not delete the area name or the colon that comes after it. To hide a context, select Contexts from
the Base Professional View menu, and cancel the selection for that context.

Note: The Contexts menu option can be used only to view or hide contexts that are already present
in the interview script file. To add or remove contexts, see Working with multiple languages.

Working with multiple languages

Interview script (.mdd) files can store texts in different languages. For example, your interview
script might contain Spanish and Japanese translations of the question text “What is your name?”.
In IBM® SPSS® Data Collection Base Professional, you can use the MDM Label Manager
to add languages to an interview script file. You can also use MDM Label Manager to remove
languages. To start MDM Label Manager, open a .mdd file in Base Professional and then choose
Manage Languages and Labels from the Base Professional Tools menu. For more information
about MDM Label Manager, see the Configuring Languages topic in the IBM® SPSS® Data
Collection Developer Library.
Once you have added a language to your interview script, you can use the IBM® SPSS® Data
Collection Translation Utility to translate the question texts.

Note: You can also use the MDM Label Manager to add contexts to an interview script file, or
remove contexts. For more information about contexts, see Working with multiple contexts.

Adding a routing context

By default, each new interview script is created with a single routing context called Web, which is
intended to be used for an Internet or intranet-based survey. If your interview will be used for
more than one type of survey, you can create additional routing contexts by choosing Add Routing
Context from the Tools menu. Make sure that you specify a name for your routing context in the
Add Routing Context dialog box, as routing contexts cannot be renamed.
For example, you might want your interview to also be used for a phone-based survey or
a paper-based survey. If so, you should add routing contexts called CATI and Paper to your
interview script.
If you don’t want your script to include the Web routing context, delete that routing context
before you add any code to its routing section, and then create a new routing context with the
name that you want.

Deleting a routing context

When you delete a routing context, any code in the routing section will be lost, so copy the code
elsewhere if you want to keep it. Then follow these steps:
E In the Edit pane, click on the tab of the routing context that you want to delete.

E From the Tools menu, choose Remove Routing Context.

Amending the Paper routing context

If you used other IBM® SPSS® Data Collection authoring tools to create your interview script,
the script might already contain a read-only routing context called Paper. Although you cannot
change this routing context, you can copy its code into a new routing context and amend the code
there. If you do this, you will need to change the activation options when you activate your
interview script so that the interview will use your new routing context. For more information, see
the topic Activating an interview on p. 87.

Viewing and navigating an interview script

By default, IBM® SPSS® Data Collection Base Professional displays an interview script (.mdd)
file’s metadata and routing sections as separate tabs in the Edit pane. However, if you want to
view the metadata section and one of the routing sections at the same time (that is, as two halves
of the Edit pane), you can do so by changing the value of Base Professional’s View Metadata
and Routing option. Note that you will still need to use the tabs at the bottom of the Edit pane
to select different routing contexts.
If you use Base Professional’s find and replace features, they will apply to the section (metadata
or routing) that has focus.

Navigating an interview script

The following aids to navigation are available when you open an interview script in Base
Professional:
To switch between the metadata and routing sections using the keyboard, press Ctrl+PageUp
or Ctrl+PageDown.

To find a question in the metadata section, right-click the question in the Fields folder of the
attached metadata viewer and choose Goto Question from the shortcut menu. Note that this also
highlights the same question in the routing section. Alternatively, with the routing section
visible, right-click the name of the question in the routing section and choose Goto Definition
from the shortcut menu.

To find a question in the routing section, right-click the question in the Fields folder of the
attached metadata viewer and choose Goto Question from the shortcut menu. Note that this also
highlights the same question in the metadata section.

Using the interview metadata viewer

By default, each interview script opens in IBM® SPSS® Data Collection Base Professional with a
Metadata Model (MDM) metadata viewer attached on the right-hand edge of the Edit pane. The
metadata viewer can be useful when you are writing and debugging your interview script. For
example, you can use the Copy as Ask shortcut menu option to generate mrScriptBasic code to ask
some or all of the questions defined in the metadata section of your script. You can then paste
this code into the routing section of the script.
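For example, for a metadata section that defines questions named q1 and q2 (assumed names), Copy as Ask generates routing code along these lines, which you can paste into the routing section:

```mrScriptBasic
q1.Ask()
q2.Ask()
```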
The metadata viewer displays the metadata only for the interview script that it is attached to
and cannot be used to display any other metadata document. If you need to view another metadata
document, you can either use Base Professional’s independent metadata viewer (in the Base
Professional Metadata pane), or open the other metadata document from the File menu.
Apart from adding a data source and changing the current data source, the metadata viewers
in Base Professional cannot be used to make permanent changes to the metadata. If you need to
make permanent changes, use MDM Explorer. Refer to the MDM Explorer topic in the IBM®
SPSS® Data Collection Developer Library for more information.
59

Base Professional

If you do not want a metadata viewer to open automatically whenever you open an interview
script, change the setting of Base Professional’s Initially Show Metadata View option. If an open
interview script does not have a metadata viewer attached, you can open one by right-clicking in
the Edit pane and from the shortcut menu choosing Show Metadata.

Toolbar buttons

The following table lists the toolbar buttons in the metadata viewer.
Button Description
Show or hide the Properties Pane.
View Allows you to show or hide the various types of objects that the metadata viewer can display.

Shortcut menu options

The following table describes all the shortcut menu options that are available when you right-click
in a metadata viewer. Note that these options are also available in the independent metadata
viewer in Base Professional’s Metadata pane.
Change Language. If your metadata document contains multiple languages, you can choose another language.

Change Version. If your metadata document contains multiple versions, you can choose another version.

Goto Question. If you are viewing the metadata section of your interview script, you can quickly locate the mrScriptMetadata code for any question. Right-click on the question name in the Fields folder, and choose this option. This option is useful when working on large scripts.

Copy Node. Copies to the Windows clipboard all the object and folder names under a folder (node). For example, if you right-click on the Fields folder and choose this option, the clipboard will contain a list of all the question names.

Copy as Select Statement. Copies to the Windows clipboard the template code for a mrScriptBasic Select Case statement. The object or folder name that you right-clicked becomes the expression that the Select Case statement tests. If you choose this option for the Categories folder of a categorical question, each category name becomes a separate Case clause. You can then paste the code into the routing section of your interview script.

Copy as Ask. Copies to the Windows clipboard the mrScriptBasic code to ask a question. If you selected more than one question, the code includes individual Ask statements for all the questions that you selected. You can then paste the code into the routing section of your interview script. This menu option is only available for questions in the Fields folder.

Copy as Show. Copies to the Windows clipboard the mrScriptBasic code to show a question that cannot be answered. If you selected more than one question, the code includes individual Show statements for all the questions that you selected. You can then paste the code into the routing section of your interview script. This menu option is only available for questions in the Fields folder.

Copy as Script. Copies to the Windows clipboard the mrScriptMetadata definition of a question. If you selected more than one question, the code includes definitions for all the questions that you selected. This option is useful for creating a similar question in the metadata section of your interview script or copying one or more questions to another script. This menu option is only available for questions in the Fields folder.

Add Datasource. Opens the Data Link Properties dialog box, which you can use to add a data source to the interview script. The data source added automatically becomes the current (default) data source for the interview script. Data sources are used to store case data when you run your script. For more information about creating case data and how to use the Data Link Properties dialog box, see Creating case data.

Change Datasource. If your interview script has more than one data source, you can choose another data source to be the current (default) data source. Data sources are used to store case data when you run your script. For more information, see the topic Creating case data on p. 78.
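As an illustration of the Copy as Ask and Copy as Select Statement options, the code generated for a hypothetical categorical question named day (with categories monday, tuesday, and saturday) might resemble the following sketch; the exact template that Base Professional generates may differ:

```mrScriptBasic
' Copy as Ask generates a statement like this for each selected question:
day.Ask()

' Copy as Select Statement generates a template like this, with one
' Case clause per category; fill in the body of each clause yourself:
Select Case day
    Case {monday}
        ' ...
    Case {tuesday}
        ' ...
    Case {saturday}
        ' ...
End Select
```

You can paste either fragment into the routing section of your interview script and edit it as required.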

Testing an interview

You can test your interview script in three different ways:


„ Using the Debug option. The script will pause when it reaches the first question in the current
routing context. When you answer the first question, the script will then pause at the next
question, and so on. Use this option to test the appearance of the interview, and to interact
with the interview in a web browser. See below for details of how to use this option.
„ Using the Single Step option. The script will pause on the first line of code in the current routing
context, and you can then continue to single step through the rest of the code. This option is
useful if your routing section contains mrScriptBasic statements such as If and Select and you
want to be able to follow the flow of your code. See below for details of how to use this option.
„ Using the Auto Answer option. IBM® SPSS® Data Collection Base Professional will run the
script and automatically generate random answers to the questions in the current routing
context. For more information on using this option, see Running an interview automatically.

Selecting a Web browser

By default, Base Professional will display the interview questions in the Browser pane. To display
the interview in an external web browser, change the setting of Base Professional’s Use Built-in
Browser option.
At present, the only external browser that is supported by Base Professional is Microsoft
Internet Explorer. Future versions of Base Professional might include support for other browsers.

Testing your interview using the Debug option

1. In the Edit pane, click on the tab of the routing context that you want to run. You can skip this step
if your interview script has only one routing context.

2. From the Debug menu, choose Start. Alternatively, press F5.


Note: You can also run an interview by choosing Start Without Debugging from the Base
Professional Debug menu or pressing Ctrl+F5. The difference between the two methods is that the
“Start” (F5) option will stop execution if it encounters a breakpoint in the current routing context.

3. To reveal the Browser pane, click on the Browser pane tab or press Alt+9 twice. If you have
chosen to use an external browser, a browser window will open automatically.

4. As each question appears, you can choose to answer the question and click the Next button on the
browser page, or press F5 to let Auto Answer generate a random answer to the question. If you are
using an external browser, you must switch back to Base Professional before pressing F5.

5. To stop the interview at any time, click the Stop button on the browser page or press Shift+F5. If
you are using an external browser, be aware that closing the browser window while an interview is
running does not stop the interview.

Testing your interview using the Single Step option

1. In the Edit pane, click on the tab of the routing context that you want to step through. You can
skip this step if your interview script has only one routing context.

2. From the Debug menu, choose Single Step. Alternatively, press F10.

3. As the script pauses on each line of code, press F10 to execute that line and continue to the
next line.

4. To stop the script at any time, press Shift+F5.


You might find it useful to have the Locals pane visible while you are stepping through your code
so that you can see the current values of all the variables in your routing context.

Running an interview automatically

You can use the Auto Answer feature of IBM® SPSS® Data Collection Base Professional to
generate random answers to the questions in the current routing context.

„ The Auto Answer dialog box provides options that control the automatic generation of
answers when you test questionnaires. For more information, see the topic Auto Answer
dialog box on p. 66.
„ After running an interview using Auto Answer, details of the questions and answers are listed
in the Answer Log pane. You can save these details to a data source. For more information,
see the topic Saving auto answer data to a data source on p. 67.

How Auto Answer works

Base Professional will choose a valid answer for each question based on the definition of the
question in the metadata section of your interview script. For example, if you have specified a
valid range of 10 to 20 for a numeric question, Base Professional will randomly choose a number
from within that range. If no valid range has been specified, Base Professional will choose any
valid value for the type of question.
Text questions are treated slightly differently. Base Professional will create an answer
consisting of part or all of a standard phrase, which is repeated as many times as required to fill the
answer. If the length of the text question is specified (or implied) as a range, the length of the
answer will be any value within that range. If a fixed length has been specified, the length of the
answer will be that value.
If a non-categorical question includes factors, one of the factor values will be chosen at random.
If any type of question includes special responses such as Don’t Know and Not Answered, Base
Professional may choose one of those responses as an answer.

Note that in the following situations, Base Professional might generate an invalid answer:
„ When a question contains a validation expression, because expressions are ignored.
„ When a question contains a range expression that consists of two or more ranges, for example
10..20, 40..50. Only the lowest and highest values in the expression are observed, so that in this
example Base Professional will generate any answer between 10 and 50.
In these situations, you might want to specify hints to reduce the range of answers that Base
Professional can choose from. See “Using Hints” below for more information.

Because Base Professional is unable to generate a year value that begins with a 0, it will never
choose an answer earlier than 01-Jan-1000, even if the specified or implied range for a date
question includes earlier dates. In addition, if the range expression for a date question specifies a
maximum date earlier than 01-Jan-1000, an invalid answer will be generated.
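For example, given metadata definitions such as the following (the question names are hypothetical), Auto Answer would choose a random number between 10 and 20 for the first question, and a repeated standard phrase of between 1 and 50 characters for the second:

```mrScriptMetadata
visits "How many times did you visit?" long [10..20];
comments "Do you have any other comments?" text [1..50];
```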

Running Auto Answer

Each time that you run Auto Answer, you can specify the following options:
„ The number of times that the interview will be run in succession.
„ The maximum number of attempts that Base Professional can have at generating a valid answer
to a question. If that limit is exceeded, Auto Answer stops and displays an error message.
„ Whether Base Professional should refer to hints in the metadata document (.mdd) file when
answering a question.

You can follow the progress of Auto Answer using the Auto Answer pane. Initially, this
displays a list of all of the questions defined in the metadata section of your interview script. As
Auto Answer progresses, questions that have been answered are highlighted in blue, and a number
alongside each question shows the number of times that it has been answered.
The answers that Base Professional has generated are displayed in the Output pane. If you
have enabled the option to create case data from the interview, you will also be able to see the
answers in the case data once Auto Answer has finished running. For more information, see the
topic Creating case data on p. 78.

1. In the Edit pane, click on the tab of the routing context that you want to run Auto Answer for. You
can skip this step if your interview script has only one routing context.

2. From the Debug menu, choose Start With Auto Answer. Alternatively, press F6.
This displays the Auto Answer dialog box.
Note: You can also run Auto Answer by choosing Auto Answer Data Generation from the Base
Professional Tools menu. The difference between the two methods is that the “Start With Auto
Answer” (F6) option will stop execution if it encounters a breakpoint in the current routing context.

3. Change any of the options in the dialog box to the settings that you want, and click Start.

4. To reveal the Auto Answer pane, click on the Auto Answer pane tab or press Alt+8 twice. To
reveal the Output pane, click on the Output pane tab or press Alt+6 twice.

5. To stop Auto Answer at any time, press Shift+F5.

Toolbar Buttons

The following table lists the toolbar buttons in the Auto Answer pane.
Button Description
Use this button to see the progress of individual
questions.
Use this button to see the overall progress of the
interview.
Do not show questions within loops.

Show questions within loops.

Using auto answer playback

When auto answer playback is enabled, question values can be provided via a data source.

The playback options are set in the Auto Answer dialog box. For more information, see the topic Auto Answer dialog box on p. 66.

Using hints

You can use hints to reduce the range of values that Base Professional can choose from when
generating an answer in Auto Answer.
To be able to use hints, you must be comfortable using MDM Explorer to create and amend
custom properties in your .mdd file. Note that you cannot use the metadata viewers in Base
Professional to update custom properties. Make sure that you close your interview script in
Base Professional before using MDM Explorer.

You can create hints for either Field or VariableInstance objects. Always use the Field object
unless you are creating different hints for each question in a loop. All custom properties used to
specify hints must be created in the AutoAnswer context, so check that this context exists in your
.mdd file and create it if necessary before trying to create hints.

The following table describes the custom properties that you can create.
AllowedCategories. For categorical questions, this string property defines a subset of categories from which Base Professional can choose an answer. For example, if a question’s category names are the seven days of the week, you could set the value of this property to monday,tuesday,saturday to specify that Base Professional can choose the answer only from those three categories. When specifying the value of this property, do not enclose the value within braces. When specifying more than one category name, separate the category names using commas. If the question includes factors, specify the category names as described above and not the factor values.

Max. For numeric and date questions, this property defines the maximum value that can be used as an answer. For text questions, this property defines the maximum length of the answer. For categorical questions, this property defines the maximum number of categories that Base Professional should choose. For numeric and date questions, set the data type of this property to match the question type (either long, double, or date). For text and categorical questions, set the data type to long.

Min. For numeric and date questions, this property defines the minimum value that can be used as an answer. For text questions, this property defines the minimum length of the answer. For categorical questions, this property defines the minimum number of categories that Base Professional should choose. For numeric and date questions, set the data type of this property to match the question type (either long, double, or date). For text and categorical questions, set the data type to long.

Value. The actual value that Base Professional should use for the answer. You can use this hint for any type of question. If the hint is for a categorical question, specify the category name or names using the same method as that described above for AllowedCategories. For numeric and date questions, set the data type of this property to match the question type (either long, double, or date). For text and categorical questions, set the data type to string.

Notes

„ Decimal values for the Min, Max, and Value custom properties must always be specified using
a dot (.) for the separator, and dates must always be specified in the yyyy/mm/dd format.
„ To ensure that Auto Answer does not select any categorical responses, set the Max value to
zero (0).
For non-categorical questions, you can specify a special response such as Don’t Know as the
answer. To do this, create the Value custom property on the question and set the value of the
property to Codes. Then create a Value custom property on the Codes object for the question, and
set the value of the property to the name of the code that defines the special response.
When you run Auto Answer, you can confirm that a hint has been used by looking at the
answers in the Output pane. Note that if you specify an invalid hint for the type of question, such
as a text value for a numeric question, Base Professional will write a warning message to the
Output pane and ignore the hint. However, if the Value property specifies an invalid category or
code for a question, Auto Answer stops and an error message is displayed.
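Because the metadata viewers in Base Professional cannot update custom properties, hints must be created outside Base Professional. As an alternative to working interactively in MDM Explorer, the following mrScriptBasic sketch sets Min and Max hints on a hypothetical question named visits via the MDM object model. The context and property handling shown here is an assumption based on the general MDM model, so verify the details against the Data Collection Developer Library before relying on it:

```mrScriptBasic
Dim mdm

' Open the metadata document read/write (the file name is hypothetical)
mdm = CreateObject("MDM.Document")
mdm.Open("my_interview.mdd")

' Add the AutoAnswer custom property context if your .mdd does not
' already define it, then make it the current context so that the
' properties set below are created in that context (assumed behavior)
mdm.Contexts.Add("AutoAnswer")
mdm.Contexts.Current = "AutoAnswer"

' Create the Min and Max hints on the Field object
mdm.Fields["visits"].Properties["Min"] = 12
mdm.Fields["visits"].Properties["Max"] = 15

mdm.Save()
mdm.Close()
```

As with MDM Explorer, make sure that you close your interview script in Base Professional before running a script like this against the .mdd file.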

Auto Answer dialog box

Use the Auto Answer dialog box to set options that control automatic generation of answers
when you test questionnaires.

Response creation

Create responses from a data source. When selected, auto answer playback is enabled during
interviewing, and the auto answer questions are generated from a specified data source (the
question values are set according to what is found in the data source). This setting is not selected
by default. Random values are generated for questions that do not exist in the specified data
source. When this setting is enabled, valid Data source connection values are required.

Data source connection. Use this field to specify the data source connection string used in auto
answer playback (when the Create responses from a data source option is selected). Clicking the
Edit button displays the Data Link Properties dialog that prompts you to input the appropriate
data source information. For more information, see the topic Data Link Properties dialog box on p. 68.
„ Select table. Indicate whether HDATA or VDATA will be used for querying the data source.
„ Select columns. Select which data source columns will be used in the auto answer playback.
„ Specify where clause. Use this field to provide a valid where clause for querying the data
source.

Create responses using hints from .mdd. Select this option if you want IBM® SPSS® Data
Collection Author to refer to hints in the Advanced Properties tab when answering a question.
When the Create responses from a data source setting is enabled, the data source’s question values
take priority over random values.

Number of cases

Create the same number of interviews as specified in the source above. Select this option when you
want to create the same number of interviews as the record count in the specified data source.
When selected, the total number of interviews created by ALL threads is equal to the source’s
record count. The option is only available when the Create responses from a data source option
is selected.

Interviews to create. Enter the number of times you want the application to run the interview
and fill in random answers for each question.

Maximum attempts to answer a question. Enter the maximum number of attempts that the
application can have at generating a valid answer to a question. If that limit is exceeded, Auto
Answer stops and displays an error message.
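As an example of the Specify where clause field, a clause such as the following (Respondent.Serial is the standard serial number variable; the cutoff value is hypothetical) would restrict playback to the first 100 records in the data source:

```sql
Respondent.Serial <= 100
```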

Saving auto answer data to a data source

After running an interview using Auto Answer, details of the questions and answers are listed in
the Answer Log pane. You can save these details to a data source.

Adding a data source definition

The Configure Data Source dialog provides options for adding, editing, and removing data sources.

1. From the menu, choose Tools > Configure Data Sources, or press Alt+T, F. The Configure Data Source dialog is displayed.

2. Click Add… or press Alt+A. The Data Link Properties dialog displays.

3. Enter the required information on each tab of the Data Link Properties dialog and click OK when finished. For more information, see the topic Data Link Properties dialog box on p. 68.

4. Click OK to exit the Configure Data Source dialog.

Selecting a different data source definition

1. Select the appropriate data source from the Configure Data Source dialog’s Data Source list, then click Set As Current or press Alt+S. The selected data source is set as the current data source.

2. Click OK to exit the Configure Data Source dialog.

Editing a data source definition

1. Select the appropriate data source from the Configure Data Source dialog’s Data Source list, then click Edit... or press Alt+E. The Data Link Properties dialog displays.

2. Enter the required information on each tab of the Data Link Properties dialog and click OK when finished. For more information, see the topic Data Link Properties dialog box on p. 68.

3. Click OK to exit the Configure Data Source dialog.

Removing a data source definition

1. Select the appropriate data source from the Configure Data Source dialog’s Data Source list, then click Remove or press Alt+R. The selected data source is removed.

2. Click OK to exit the Configure Data Source dialog.

Saving the auto answer details to a data source

To save the results to a data source when you run Auto Answer, perform the following step after setting up your data source:

From the menu, choose Tools > Write Data to Database, or press Alt+T, W.

Data Link Properties dialog box

The Data Link Properties dialog box has the following tabs:
„ Provider - provides options for selecting the appropriate OLE DB provider for the type of
data you want to access.
„ Connection - provides access to the Metadata Properties and Metadata Versions dialog boxes.
„ Advanced - provides options for defining additional connection options.
„ All - provides options for editing the initialization properties that are available for the chosen
Provider.

Data Link Properties: Provider

You use the Provider tab in the Data Link Properties dialog box to select the provider you want
to use.

Selecting the appropriate OLE DB Provider


„ To connect to the IBM® SPSS® Data Collection Data Model, select IBM® SPSS® Data
Collection DM-n OLE DB Provider (where n is the version number) from the list of OLE DB
Providers.
„ To connect to another provider, select the appropriate entry from the list of OLE DB Providers
(for example to select a Microsoft SQL Server provider, select Microsoft OLE DB Provider for
SQL Server from the list).

Refer to the appropriate database provider documentation for information on configuring connection strings.
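For reference, the choices made in this dialog box are ultimately combined into an OLE DB connection string. A typical string for the Data Collection provider, reading a Data Collection data file together with its questionnaire definition (the file names are hypothetical), might look like this:

```text
Provider=mrOleDB.Provider.2;Data Source=mrDataFileDsc;Location=household.ddf;Initial Catalog=household.mdd
```

Here Data Source names the case data source component (CDSC), Location points to the case data, and Initial Catalog points to the metadata.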

Data Link Properties: Connection

You use the Connection tab in the Data Link Properties dialog box to define the name, location,
and type of the data to which you want to connect. When you select IBM® SPSS® Data
Collection DM-n OLE DB Provider (where n is the version number) on the Provider tab, a Data Collection-specific Connection tab is displayed.

Metadata Type. Defines the type of metadata. The drop-down list shows the types of metadata for
which you have a metadata source component (MDSC). The default options are:
„ None. Choose this option if you want to connect to case data only.
„ Data Collection Metadata Document. Selects metadata that is in the standard IBM® SPSS®
Data Collection Data Model format, which is a questionnaire definition (.mdd) file.
„ ADO Database. Selects metadata that is in an ActiveX Data Objects (ADO) data source.
„ Data Collection Log File. Selects metadata in a standard Data Collection log file.
„ Data Collection Participation Database. Selects metadata that is in a IBM® SPSS® Data
Collection Interviewer Server Administration project’s Sample and HistoryTable tables.
„ Data Collection Scripting File. Selects metadata that is in a mrScriptMetadata file.
„ In2data Database. Selects metadata that is in an In2data database (.i2d) file.
„ Quancept Definitions File (QDI). Selects metadata in a IBM® SPSS® Quancept™ .qdi file
using the QDI/DRS DSC.
„ Quancept Script File. Writes the metadata in an MDM document to a Quancept script (.qqc) file.
„ Quantum Specification. Writes the metadata in an MDM document to a IBM® SPSS®
Quantum™ specification.
„ Quanvert Database. Selects metadata that is in a IBM® SPSS® Quanvert™ database.
„ Routing Script File. Writes the routing section of an MDM document to a script that defines the
routing required for interviewing.

„ SPSS Statistics File (SAV). Selects metadata that is in an IBM® SPSS® Statistics .sav file.
„ Surveycraft File. Selects metadata that is in a IBM® SPSS® Surveycraft™ Validated
Questionnaire (.vq) file.
Metadata Location. The name and location of the metadata. The way you specify this depends on
the type of metadata that you selected in the previous drop-down list:
„ Data Collection Metadata Document. The name and location of the .mdd file.
„ ADO Database. The name and location of a .adoinfo file, which is an XML file that specifies the
connection string for the target data source and the name of the target table in that data source.
„ Data Collection Log File. The name and location of the log file. Typically log files have a
.tmp filename extension. However, some log files may have another filename extension. If
necessary, you can rename the file so that it has a .tmp filename extension.
„ Data Collection Participation Database. The name and location of a Participants Report
Document (.prd) file, which is an XML file that specifies the connection string and the names
of the table and columns to be used.
„ Data Collection Scripting File. The name and location of the mrScriptMetadata file. Typically
these files have an .mdd or .dms filename extension.
„ In2data Database. The name and location of the .i2d file.
„ Quancept Definitions File (QDI). The name and location of the .qdi file.
„ Quancept Script File. The name and location of the .qqc file.
„ Quantum Specification. The location of the Quantum specification files.
„ Quanvert Database. The name and location of the qvinfo or .pkd file.
„ Routing Script File. The name and location of the routing script file.
„ SPSS Statistics File (SAV). The name and location of the .sav file.
„ Surveycraft File. The name and location of the .vq file.

Click Browse to select the file in the Open dialog box.


Open Metadata Read/Write. By default, the metadata is opened in read-only mode. Select this
option if you want to be able to write to it. When you open some types of data (for example, a
Quanvert database) the metadata is always opened in read-only mode.
Properties. Click this button to open the Metadata Properties dialog box, in which you can
specify the versions, language, context, and label type to use. For more information, see the
topic Data Link Properties: Metadata Properties on p. 73.
Case Data Type. Defines the type of case data. The drop-down list shows all of the types of case
data for which you have a case data source component (CDSC). The default options are:
„ ADO Database. Reads case data from an ActiveX Data Objects (ADO) data source.
„ Delimited Text File (Excel). Writes case data in tab-delimited format to a .csv file.
„ Data Collection Database (MS SQL Server). Reads and writes case data in a Data Collection
relational database in SQL Server. This option can be used to read data collected using IBM®
SPSS® Data Collection Interviewer Server.
„ Data Collection Log File. Selects the Log DSC, which enables you to read Data Collection
log files.

„ Data Collection XML Data File. Reads and writes case data in an XML file. Typically, you use
this option when you want to transfer case data to another location.
„ In2data Database. Reads case data from an In2data database (.i2d) file.
„ Quancept Data File (DRS). Reads case data in a Quancept .drs, .drz, or .dru file using the
QDI/DRS DSC.
„ Quantum Data File (DAT). Selects the Quantum DSC, which reads and writes case data in a
Quantum-format ASCII file.
„ Quanvert Database. Selects the Quanvert DSC, which reads data in a Quanvert database.
„ SPSS Statistics File (SAV). Reads and writes case data in an SPSS Statistics .sav file.
„ Surveycraft File. Reads case data from a Surveycraft data file.

Tip: If you have specified a Metadata Type and a Metadata Location, and the default data source
in your metadata refers to the case data that you want to connect to, you don’t need to specify
a Case Data Type or a Case Data Location.
Case Data Location. The name and location of the case data. The way you specify this depends on
the type of case data that you selected in the previous drop-down list:
„ ADO Database. The OLE DB connection string for the ADO data source. To build this string,
click Browse, which opens a second Data Link Properties dialog box in which you can choose
the options for your data source. For example, to connect to a Microsoft Access database or a
Microsoft Excel file, select Microsoft OLE DB Provider for ODBC Drivers in the Provider tab and
click the Build button in the Connection tab to build a connection string that uses the Machine
Data Source called “MS Access Database” or “Excel Files” as appropriate. If your data source
is a Microsoft SQL Server database that is not a Data Collection relational database, select
Microsoft OLE DB Provider for SQL Server in the Provider tab and enter the server name and
database name in the Connection tab. Then click OK to close the second Data Link Properties
dialog box and return to the Connection tab of the first Data Link Properties dialog box.
„ Delimited Text File (Excel). The name and location of the .csv file.
„ Data Collection Database (MS SQL Server). This must be an OLE DB connection string.
„ Data Collection Log File. The name and location of the log file. Typically log files have a
.tmp filename extension. However, some log files may have another filename extension. If
necessary, you can rename the file so that it has a .tmp filename extension.
„ Data Collection XML Data File. The name and location of the .xml file.
„ In2data Database. The name and location of the .i2d file.
„ Quancept Data File (DRS). The name and location of the .drs, .drz, or .dru file.
„ Quantum Data File (DAT). The name and location of the .dat file. If a .dau file is created, it has
the same name as the .dat file, but with a .dau filename extension.
„ Quanvert Database. The name and location of the qvinfo or .pkd file.
„ SPSS Statistics File (SAV). The name and location of the .sav file.
„ Surveycraft File. The name and location of the Surveycraft Validated Questionnaire (.vq) file.
The Surveycraft .qdt file, which contains the actual case data, must be in the same folder
as the .vq file.
Click Browse if you want to browse to the location of the case data in a dialog box.
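For reference, the OLE DB connection strings that the choices above produce follow the standard keyword=value format. The following sketch assembles strings of that form. The provider keywords (SQLOLEDB for the OLE DB Provider for SQL Server, MSDASQL for the OLE DB Provider for ODBC Drivers) are standard OLE DB provider names; the server, database, and DSN values are placeholders, not values from this guide. Python is used here purely for illustration and is not part of Base Professional.

```python
# Illustrative sketch: assembling OLE DB connection strings of the kind the
# Data Link Properties dialog builds. Server, database, and DSN names are
# placeholders.

def sql_server_connection_string(server, database, trusted=True):
    """Build a connection string for the Microsoft OLE DB Provider for SQL Server."""
    parts = [
        "Provider=SQLOLEDB",
        "Data Source=" + server,
        "Initial Catalog=" + database,
    ]
    if trusted:
        # Use the caller's Windows credentials rather than a SQL login
        parts.append("Integrated Security=SSPI")
    return ";".join(parts)

def odbc_machine_dsn_connection_string(dsn):
    """Build a connection string that uses a Machine Data Source (e.g.
    "MS Access Database") via the OLE DB Provider for ODBC Drivers."""
    return "Provider=MSDASQL;DSN=" + dsn

print(sql_server_connection_string("MyServer", "MyDatabase"))
# Provider=SQLOLEDB;Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI
```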
73

Base Professional

Case Data Project. This text box should be blank, unless you are connecting to one of the
following case data types:
„ ADO Database. If you are connecting to a Microsoft SQL Server database (that is not a Data
Collection relational database) or a Microsoft Access database, enter the name of the database
table that you want to use. If you are connecting to a Microsoft Excel file, enter the name of
the worksheet that you want to use, for example, Sheet1. Depending on the version of Excel
installed, you may have to add a dollar sign ($) after the worksheet name for the connection to
be successful, for example, Sheet1$.
„ Data Collection Database (MS SQL Server). Enter the name of the project that you want to use.
Test Connection. Click this button to test the connection and verify whether you have entered all
information correctly.

Data Link Properties: Metadata Properties

You use the Metadata Properties dialog box to define the version, language, context, and label type
that you want to use when you connect to a questionnaire definition (.mdd) file (also known as
IBM® SPSS® Data Collection Metadata Document file). You open this dialog box by clicking
the Properties button in the Metadata section on the Connection tab in the Data Link Properties
dialog box.

Version. Select the version or versions that you want to use. Questionnaire definition (.mdd) files
typically contain versions, which record any changes to the content of the questionnaire. Typically, when
the questionnaire changes (for example, a question or category is added or deleted) a new version
is created and when the changes are complete, the version is locked. The drop-down list box
displays all of the available versions plus three additional options:
„ All versions. Select this option if you want to use a combination (superset) of all of the
available versions. (This is sometimes called a superversion). When there is a conflict
between the versions, the most recent versions generally take precedence over the older
versions. For example, if a category label differs in any of the versions, the text in the latest
version will be used. However, the order of questions and categories is always taken from the
most recent version, and there is special handling of changes to loop definition ranges and the
minimum and maximum values of variables, similar to that described for the IBM® SPSS®
Data Collection Metadata Model Version Utility. Use the Multiple Versions option if
you want to change the order of precedence.

„ Multiple versions. Select this option if you want to use a combination (superset) of two or
more specific versions. For more information, see the topic Data Link Properties: Metadata
Versions on p. 74.
„ Latest version. Select this option if you want to use the most recent version.

Using a combination of some or all of the versions is useful when, for example, you want to export
case data for more than one version and there have been changes to the variable and category
definitions that mean that case data collected with one version is not valid in another version.
Selecting all of the versions for which you want to export the case data means that, generally,
you can export the case data collected with the different versions at the same time without
encountering validity errors due to the differences between the versions. However, depending on
the version changes, some validity errors may still be encountered.

Language. Select the language that you want to use. You can change the language only if
there is more than one language defined.

Context. Select the user context that you want to use. The user context controls which texts
are displayed. For example, select Question to display question texts, or Analysis to display shorter
texts suitable for displaying when analyzing the data.

LabelType. Select the label type that you want to use. You should generally select the
Label option.

Data Link Properties: Metadata Versions

You use the Metadata Versions dialog box when you want to select two or more versions of the
questionnaire definition (.mdd) file. You open this dialog box by selecting Multiple Versions in the
Version drop-down list box in the Metadata Properties dialog box.

Versions. The Metadata Versions dialog box lists all of the versions that are available. Click Select
All to select all of the versions. Click Clear All to deselect all of the versions and then select the
versions you want individually. For each version, the following information is shown:
„ Version. The version name. Version names are made up of a combination of the major
version and minor version numbers in the form Major#:Minor#, where Major# is the number
of the major version and Minor# is the number of the minor version. Changes in the major
version number indicate that the structure of the case data has changed (for example, variables
or categories have been added or deleted) whereas changes in the minor version number
indicate that the changes affect the metadata only (for example, a question text has been
changed). Version names are created automatically when a version is locked. A version that
has not been locked is always called LATEST.
„ Created by. The ID of the user who created the version.
„ Created Date. This shows the date and time at which the version was locked.
„ Description. When present, this is a text that gives information about the version.

The order in which you select the versions controls the order of precedence that will generally be
used when there is a conflict between the versions. For example, if a category label differs in the
versions you select, the text in the version with the higher precedence will be used. However, the
order of questions and categories is always taken from the most recent version, and there is special
handling of changes to loop definition ranges and the minimum and maximum values of variables,
similar to that described for the IBM® SPSS® Data Collection Metadata Model Version Utility.

If you want the most recent version to take precedence, start selecting the versions at the top
and work down the list. If you want the oldest version to take precedence, start at the bottom
and work up the list.

Tip. You can select individual or multiple versions by pressing Ctrl or Shift while you click,
provided the mouse is in the Description or Date/Time Locked column. You can then click in the
Version column to select or deselect the check boxes for all of the versions that you have selected.
You may find this useful when you are working in a file that has many versions.

Selected Versions. Displays an expression that represents the selection you have chosen. You can
optionally select the versions you want to use by typing an expression directly into this text box.
The order of precedence is taken from the order in which versions are specified, with the rightmost
versions taking precedence over the leftmost.
Syntax Description
.. Specifies all versions
v1, v2, v3, v4 Specifies individual versions
v1..v2 Specifies an inclusive range of versions
^v1..v2 Excludes a range of versions
Specifies the most recent version.

You can specify a combination of individual versions, and ranges to include or exclude. For
example, the following specifies version 3:2 and all versions from 4:5 to 7:3 with the exception
of versions 7 through 7:2:

3:2, 4:5..7:3, ^7..7:2
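As an informal illustration of how an expression of this syntax selects versions, the following sketch evaluates one against an ordered list of version names. This is not the Data Model's actual parser, and it models only which versions end up selected, not the precedence handling between them; the version lists used are hypothetical.

```python
# Illustrative sketch (not the product's parser) of evaluating a version
# selection expression against an ordered list of version names.

def parse_version(name):
    """Split "Major:Minor" into a comparable (major, minor) tuple; "7" means 7:0."""
    major, _, minor = name.partition(":")
    return (int(major), int(minor or 0))

def evaluate(expression, available):
    """Return the versions from `available` that `expression` selects."""
    order = {parse_version(v): v for v in available}
    keys = sorted(order)
    selected = set()
    for term in (t.strip() for t in expression.split(",")):
        exclude = term.startswith("^")
        if exclude:
            term = term[1:]
        if term == "..":                      # all versions
            chosen = set(keys)
        elif ".." in term:                    # inclusive range v1..v2
            lo, hi = (parse_version(v) for v in term.split(".."))
            chosen = {k for k in keys if lo <= k <= hi}
        else:                                 # a single version
            chosen = {parse_version(term)}
        selected = selected - chosen if exclude else selected | chosen
    return [order[k] for k in keys if k in selected]

versions = ["3:2", "4:5", "5:0", "7:0", "7:1", "7:2", "7:3"]
print(evaluate("3:2, 4:5..7:3, ^7..7:2", versions))
# ['3:2', '4:5', '5:0', '7:3']
```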

Data Link Properties: Advanced

You use the Advanced tab in the Data Link Properties dialog box to define additional connection
options. When you select IBM® SPSS® Data Collection DM-n OLE DB Provider (where n is the
version number) on the Provider tab, a Data Collection-specific Advanced tab is displayed.

Metadata Source when Location is Different. The
Data Model uses the DataSource object to store details about case data that is associated with an
MDM Document (.mdd file). This option specifies what should happen if there is no DataSource
object in the MDM Document with the same case data type whose location matches the case
data location specified on the Connection tab:
„ Use the Data Source (except for location). This is the default behavior. Select this option if
you want to use the first DataSource object of the same type that is encountered and do not
want to store the new case data location in it.
„ Use the Data Source and store the new location. Select this option if you want to use the first
DataSource object of the same type that is encountered and store the new case data location
in it.

„ Create a new Data Source. Select this option if you want to create a new DataSource object.
This is useful when you do not want to use the same variable names when exporting to SPSS
.sav as used previously.
„ Raise an Error. Select this option if you want the connection to fail.

For more information, see the IBM® SPSS® Data Collection Developer Library.

Categorical variables. Specifies whether to display the categories of categorical variables as
numeric values or names.

Preserve source column definitions. Select this option if you want the native objects in the
underlying database to be exposed directly as Data Model variables without any interpretation.
For example, if you select this option, a multiple dichotomy set in a .sav file would be represented
as several long or text variables instead of one categorical variable.

Reading categorical data. Specifies whether to display the categories of categorical variables as
numeric values or names.

Writing data. Specifies whether the CDSC deletes the output data, if it exists, before writing
new data. The options are as follows:
„ Append to existing data. This is the default behavior. Select this option if you want to append
to the existing data if it exists.
„ Replace existing data. Select this option if you want to delete the existing data and schema.
This will allow data to be created with a different schema.
„ Replace existing data but preserve schema. Select this option if you want to delete the existing
data, but preserve the existing schema if possible. Note that for some CDSCs, such as SPSS
SAV and Delimited Text, the schema will not be preserved because deleting the data results
in the loss of the schema.
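The three write modes above can be sketched as follows. This is an illustrative model of the behavior described, not the actual CDSC implementation, and the toy in-memory store is purely hypothetical.

```python
# Illustrative model (not CDSC code) of the three "Writing data" options:
# append, replace, and replace-but-preserve-schema.

def open_for_writing(existing, mode):
    """Return the store that new case data will be written into."""
    if mode == "append":                     # keep existing data and schema
        return existing
    if mode == "replace":                    # drop both data and schema
        return {"schema": None, "rows": []}
    if mode == "replace_keep_schema":        # drop data, keep schema if possible
        return {"schema": existing["schema"], "rows": []}
    raise ValueError("unknown write mode: %s" % mode)

store = {"schema": ["gender", "age"], "rows": [["male", "25-34"]]}
print(open_for_writing(store, "replace_keep_schema"))
# {'schema': ['gender', 'age'], 'rows': []}
```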

Validation. Select this option if you want case data to be validated before it is written. Deselect
it if you do not want any validity checks to be performed on case data before it is written.

Allow Dirty. Select this option if you have chosen data validation and you want to run in
dirty mode. This means that data is accepted even if it has some inconsistencies. Deselect this
option to run in clean mode, which means that data is rejected if it contains any inconsistencies
(for example, if more than one response has been selected in answer to a single response question).
The validation that is performed varies according to the CDSC that is selected.
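The clean versus dirty distinction can be sketched with the single-response rule mentioned above as the example check. This is an illustrative model only, not the Data Model's actual validation code, and the case record and question names are hypothetical.

```python
# Illustrative model (not the actual validation code) of clean vs dirty
# mode, using the single-response rule as the example inconsistency.

def validate_case(case, single_response_questions, allow_dirty=False):
    """Return (accepted, problems) for one case data record.

    In clean mode (allow_dirty=False) any inconsistency rejects the case;
    in dirty mode the case is accepted and problems are only reported.
    """
    problems = []
    for question in single_response_questions:
        answers = case.get(question, [])
        if len(answers) > 1:
            problems.append("%s: %d responses to a single response question"
                            % (question, len(answers)))
    accepted = allow_dirty or not problems
    return accepted, problems

case = {"gender": ["male", "female"], "age": ["25-34"]}
print(validate_case(case, ["gender", "age"]))                     # rejected in clean mode
print(validate_case(case, ["gender", "age"], allow_dirty=True))   # accepted in dirty mode
```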

User name. If required, enter your User ID.

Password. If required, enter your password.

Data Link Properties: All

You can use the All tab in the Data Link Properties dialog box to edit all of the initialization
properties that are available for the Provider. However, generally you define the values for the
properties on the Connection and Advanced tabs.

The All tab lists all of the initialization properties and you can edit the values by selecting
a property and clicking Edit Value.

For detailed information about the connection properties, see Connection Properties in the IBM®
SPSS® Data Collection Data Model section of the DDL.

Creating case data

When you run or debug an interview, or run the interview using Auto Answer, you can choose
to write the answers, also known as the Case Data, to an output format supported by the IBM®
SPSS® Data Collection Data Model. At present, the formats you can write to are IBM SPSS
Data Collection Data File, IBM® SPSS® Statistics SAV file, IBM® SPSS® Data Collection
RDB database, and Data Collection XML file.
You define a target location for your case data by adding a data source to your interview script
(.mdd) file. Your script can contain more than one data source, so that you can easily change from
using an IBM SPSS Data Collection Data File to an RDB database, for example.

Note: You can choose to mark your case data as Live or Test data. In the routing section of your
interview script, you can use the IsTest property of the IInterviewInfo object to make your code
execution conditional on whether you have marked the data as live or test data.

Creating case data

E Open the Tools menu, and make sure that Write To Database is selected.

E If you have not already configured at least one data source, the Configure Data Sources dialog
displays, allowing you to add new data sources or edit existing ones. If you have already defined a
data source, skip the next six steps.

E Click Add... to define a new data source.

The Data Link Properties dialog will appear.

E In the Case Data Type drop-down list, select IBM SPSS Data Collection Data File, SPSS Statistics
File (SAV), Data Collection Database (MS SQL Server), or Data Collection XML Data File.

E Click on the Browse button to the right of the Case Data Location edit box.

E If you selected “IBM SPSS Data Collection Data File”, “SPSS Statistics File (SAV)” or “Data
Collection XML Data File” as your case data source, an Open File dialog appears. Enter the name
and location of a new or existing .ddf, .sav, or .xml file, as appropriate, and click Open.

E If you selected “Data Collection Database (MS SQL Server)” as your case data source, enter
the server name, database name, and any other information that is required for your server
environment. Click OK.

E In the Data Link Properties dialog, click OK.

E From the Tools menu, choose Options, and change the Collect Live Data option to True or False as
required.

E Start your interview using F5 (debug), F10 (single step), or F6 (Auto Answer).

The information about your case data source will be saved in your interview script (.mdd) file, and
you will not be prompted for that information again.

The interview will now run and collect the case data in the data source that you specified.

Configuring data sources

After running an interview using Auto Answer, details of the questions and answers are listed in
the Answer Log pane. You can save these details to a data source.

Note: When working with questionnaire files that are opened from an IBM® SPSS® Data
Collection Interviewer Server, auto answer always generates a <project name>.ddf file in the
User_Data directory (located in the user project directory). A data source definition is not
provided.

Adding a data source definition

You might want to add another case data source to your interview script if, for example, you have
been using an IBM SPSS Data Collection Data File to develop your script and you now want to
change to using an RDB database. You might also need to use this option if your script was
created outside of IBM® SPSS® Data Collection Base Professional and the default data source is
not an IBM SPSS Data Collection Data File, an SPSS Statistics SAV file, a Data Collection XML
file, or an RDB database.

The Configure Data Source dialog provides options for adding, editing, and removing data sources.

E From the menu, choose


Tools > Auto Answer > Configure Data Source

This opens the Configure Data Source dialog. Click Add... to open the Data Link Properties dialog.

E In the Case Data Type drop-down list, select IBM SPSS Data Collection Data File, SPSS Statistics
File (SAV), Data Collection Database (MS SQL Server), or Data Collection XML Data File. Note that
your interview script can contain only one data source of each case data type, so select only a case
data type that your script does not already have.

E Click on the Browse button to the right of the Case Data Location edit box.

E If you selected IBM SPSS Data Collection Data File, SPSS Statistics File (SAV), or Data Collection
XML Data File as your case data source, an Open File dialog appears. Enter the name and location
of a new or existing .ddf, .sav, or .xml file, as appropriate, and click Open.

E If you selected Data Collection Database (MS SQL Server) as your case data source, enter the server
name, database name, and any other information that is required for your server environment.
Click OK.

E In the Data Link Properties dialog, click OK.

E Enter the required information on each tab of the Data Link Properties dialog and click OK when
finished. For more information, see the topic Data Link Properties dialog box on p. 68.

E Click OK to exit the Configure Data Source dialog.



Selecting a different data source definition

E Select the appropriate data source from the Configure Data Source dialog’s Data Source list, then
click Set As Current or press Alt+S. The selected data source is set as the current data source.
E Click OK to exit the Configure Data Source dialog.

Editing a data source definition

E Select the appropriate data source from the Configure Data Source dialog’s Data Source list, then
click Edit... or press Alt+E. The Data Link Properties dialog displays.
E Enter the required information on each tab of the Data Link Properties dialog and click OK when
finished. For more information, see the topic Data Link Properties dialog box on p. 68.
E Click OK to exit the Configure Data Source dialog.

Removing a data source definition

E Select the appropriate data source from the Configure Data Source dialog’s Data Source list, then
click Remove or press Alt+R. The selected data source is removed.
E Click OK to exit the Configure Data Source dialog.

Accessing sample management data

If your interview will use Sample Management when activated in IBM® SPSS® Data Collection
Interviewer Server, and the routing section of your interview script (.mdd) file accesses sample
data, you can test your interview script in IBM® SPSS® Data Collection Base Professional
before activating it.
In Base Professional, Sample Management data is stored in a Sample User XML (.xsu) file,
which contains a single sample record with one or more fields. The records and fields are defined
using XML syntax. Each field has a name, a type, and a value. Note that you cannot have more
than one sample record in a .xsu file.

The following sample record is from a file called DefaultSample.xsu, which by default can be
found in the [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base Professional\Interview
folder. The sample record contains three fields called ID, UserName and Password.
<SampleRecord ID="ID1">
  <SampleFields>
    <SampleField>
      <name>ID</name>
      <type>8</type>
      <value>ID1</value>
    </SampleField>
    <SampleField>
      <name>UserName</name>
      <type>8</type>
      <value>Name1</value>
    </SampleField>
    <SampleField>
      <name>Password</name>
      <type>8</type>
      <value>Password1</value>
    </SampleField>
  </SampleFields>
</SampleRecord>

When you create your own sample fields, you must give each field a type. A list of the valid
values for the <type> element is shown in the following table. Make sure that the types in your
Sample User XML file match the equivalent SQL field types for the sample table (sometimes
known as the participants table) that your interview will use after activation.
Value of <type> Equivalent SQL Field Type
2 Smallint
3 Int
7 Datetime
8 Char, Text, Ntext, or Nvarchar
11 Bit
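The table above can also be checked programmatically. The following sketch parses a sample record in the .xsu format shown earlier (the three fields mirror DefaultSample.xsu) and maps each field's <type> code to its SQL equivalent. Python is used here purely for illustration; it is not part of Base Professional.

```python
# Illustrative sketch: parse a sample record in the .xsu format shown above
# and map each field's <type> code to its equivalent SQL field type.
import xml.etree.ElementTree as ET

SQL_TYPES = {          # <type> codes from the table above
    2: "Smallint",
    3: "Int",
    7: "Datetime",
    8: "Char, Text, Ntext, or Nvarchar",
    11: "Bit",
}

XSU = """\
<SampleRecord ID="ID1">
  <SampleFields>
    <SampleField><name>ID</name><type>8</type><value>ID1</value></SampleField>
    <SampleField><name>UserName</name><type>8</type><value>Name1</value></SampleField>
    <SampleField><name>Password</name><type>8</type><value>Password1</value></SampleField>
  </SampleFields>
</SampleRecord>
"""

def read_sample_fields(xml_text):
    """Return a list of (name, type code, value) tuples from a sample record."""
    record = ET.fromstring(xml_text)
    fields = []
    for field in record.iter("SampleField"):
        name = field.findtext("name")
        code = int(field.findtext("type"))
        if code not in SQL_TYPES:
            raise ValueError("unknown <type> code %d for field %s" % (code, name))
        fields.append((name, code, field.findtext("value")))
    return fields

for name, code, value in read_sample_fields(XSU):
    print("%s = %r (SQL type: %s)" % (name, value, SQL_TYPES[code]))
```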

For an example of the code you need to write to access the data in the sample record, see
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Interview\Sample Management -
Dimensions script\Basic\Basic.mdd, which is intended to be used with DefaultSample.xsu. The
relevant parts of the script are shown below. When the script is run, the browser should display
the message “Welcome to the Drinks Survey Name1”, as Name1 is the value of the UserName
field in DefaultSample.xsu.
Metadata(en-US, Question, label)

...

Welcome "Welcome to the Drinks survey {UserName}." info;

...

End Metadata

Routing(Web)

' Get the UserName from the sample record
Welcome.Label.Inserts["UserName"].Text = IOM.SampleRecord.Item["UserName"]
Welcome.Show()

...

End Routing

Using sample management

1. Create a .xsu file and add a sample record. You can use DefaultSample.xsu as a template for your
own sample record.

2. From the Tools menu, choose Options and change the Sample Management Record option to the
name and location of your .xsu file.
3. Run your interview script. The script should now be able to access the sample data in your .xsu file.

Sample scripts

For an example of using Sample Management in Base Professional, open


[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Interview\CATI\MultiMode1\MultiMode1.mdd.
The script demonstrates how code execution can be made conditional on the content of the sample
data, and is intended to be used with either CATIMode.xsu or WebMode.xsu, which are both
located in the same folder as the script. The file MultiMode1ReadMe.htm, which is also in the
same folder as the script, contains more information about using the script.
Further examples of using Sample Management can be found in the
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Interview\Sample Management -
IBM® SPSS® Data Collection script folder.

Testing quotas

If your interview will use quota control, you can use IBM® SPSS® Data Collection Base
Professional to test your quota categories and targets before you activate your interview in IBM®
SPSS® Data Collection Interviewer Server. Note that Base Professional can only test a quota that
is based on the answers to questions in your interview script (.mdd) file. Quotas based on sample
management data are ignored by Base Professional.

Overview

You define quotas by creating a quota definition (.mqd) file. If you have already created a quota
definition file for your Interviewer Server project, you might want to use that file for testing in
Base Professional. Alternatively, you can create a new file from within Base Professional.
The quota targets and counts are stored in a quota database. If you have already activated your
interview and you selected the activation option to use quotas, a suitable quota database might
already exist on your Interviewer Server cluster. Alternatively, Base Professional can create a
quota database for you using any edition of Microsoft SQL Server 2005, including the free SQL
Server 2005 Express Edition (http://www.microsoft.com/sql/editions/express/default.mspx) and
Microsoft SQL Server Desktop Engine (MSDE).
To test a quota, you specify the quota database name and location in the Base Professional
options, and then select Debug Quotas from the Base Professional Tools menu. This synchronizes
the categories and targets in the quota definition file with the quota database. When you now test
your interview, Base Professional increments the counts in the quota database. To check the quota
counts, you must stop your interview script and run an mrScriptBasic file to produce a report.
The rest of this topic describes the following procedures in more detail:
„ Creating or updating a Quota Definition file
„ Testing a quota
„ Checking the quota counts
„ Resetting the quota counts
„ Changing the quota targets
„ Accessing a quota from an interview script

Creating or updating a Quota Definition file

E In Base Professional, create or open an interview script (.mdd) file.

E From the Base Professional menu, choose:


Tools > Quota

This opens the IBM® SPSS® Data Collection Quota Setup Window.

E In the Quota Setup Window, define the quota categories and targets that you want to use for your
interview. If the quota definition file already existed, you can amend the existing categories
and targets.
For more information about using the Quota Setup Window and defining quotas, search for the
topic “Quota Setup” in the Interviewer Server User’s Guide (UserGuide.chm::/Intug-features.htm).
Note that when testing a quota, Base Professional ignores any quotas that are
based on sample management data.

E In the Quota Setup Window, save the quota definition (.mqd) file with the same name (except for
the extension) and to the same location as your interview script (.mdd) file.

Testing a quota

E From the Base Professional menu, choose:


Tools > Options

This opens the Base Professional Options dialog box.

E Change the “Debug Quotas: Data Source Name” option to the name of the SQL Server instance
that contains an existing quota database that you want to use. Alternatively, specify the name of
the SQL Server instance in which you want Base Professional to create a new quota database.
If the SQL Server instance is the default instance on the computer in which SQL Server is
installed, set the value of “Debug Quotas: Data Source Name” to the name of the computer.
If the SQL Server instance is a named instance, the format of the “Debug Quotas: Data
Source Name” option must be <computer_name>\<instance_name>, for example,
MyComputer\MyNamedInstance01.
Note: If you have installed SQL Server 2005 Express Edition to use for testing quotas, be aware
that the default installation of SQL Server 2005 Express Edition creates a named instance called
SQLExpress rather than a default instance.

E Change the “Debug Quotas: Database Name” option to the name of the existing or new quota
database.

E Click OK to close the Base Professional Options dialog box.

E In Base Professional, open your interview script (.mdd) file if it is not already open.

E From the Base Professional menu, choose:


Tools > Debug Quotas

If the quota database does not already exist, Base Professional creates it. There might be a delay
while this happens. The Debug Quotas menu option should now be selected.

When you next test your interview script, Base Professional will maintain completed and pending
counts in the quota database. If the database already existed, Base Professional will increment
the existing counts.

Checking the quota counts

E If your interview script is running in Base Professional, complete or stop the interview.

E In Base Professional, open the mrScriptBasic script called


DebugQuotaReport.mrs, which by default is installed in the
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\General\mrScriptBasic folder.

E Edit the mrScriptBasic script to change the values of the following parameters:

Parameter Name Value


PROJECT_NAME The name of your interview script file, without the
.mdd extension.
DEBUGQUOTA_DATASOURCE The value you assigned to “Debug Quotas: Data
Source Name” in Base Professional Options. See
“To Test a Quota” above for more information about
this option.
DEBUGQUOTA_DATABASE The value you assigned to “Debug Quotas: Database
Name” in Base Professional Options. See “To Test
a Quota” above for more information about this
option.

E Run the mrScriptBasic script.

After a brief pause, an Excel worksheet opens. For each quota category, the worksheet shows the
count of completed and pending interviews and the target value.

Resetting the quota counts

E Make sure that the Base Professional options refer to the quota database that you want to reset.
See “To Test a Quota” above for more information about these options.

E In Base Professional, open your interview script (.mdd) file if it is not already open, and select
Debug Quotas from the Tools menu.

E From the Base Professional menu, choose:


Tools > Reset Debug Quota Counters

After a brief pause, a message confirms that all the counts in the quota database have been set to
zero.

Changing the quota targets

E Make sure that the Base Professional options refer to the quota database whose targets you want to
change. See “To Test a Quota” above for more information about these options.

E If the Debug Quotas menu option (in the Tools menu) is already selected, clear the selection.

E Follow the steps in “To Create or Update a Quota Definition File” above to open the quota
definition (.mqd) file.

E Amend the targets as required, then save and close the quota definition file.

E From the Base Professional menu, choose:


Tools > Debug Quotas

After a brief pause, the new targets are written to the quota database.

Accessing a quota from an interview script

You access the quota from the routing section of your interview script (.mdd) file by using the
Quota Object Model. For example, the following mrScriptBasic code adds the current respondent
to the relevant pending count in the quota group called “Gender”:
Dim quota_pend_result

Gender.Ask()

' Make sure that the IBM SPSS Data Collection Base Professional Debug Quotas menu option has been selected
If Not IsNullObject(QuotaEngine) Then
    ' Increment the pending count
    quota_pend_result = QuotaEngine.QuotaGroups["Gender"].Pend()
End If

The IBM® SPSS® Data Collection Developer Library contains several example interview script
(.mdd) files that demonstrate the use of quotas. By default, these files are installed in folder
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Interview\Quotas.

Comparing interview scripts

You can use IBM® SPSS® Data Collection Base Professional to do the following:
„ Compare different metadata versions of your interview script (.mdd) file.
„ Compare your interview script file with another interview script file.

An IBM® SPSS® Data Collection Data Model accessory called IBM® SPSS® Data Collection
Metadata Model - Compare is used to carry out the comparison. To start Metadata Model
- Compare, open a .mdd file in Base Professional and then choose Compare from the Base
Professional Tools menu. When the Metadata Model - Compare window opens, the “Original
file” settings will automatically have been set to refer to your .mdd file. For more information
about setting up and using Metadata Model - Compare, see the MDM Compare topic in the IBM®
SPSS® Data Collection Developer Library.
A metadata version is added to your interview script file each time that you activate it. You can
also add metadata versions manually to your interview script file by using MDM Explorer. It is
not possible to add metadata versions manually in Base Professional.

Validating an interview template file

If you have created templates to control the appearance of your interview, you can use the
HTML Tidy utility in IBM® SPSS® Data Collection Base Professional to ensure that the HTML
statements in your interview templates are valid.
To work successfully, interview template files must contain XHTML, which is sometimes
referred to as “well formed” HTML and has strict rules governing the syntax of HTML statements.
However, you can write your interview templates in standard HTML and then use HTML Tidy to
convert them to XHTML.
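For example, XHTML requires lowercase element names, quoted attribute values, and explicitly closed elements. The fragment below is a hypothetical illustration of the kind of conversion HTML Tidy performs, not output from an actual template file:

```html
<!-- Standard HTML: uppercase tags, unquoted attributes, unclosed elements -->
<IMG SRC=logo.gif>
<BR>
<P>Please answer the question below

<!-- The same fragment as well-formed XHTML -->
<img src="logo.gif" />
<br />
<p>Please answer the question below</p>
```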

Using HTML Tidy

1. Open your interview template file in Base Professional.

2. From the Tools menu, choose HTML Tidy.

3. Select Current File.

4. Click OK.

Activating an interview

When you have finished developing and testing your interview, you can activate it so that it can
be used for interviewing in version 6.0.1 of IBM® SPSS® Data Collection Interviewer Server.

Activating an interview

E From the Tools menu, choose Activate.

This opens the activate Login dialog.

E Complete the fields in the Login dialog as follows, depending on whether you are connected to
the same local network as the Interviewer Server cluster and have access to the FMRoot shared
folder on the primary DPM server:
• If you are connected to the local network. In “Destination Server or IBM® SPSS® Data
Collection Interviewer Server Administration URL”, enter the name of the primary DPM
server in the target Interviewer Server cluster, or select the name from the drop down list if you
have activated to this server before. Then select Login using my Windows account or Login As
IBM® SPSS® Data Collection User1, and if required enter the user name and password to use.
• If you are not connected to the local network. In “Destination Server or Interviewer
Server Administration URL”, enter the URL that you use to login to Interviewer
Server Administration on the target Interviewer Server cluster, for example,
http://primary_dpm_server_name/SPSSMR/DimensionNet/default.aspx. Alternatively, if you
have activated to this URL before, you can select it from the drop down list. Then select Login
As Data Collection User and enter the user name and password to use.

E Click Login.

If the login was successful, the activation dialog will start and prompt you for further information.
For more information on completing the dialog, see Using the Activation Option in IBM SPSS
Data Collection Base Professional.

Note: The Login dialog might not appear if IBM® SPSS® Data Collection Base Professional’s
Show Login Dialog option has been set to False.

Activating an interview from the Start Menu

You can also activate an interview from the Windows Start menu without having to start Base
Professional. This option might be useful for organizations in which activation is performed by
someone other than the scriptwriters. To activate from the Start menu, choose the following
options:
Programs > Data Collection > IBM® SPSS® Data Collection Base Professional 6.0.1 > Activate

This opens the activate Login dialog. To complete the activation dialog, follow the instructions in
“Activating an interview” above.

For information about customizing the Activate shortcut in the Windows Start menu, see
Activating Projects from the Command Line.
Note: If you select Login As Data Collection User, the user name that you enter will automatically
have access to the interview (project) in Interviewer Server Administration.

Displaying different languages

When using IBM® SPSS® Data Collection Base Professional to develop interviews, you can
display different languages in the following ways:
• Change the language that is used in window titles, menu names, ToolTips, and other elements
of the Base Professional graphical user interface (GUI). For more information, see the topic
IBM SPSS Data Collection Base Professional in Other Languages on p. 53.
• Change the browser pages that appear when you run an interview in Base Professional. By
default, these two pages contain the English text “No interviews are currently running” and
“Executing routing logic”. However, you can create individual browser pages for any number
of different languages. The pages whose language matches the system locale setting in your
computer’s operating system will then appear when you run an interview.

Creating browser pages for different languages

1. In the Interview folder, which by default can be found in the
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base Professional folder, create a sub
folder whose name is the four-letter language code for the required language. Typically,
you will want to create a folder that matches the system locale for your computer’s
operating system. For example, for French-Canada, you would create a folder called
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base Professional\Interview\fr-CA.

2. Copy the following five files from [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base
Professional\Interview to the sub folder that you created in the previous step:
• BrowserDefault.htm
• InterviewDefault.htm
• RedArrow.gif
• Routing.gif
• SPSSBoxesTop.gif

3. In Windows Explorer, navigate to your new sub folder, right-click BrowserDefault.htm,
choose Properties from the shortcut menu, and clear the Read-only checkbox. Do the same for
InterviewDefault.htm.

4. You can now edit the BrowserDefault.htm and InterviewDefault.htm files in your new sub folder
to change the text to the language you require. To edit these files, you can use either a text editor
such as Notepad or an HTML editor. Alternatively, you can replace these files with your own
HTML files that have the same file names.

5. After you have saved the two files, quit and restart Base Professional.

When you now run an interview, Base Professional will use the browser pages in the sub folder
whose name matches the system locale setting. To change back to the English browser pages,
rename the sub folder that you created so that its name no longer matches the system locale
setting and restart Base Professional.

Setting the default language for interviews

When you create an interview script (.mdd) file, the current language of the metadata is set, by
default, to the system locale setting in your computer’s operating system. If you do not have a
language pack installed, you can specify the default language for new interview scripts by creating
a configuration (.exe.config) file for IBM® SPSS® Data Collection Base Professional.

Note: Do not follow the instructions below if you have a language pack installed. It is not possible
to set a default language for new interview scripts that is different from the language installed
by a language pack.

Setting the default language for new interview scripts

1. If you have changed any of the default options in Base Professional, make a note of your changes.
To do this, choose Options from the Tools menu and make a note of the names and values of all
options whose values are displayed in bold text.

2. Quit Base Professional.

3. Delete the files DockingLayout.bin, DockingModes.bin, mrStudioOptions.xml, and
ToolbarLayout.bin from the C:\Documents and Settings\<your Windows user name>\Application
Data\IBM\SPSS\DataCollection\6\Base Professional folder.

4. In the Base Professional installation folder, which by default is
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Base Professional, use Notepad or a similar
text editor to create a file called mrStudio.exe.config.

5. Insert the following XML elements into mrStudio.exe.config, replacing en-us with the code for
the language that you want to set as the default:

<configuration>
<appSettings>
<add key="Culture" value="en-us" />
</appSettings>
</configuration>

6. Save mrStudio.exe.config and start Base Professional.

7. Using your notes from the first step, amend any of the default options in Base Professional.

When you now create an interview script in Base Professional, the current language of the
metadata is set to the language that you specified in the mrStudio.exe.config file.
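For example, to make French (France) the default language for new interview scripts, mrStudio.exe.config would contain:

```xml
<configuration>
  <appSettings>
    <add key="Culture" value="fr-FR" />
  </appSettings>
</configuration>
```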

Setting the default language back to your system locale setting

1. If you have changed any of the default options in Base Professional, make a note of your changes.
To do this, choose Options from the Tools menu and make a note of the names and values of all
options whose values are displayed in bold text.

2. Quit Base Professional.

3. Delete the files DockingLayout.bin, DockingModes.bin, mrStudioOptions.xml, and
ToolbarLayout.bin from the C:\Documents and Settings\<your Windows user name>\Application
Data\IBM\SPSS\DataCollection\6\Base Professional folder.

4. Delete the mrStudio.exe.config file that you previously created.

5. Start Base Professional.

6. Using your notes from the first step, amend any of the default options in Base Professional.

The IBM SPSS Data Collection Base Professional menu

The IBM® SPSS® Data Collection Base Professional menu contains the following options:
Menu Submenu Submenu Keyboard Shortcut
File New File Ctrl+N
Project Alt+F, N, P
From Template Alt+F, N, T
Workspace Ctrl+W
Open File Ctrl+O
Workspace Alt+F, O, W
Save Ctrl+S
Save As Alt+F, A
Close Ctrl+F4
Close All Alt+F, L
Save All Ctrl+Shift+S
Add Current Item to Workspace Alt+F, D
Add New Item to Workspace Alt+F, T
Add Existing Item to Workspace Alt+F, G
Save Workspace Alt+F, W
Save Workspace As Alt+F, K
Import Metadata Alt+F, I
Export Metadata Alt+F, M
Store to Repository Alt+F, P, S
Retrieve from Repository Alt+F, P, R
Print Ctrl+P
Print Preview Alt+F, V
Page Setup Alt+F, U
Recent Files Alt+F, F
Recent Workspaces Alt+F, R
Exit Alt+F, X
Edit Undo Ctrl+Z
Redo Ctrl+Y
Cut Ctrl+X
Copy Ctrl+C
Paste Ctrl+V
Delete Delete
Find Ctrl+F
Find Next F3
Replace Ctrl+H
Go To Ctrl+G
Select All Ctrl+A
Advanced Increase Line Indent Ctrl+Shift+I
Decrease Line Indent Ctrl+Shift+D
Comment Selection Ctrl+Shift+C
Uncomment Selection Ctrl+Shift+U
Tabify Selection Alt+E, V, T
Untabify Selection Alt+E, V, U
Folding Expand All Foldings Alt+E, O, E
Collapse All Foldings Alt+E, O, C
ScriptAssist Show Members Alt+E, S, M
Expand Macro Ctrl+M
Parameter Info Ctrl+Shift+O
Bookmarks Toggle Bookmark Ctrl+Shift+B
Next Bookmark Ctrl+Shift+N
Previous Bookmark Ctrl+Shift+P
Clear All Bookmarks Ctrl+Shift+A
View Workspace Alt+0
Types Alt+1
Functions Alt+2
Breakpoints Alt+3
Locals Alt+4
Expressions Alt+5
Output Alt+6
Metadata Explorer Alt+7
Auto Answer Start Auto Answer Alt+T, U
Write Data to Data Source Alt+T, W
Configure Data Source Alt+T, F
Browser Alt+9
Repository Alt+F1
Show All Alt+V, S
Hide All Alt+V, H
Pin All Alt+V, P
Unpin All Alt+V, U
Toolbars File Alt+V, T, F
Edit Alt+V, T, E
View Alt+V, T, V
Debug Alt+V, T, D
Formatting Alt+V, T, F
Workspace Alt+V, T, W
Interview Options Alt+V, T, I
Contexts Analysis Alt+V, C
CARDCOL
Debug Start F5
Restart Ctrl+Shift+F5
Start With Auto Answer F6
Start Without Debugging Ctrl+F5
Show Next Statement Alt+D, H
Single Step F10
Toggle Breakpoint Ctrl+B
Clear All Breakpoints Ctrl+Shift+F9
Tools Macros Alt+T, M
Connection String Builder Alt+T, C
Add Routing Context Alt+T, R
Remove Routing Context Alt+T, M
Label Manager Alt+T, L
Metadata Model Compare Alt+T, P
HTML Tidy Alt+T, H
Write to Database Alt+T, W
Debug Quotas Alt+T, Q
Reset Debug Quotas Counter Alt+T, D
Auto Answer Data Generation Alt+T, U
Quota Alt+T, Q
Deploy Locally Alt+T, Y
Activate Alt+T, A
Activation Console Alt+T, V
Store to Question Repository Alt+T, S
Configure Question Repository Connections Alt+T, C
Configure Project Templates Alt+T, T
Options Alt+T, O
Window Split Alt+W, S
Next Window Alt+W, N
Previous Window Alt+W, P
Help Contents F1
About Base Professional Alt+H, A

IBM SPSS Data Collection Base Professional toolbars


By default, the seven toolbars in IBM® SPSS® Data Collection Base Professional are docked at
the top and right-hand edges of the window. Each toolbar can be moved individually, so that you
can dock it on any of the four edges of the window, or leave it “floating”. To move a toolbar,
click its grab handle and drag the toolbar to its new position. If you double-click the title bar
of a “floating” toolbar, the toolbar returns to its last docked position.

To show or hide buttons on a docked toolbar, click on the drop-down arrow at the end of the
toolbar, click Add or Remove Buttons, and in the shortcut menu, click the name of the toolbar you
want to change. You can then show or hide each toolbar button by selecting or clearing the
relevant checkbox. You can also do the same on a floating toolbar by clicking on the title bar with
the right mouse button. It is not currently possible to add new buttons to a toolbar, although this
may be a feature of a future version of Base Professional.

The seven toolbars are described in detail below.

File Toolbar

Button Description Keyboard Shortcut


Create a new file. Ctrl+N

Open an existing file. Ctrl+O

Save the current file. Ctrl+S

Save all of the files that are open. Alt+F, N, A

Print the current file. Ctrl+P

Display the current file in the Print Preview dialog box. Alt+F, V

Open the Page Setup dialog box. Alt+F, U

Edit toolbar

Button Description Keyboard Shortcut


Cut the selection and copy it to Ctrl+X
the Windows clipboard. Note that
this applies to the Edit window
only. If you want to cut text
from one of the other panes (for
example, the Properties pane),
select the text and right-click.
Then choose the Cut command on
the shortcut menu.
Copy the selection to the Windows Ctrl+C
clipboard. Note that this applies
to the Edit window only. If you
want to copy text from one of
the other panes (for example, the
Properties pane), select the text
and right-click. Then choose the
Copy command on the shortcut
menu.
Paste the contents of the Windows Ctrl+V
clipboard at the current cursor
position.
Undo the last change you made to Ctrl+Z
the current file.
Redo the last change that you Ctrl+Y
canceled using Undo.
Show the Find pane. Ctrl+F

Show the Replace pane. Ctrl+H

Advanced Edit toolbar

Button Description Keyboard Shortcut


Decrease the indentation of the Ctrl+Shift+D
selected lines by the number of
character spaces specified by the
TabIndent option.
Increase the indentation of the Ctrl+Shift+I
selected lines by the number
of character spaces specified
by the TabIndent option. For
more information, see the topic
IBM SPSS Data Collection Base
Professional options on p. 101.
Insert a comment symbol (‘) at Ctrl+Shift+C
the start of all of the selected
lines, so that they are considered
comments. This is useful when
you want to remove some code,
but may want to reinstate it later.
Uncomment the selected lines by Ctrl+Shift+U
removing the comment symbols.
Open the AutoExpansion Macros Alt+T, M
dialog box.
Open the Base Professional Alt+T, O
Options dialog box.
Bookmarks Open the Bookmarks menu,
which contains the four buttons
shown below. You use bookmarks
to quickly navigate through the
code.
If the line where the cursor Ctrl+Shift+B
is currently positioned has a
bookmark, remove the bookmark,
and if not, insert a bookmark on
the line.
Move to the next bookmark in the Ctrl+Shift+N
current file.
Move back to the previous Ctrl+Shift+P
bookmark in the current file.
Remove all bookmarks from the Ctrl+Shift+A
current file.

Debugging toolbar

Button Description Keyboard Shortcut


Run the current script in F5
debugging mode. If a script has
stopped at a breakpoint, you
can click this button to continue
execution.
When running an interview script
(.mdd) file, you can use this
button to display the next question
without answering the current
question (that is, the Auto Answer
feature will answer the current
question for you).
Run the current interview script F6
(.mdd) file in Auto Answer
mode. If an interview script
has stopped at a breakpoint, you
can click this button to continue
execution in Auto Answer mode.
For more information, see the
topic Running an interview
automatically on p. 61.
Run the current script in normal Ctrl+F5
(non-debugging) mode. Any
breakpoints in the script will be
ignored.
Stop the debugging session. If Shift+F5
you are running an interview
script (.mdd) file, or a data
management script (.dms) file,
you can also use this button at any
time to stop the run. There may
be a delay before the run stops.
This button is hidden when no
scripts are running.
Restart the debugging session Ctrl+Shift+F5
from the beginning of the script.
Step through the script one line of F10
code at a time.
In a debugging session, scroll Alt+D, H
the script in the Edit pane so that
the next statement to be executed
can be seen. The statement is
highlighted in yellow.
Breakpoints Open the Breakpoints menu,
which contains the two buttons
shown below.
If the line where the cursor Ctrl+B or F9
is currently positioned has
a breakpoint, remove the
breakpoint, and if not, insert a
breakpoint on the line. Lines on
which a breakpoint has been set
are highlighted in red.
Remove all breakpoints from the Ctrl+Shift+F9
current script.

Workspace toolbar

Button Description Keyboard Shortcut


Create a new workspace. If Ctrl+W
you have updated the current
workspace, you will be prompted
to save it.
Open an existing workspace. If Alt+F, O, W
you have updated the current
workspace, you will be prompted
to save it.
Add the current file to the Alt+F, D
workspace.
The New File dialog box appears. Alt+F, T
When you confirm the details of
the new file, it will be opened and
added to the workspace.
The Open file dialog box appears. Alt+F, G
When you confirm the name of
the existing file, it will be opened
and added to the workspace.
Save the workspace. Alt+F, W

Interview Options toolbar

Button Description Keyboard Shortcut


Add a routing context to the Alt+T, R
current interview script (.mdd)
file. For more information, see
the topic Adding a routing context
on p. 57.
Remove the current routing Alt+T, M
context from the current interview
script (.mdd) file. For more
information, see the topic Adding
a routing context on p. 57.
Launch MDM Label Manager.
For more information, see the
topic Working with multiple
languages on p. 57.
Launch IBM® SPSS® Data
Collection Metadata Model -
Compare. For more information,
see the topic Comparing interview
scripts on p. 86.
Run HTML Tidy. You can use Alt+T, H
HTML Tidy to make sure that
your interview template files
contain only “well formed”
HTML, otherwise known as
XHTML. For more information,
see the topic Validating an
interview template file on p. 87.
Open the IBM® SPSS® Data Alt+T, Q
Collection Quota Setup Window.
For more information, see the
topic Testing quotas on p. 83.
Open the Local Deployment Alt+T, D
Wizard. For more information,
see the topic Local Deployment
Wizard overview on p. 199.
Activate the current interview Alt+T, A
script (.mdd) file. For more
information, see the topic
Activating an interview on p. 87.
Opens the IBM® SPSS® Data Alt+T, V
Collection Activation Console.

View toolbar

Button Description Keyboard Shortcut


Show or hide the Workspace pane. Alt+0

Show or hide the Types pane. Alt+1

Show or hide the Functions pane. Alt+2

Show or hide the Breakpoints pane. Alt+3
Show or hide the Locals pane. Alt+4
Show or hide the Expressions Alt+5
pane.
Show or hide the Output pane. Alt+6

Show or hide the Metadata pane. Alt+7

Show or hide the Auto Answer pane. Alt+8
Show or hide the Browser pane. Alt+9

Show or hide the Repository pane. Alt+F1

Show the Find pane. Ctrl+F

Show the Replace pane. Ctrl+H

IBM SPSS Data Collection Base Professional keyboard shortcuts

Help

To: Press:
Open the IBM® SPSS® Data Collection Developer F1
Library.

General

To: Press:
Create a new file. Ctrl+N
Create a new workspace. Ctrl+W
Open an existing file. Ctrl+O
Save the current file. Ctrl+S
Close the current file. Ctrl+F4
Print the current file. Ctrl+P

Editing

To: Press:
Cut the selection in the Edit pane and copy it to Ctrl+X
the Windows clipboard. Note that this applies to
the Edit window only. If you want to cut text from
one of the other panes (for example, the Properties
pane), select the text and right-click. Then choose
the Cut command on the shortcut menu.
Copy the selection in the Edit pane to the Windows Ctrl+C
clipboard. Note that this applies to the Edit window
only. If you want to copy text from one of the other
panes (for example, the Properties pane), select
the text and right-click. Then choose the Copy
command on the shortcut menu.
Paste the contents of the Windows clipboard at the Ctrl+V
current cursor position.
Undo the last change you made to the current file. Ctrl+Z
Redo the last change that you canceled using Undo. Ctrl+Y
Show the Find pane to find text in the document, Ctrl+F
or in all open documents.
Find the next instance of the text entered in the Find pane. F3
Show the Replace pane to replace text with other Ctrl+H
text in the document, or in all open documents.
Open the Go To Line dialog box to jump to a line Ctrl+G
number in the document.
Select all text in the document. Ctrl+A
Increase the indent of a line. Ctrl+Shift+I
Decrease the indent of a line. Ctrl+Shift+D
Insert comment marks around a line. Place the cursor on the appropriate line and press
Ctrl+Shift+C
Remove comment marks around a line. Place the cursor on the appropriate line and press
Ctrl+Shift+U
Insert a bookmark or remove an existing bookmark. Place the cursor on the appropriate line and press
Ctrl+Shift+B
Move to the next bookmark in the current file. Ctrl+Shift+N
Move back to the previous bookmark in the current Ctrl+Shift+P
file.
Remove all bookmarks from the current file. Ctrl+Shift+A

Navigation

To: Press:
Switch between the Metadata section and the Routing section in an interview script (.mdd) file. Ctrl+PageUp/PageDown
Switch to the file in the Edit pane that was previously the current file. Ctrl+Tab (You can use Ctrl+Tab to alternate between two files. Also note that pressing Ctrl+Tab and then releasing Tab displays a dialog box in which you can use the Tab and arrow keys to select any open file or pane.)
Switch to the previous file in the Edit pane. Ctrl+Shift+Tab

Viewing

Note: To move to a pane that is not hidden, press the keyboard shortcut twice.
To: Press:
Show or hide the Workspace pane. Alt+0
Show or hide the Types pane. Alt+1
Show or hide the Functions pane. Alt+2
Show or hide the Breakpoints pane. Alt+3
Show or hide the Locals pane. Alt+4
Show or hide the Expressions pane. Alt+5
Show or hide the Output pane. Alt+6
Show or hide the Metadata pane. Alt+7
Show or hide the Auto Answer pane. Alt+8


Show or hide the Browser pane. Alt+9

Macros

To: Press:
Insert a macro. Type the macro name and any arguments followed
by Ctrl+M

ScriptAssist

To: Press:
Open the autosuggest drop-down list. Ctrl+Space
Close the autosuggest drop-down list. Esc
Navigate the autosuggest drop-down list. Down arrow, Up arrow, Page up, Page down, or
type the first letter to jump to items that start with
that letter.
Insert into the script the item that is selected in the autosuggest drop-down list. Space or dot (.) or left parenthesis ( ( ) or Tab or Enter.
Display a list of all the questions defined in the metadata section of an interview script (.mdd) file. Ctrl+Q
Display a list of all the sub questions for a question in an interview script (.mdd) file. Type the question name (or select using Ctrl+Q), type a dot (.) followed by Ctrl+Q
Display the category list for a categorical question in an interview script (.mdd) file. Type the question name (or select using Ctrl+Q), type a dot (.) followed by Ctrl+R

Debugging

To: Press:
Start running a script in debugging mode. F5
Start running an interview script (.mdd) file in Auto F6
Answer mode.
Start running a script without debugging. Ctrl+F5
Stop the script that is running. Shift+F5
Restart a script in debugging mode after its Ctrl+Shift+F5
execution has been stopped.
Step through the script one line of code at a time. F10
Insert or remove a breakpoint on the current line. Ctrl+B or F9
Remove all breakpoints from the current script. Ctrl+Shift+F9
Close the error message box. Esc

IBM SPSS Data Collection Base Professional options


You can use the IBM® SPSS® Data Collection Base Professional Options dialog box to define
options that control the appearance and behavior of Base Professional. You can open the dialog
box by choosing Options from the Tools menu. If you click on an option, a description of the
option appears at the bottom of the dialog box.

How you change an option depends on the type of option. For numeric options, you simply click
on the option and enter the new numeric value. For options that have a limited number of valid
values, click on the drop-down list to select the value you require. For options that require a file or
folder name, you can either type in the name directly or click on the small button at the rightmost
end of the option to select a file or folder from a list.

The options are described in detail below.

Appearance options

Option Description
Docking Appearance For panes, controls the look of the title bar and the
caption buttons.
Tab Appearance For open files in the Edit pane, controls the look
of the tab.
Toolbar Appearance Controls the look of the toolbars.

Application options

Option Description
Number of Recent Files Controls the number of files that are shown in the
list of recent files on the File menu.
Number of Recent Workspaces Controls the number of workspaces that are shown
in the list of recent files on the File menu.

Debugging options

Option Description
Save Before Execution Controls whether files are automatically saved when
you run or debug them.

Interview options

Option Description
Activate Server or URL Specifies the target IBM® SPSS® Data Collection
Interviewer Server cluster to use when you choose
Activate from the Base Professional Tools menu.
The value of this option depends on whether you are
connected to the same local network as the cluster
and have access to the FMRoot shared folder on the
primary DPM server. If you are connected to the
same local network, enter the name of the primary
DPM server. If you are not connected to the same
local network, enter the URL that you use to login
to IBM® SPSS® Data Collection Interviewer
Server Administration on that cluster, for example,
http://primary_dpm_server_name/SPSSMR/Dimen-
sionNet/default.aspx.
If you typically activate to only one cluster, and
are connected to the same local network as that
cluster, you can stop the activate Login dialog from
appearing by entering the name of the primary DPM
server in this option and then setting the “Show
Login Dialog” option below to False. For more
information, see the topic Activating an interview
on p. 87.
Base Folder for Cache The location for interview cache files. Typically,
you should not need to change this option.
Collect Live Data Controls whether answers (case data) collected
during the running of an interview should be marked
as Live or Test. For more information, see the topic
Creating case data on p. 78.
Compare Tool The name and location of the third-party file
compare tool that will be used when you choose
Compare from the Base Professional Tools menu.
For more information, see the topic Comparing
interview scripts on p. 86.
Debug Quotas: Data Source Name The SQL Server instance to be used for storing
quota databases. For more information, see the
topic Testing quotas on p. 83.
Debug Quotas: Database Name The database name you want to use for your quota
database. For more information, see the topic
Testing quotas on p. 83.
Debug Quotas: Display Enabling Message Determines whether Base Professional displays a
warning message when you choose Debug Quotas
from the Base Professional Tools menu. The
message says that creating the quota database might
take a long time. For more information, see the
topic Testing quotas on p. 83.
Default Browser Page The HTML page that will appear in the Browser
pane when no interview is running. If you create
your own page, change this value to the name and
location of your .htm file.
Default Interview Page The HTML page that will appear in the Browser
pane when an interview is running. If you create
your own page, change this value to the name and
location of your .htm file.
Error Messages Location The metadata document (.mdd) file that contains the
text for interview error messages. If you create your
own error message texts, change this value to the
name and location of your .mdd file.
Global Template Folder The default location for interview template files.
HTML Doctype The HTML document type declaration to be used
for interview pages. Typically, you should not need
to change this option.
HTMLOptions: NoAutoJump Disables auto jump for CATI Player questions.
HTMLOptions: NoExpiryMetaTags Excludes the expiry meta tags in the HTML output.
HTMLOptions: UsePredefinedKeycodes Uses predefined keycodes for CATI Player
questions.
HTMLOptions: UseTablesLayout Uses table layout for single row/column categorical
lists.
HTMLOptions: WebTVSupport Provides WebTV support.
HTTP Ports The HTTP ports to use when running interviews.
Typically, you should not need to change this option.
Initially Show Metadata View Determines whether Base Professional opens a
metadata viewer automatically whenever you open
an interview script (.mdd) file.
Routing Selection Mode Determines whether the standard HTML player
or the CATI HTML player will be used to present
the interview. The CATI HTML player allows the
interview to be completed using the keyboard only.
If you select FromRoutingContext, routing contexts
called CATI will automatically use the CATI HTML
player and all other routing contexts will use the
standard HTML player.
Sample Management Record The name of the Sample User XML (.xsu) file,
which contains a Sample Management record
that can be used to test an interview. For more
information, see the topic Accessing sample
management data on p. 81.
Shared Content Folder The location for Shared Content files. This option
supports the <mrSharedRef> tag.
Show Login Dialog Determines whether the Login dialog appears when
you choose Activate from the Base Professional
Tools menu. Note that if you set Show Login Dialog
to False, but enter a URL in the “Activate Server
or URL” option above (or do not enter any value
for “Activate Server or URL”), the Login dialog
will still appear. For more information, see the topic
Activating an interview on p. 87.
Use Built-in Browser Determines whether a running interview will appear
in the Browser pane or in a standalone Internet
Explorer browser. For more information, see the
topic Testing an interview on p. 60.
Use Hints During Auto Answer Controls whether the Use hints from .mdd checkbox
in the Auto Answer dialog box is selected by
default. Note that using hints can degrade the
performance of the interview script. For more
information, see the topic Running an interview
automatically on p. 61.
Use In-Memory Cache Controls whether caching will take place in memory
or on disk. Typically, you should not need to change
this option.
View Metadata and Routing Controls how Base Professional displays the
metadata and routing sections of an interview script
(.mdd) file. The two sections can be displayed on
separate tabs in the Edit pane, or can appear as two
halves (top and bottom, or left and right) of the Edit
pane. For more information, see the topic Viewing
and navigating an interview script on p. 58.

ScriptAssist options

Option Description
Show Auto Signature: This option controls whether Base Professional displays the syntax of a function or method when you type the opening parenthesis.
Show Auto Suggest: This option controls whether Base Professional displays a list of the valid global functions, properties, and methods for a variable when you type a dot after the variable’s name.
Show Enums: Controls whether the ScriptAssist suggestion list will include enumerators from IBM® SPSS® Data Collection Type libraries.
Show Function List: Controls whether the ScriptAssist suggestion list will include functions from the IBM SPSS Data Collection Function Library.
Show Hidden Members: Controls whether the ScriptAssist suggestion list will include hidden items.
Show ToolTips: When you move your mouse pointer in the Edit window over a function or object property or method, Base Professional can display a ToolTip showing the correct syntax. You can use this option to turn this feature on and off.

Text Display options

These options control different aspects of the Edit pane.


Option Description
Convert Tabs to Spaces: Controls whether tabs are automatically converted to spaces. The number of spaces is determined by the setting of the Tab Indent option.
Default Font: The default font that is used in the Edit pane. You can change this by clicking the ... button on the right side. This opens the standard Windows Font dialog box, in which you can select the font name, size, and other options. Alternatively, you can expand the DefaultFont options by clicking the + on the left side. You can then change the suboptions in the normal way. Note: Italics are not recommended for the Base Professional default font. Italics may be difficult to read in the Edit pane.
Enable Folding: This option enables you to expand and collapse sections of indented code (such as loops and subroutines).
Show End Of Line Markers: Controls whether end of line markers are shown.
Show Horizontal Ruler: Controls whether a horizontal ruler is shown at the top of the Edit pane.
Show Icon Bar: This option displays a bar on the left side of the Edit pane on which icons indicate the presence of bookmarks and breakpoints.
Show Invalid Lines: This option indicates the presence of empty lines at the end of a script.
Show Line Numbers: Controls whether line numbers are shown.
Show Matching Braces: Controls whether a pair of matching left and right braces ( “[” and “]” ) are highlighted as you type in the second brace.
Show Spaces: Controls whether space characters are shown as a blue dot.
Show Tabs: Controls whether tab characters are shown.
Show Vertical Ruler: Controls whether a vertical ruler is shown on the left side of the Edit pane.
Tab Indent: Controls the number of spaces between tab stops.

Local Deployment Wizard overview


The Local Deployment Wizard allows you to deploy a survey to one or more IBM® SPSS®
Data Collection Interviewer installations without requiring an IBM® SPSS® Data Collection
Interviewer Server. The wizard provides a simpler alternative to the Activate dialog that is
commonly used to deploy surveys to Interviewer.

The wizard contains the following steps:


„ Usage options – allows you to select how the project will be used.
„ Validation options – provides data entry validation options.
„ Routing options - data entry – provides data entry routing options.
„ Routing options - live interviewing – provides live interviewing routing options.
„ Display options – allows you to select which fields are visible in the case data, and select
which field is used to uniquely identify each case.
„ Deployment options – provides options for deploying the survey to a deployment package or
directly to the local Interviewer installation.
„ Expiry date and time options – provides options for defining the project expiration date and
time.
„ Summary options – provides a summary of all selected options prior to starting project
deployment.

Note: If a project was previously activated, the wizard provides the previous activation options. If
a survey was not previously activated, the wizard provides default values.
Usage options

The usage options step allows you to select how the project will be used. Options include:
„ Data entry (default setting) – select this option when the project will be used for entering
response data from paper surveys.
„ Live interviewing – select this option when the project will be used to conduct face-to-face
interviewing.
„ Include subdirectories – select this option if you have subdirectories that include additional
files, such as templates and images.

E After selecting the appropriate usage option, click Next to continue to Validation options (when
Data entry is selected) or Routing options - live interviewing (when Live interviewing is selected).

Validation options

The validation options step allows you to select the data entry validation method. This step is only
available when you select Data entry in the Usage options step.

Options include:
„ Full validation – when selected, all responses require validation.
„ Partial validation – when selected, only a subset of responses require validation. Partial
validation is not available for surveys that contain only one routing.
„ Require two-user validation – when selected, operators are not allowed to validate their own
entries. A second operator is required to validate initial entries.

E After selecting the appropriate validation options, click Next to continue to Routing options -
data entry.

Routing options - data entry

The data entry routing options step allows you to specify the routing used for data entry. This step is
only available when you select Data entry in the Usage options step.

E Select the appropriate routing context for each data entry option:
„ Initial data entry – the drop-down menu provides all available routings.
„ Full validation – the drop-down menu provides all available routings. This option is only
available when you select Full validation in the Validation options step.
„ Partial validation – the drop-down menu provides all available routings. This option is only
available when you select Partial validation in the Validation options step.
Note: Partial validation is not available for surveys that contain only one routing.
Notes
„ You will receive an error when the same routing is selected for Partial validation and Initial
data entry or Full validation.
„ The Initial data entry and Full validation (if applicable) routing options are automatically selected
when the survey contains only one routing context.

E After selecting the appropriate routing options, click Next to continue to Display options.
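The notes above describe a simple consistency rule for the routing selections. As an informal illustration only (not product code; the function name and error text are assumptions), the rule could be expressed in Python as:

```python
def check_routing_selection(initial, full=None, partial=None):
    """Raise an error when the Partial validation routing duplicates the
    Initial data entry or Full validation routing, mirroring the rule above."""
    if partial is not None and partial in (initial, full):
        raise ValueError(
            "The same routing cannot be selected for Partial validation "
            "and Initial data entry or Full validation.")
    return True

# A distinct routing for partial validation is accepted.
print(check_routing_selection("EntryRouting", partial="VerifyRouting"))
```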

Routing options - live interviewing

The live interviewing routing options step allows you to specify the routing used for live interviewing.
This step is only available when you select Live interviewing in the Usage options step.

E Select the appropriate routing options for each project task:


„ Routing – the drop-down menu provides all available routings.
„ Renderer – the drop-down menu provides all available renderers. The selected renderer
controls which display renderer is used for live interviewing. The default value is Web.

Notes
„ The Routing option is automatically selected when the survey has only one routing context.

E After selecting the appropriate routing options, click Next to continue to Display options.

Display options

The display options step allows you to select which fields are visible in the case data, and select
which field is used to uniquely identify each case.
„ Identify unique surveys with this variable – select an appropriate variable that will be used to
uniquely identify each survey. The drop-down menu provides all user variables that can be
used as unique IDs. Boolean and categorical variables are excluded from this list.
„ Display fields – select the appropriate display fields. Selected fields are included in the IBM®
SPSS® Data Collection Interviewer Case List. The fields are displayed in the order in which
they appear in the Display fields list. Use Move Up and Move Down to reorder the list.

Notes
„ Respondent.ID and DataCollection.Status are selected by default.
„ DataCollection.Status is a required field and cannot be deselected.

E After selecting the appropriate display options, click Next to continue to Deployment options.

Deployment options

The deployment options step allows you to select whether to deploy the survey to a deployment
package or directly to the local IBM® SPSS® Data Collection Interviewer installation.
Options include:
„ Create a deployment package for this project (default setting) – when selected, the project is
saved as a deployment package, allowing it to be loaded into other Interviewer installations.
Enter a save location in the provided field, or click ... to browse for an appropriate save
location. The deployment package is saved to the location you specify.
„ Deploy this project to local Interviewer – when selected, the project is deployed to the local
Interviewer installation. This option requires an Interviewer installation on the local machine.
„ Data file type – allows you to select the deployment package save file format. The drop-down
menu provides the following save file options:
– Data Collection Data File (.ddf)
– Statistics File (.sav)

E After selecting the appropriate deployment options, click Next to continue to Expiry date and
time options.

Expiry date and time options

The expiry date and time step allows you to define the project’s expiration date and time (UTC
time). Defining a project expiration date and time allows interviewers to easily identify expired
projects.

Options include:
„ Date: The project expiration date. You can manually enter a date, in the format mm/dd/yyyy, or
you can click the down arrow to display a calendar and select a date.
„ Time: The project expiration time. This indicates the exact time of day, for the selected date,
that the project will expire. Enter an appropriate time in the 24-hour format hh:mm (for
example 17:00 for 5:00 PM).

E After selecting the appropriate deployment options, click Next to continue to Summary options.
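The date and time formats described above can be combined into a single UTC timestamp. As a rough sketch (the function name is an illustrative assumption; the product performs this internally), in Python:

```python
from datetime import datetime, timezone

def parse_expiry(date_str, time_str):
    """Combine the wizard's Date (mm/dd/yyyy) and Time (24-hour hh:mm)
    fields into one timezone-aware UTC expiration datetime."""
    expiry = datetime.strptime(date_str + " " + time_str, "%m/%d/%Y %H:%M")
    return expiry.replace(tzinfo=timezone.utc)

# 17:00 on the selected date corresponds to 5:00 PM UTC.
print(parse_expiry("12/31/2024", "17:00").isoformat())
```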

Summary options

The Summary Options step provides a summary of the options selected in each wizard step.

E After reviewing the selected options, click Finish to exit the Deployment Wizard.
„ If you selected Create a deployment package for this project in the Deployment options step,
the deployment package is saved to the specified location.
„ If you selected Deploy this project to local Interviewer, the project is deployed to the local IBM®
SPSS® Data Collection Interviewer installation.

Note: If you want to change any of the selected options, click Previous until the appropriate wizard
step displays. After changing the appropriate option(s), click Next until the Summary Options step
displays. Review the selected options, and click Finish.
Activation Settings

Using the File Management component

Most users who activate projects using either an IBM® SPSS® Data Collection Interviewer
Server Administration activity such as Launch or a desktop program such as IBM® SPSS®
Data Collection Base Professional have access to the shared FMRoot folder. Users whose
computers are not connected to the network cannot access FMRoot and therefore need to use the
File Management component for activation instead. When you install Base Professional, the
installation procedure asks whether the user has access to FMRoot and configures the user’s
machine accordingly. You can change this manually at any time simply by changing the value of
a registry key.
The registry key is called UseFileManagerWebService and it is located in
HKEY_LOCAL_MACHINE\SOFTWARE\SPSS\COMMON\FileManager. Its default value is
0 meaning that activation will use FMRoot. To use the File Management component instead of
FMRoot, change the value of this key to 1.
Users who do not have access to FMRoot and whose files are copied using the File Management
component may notice that activation runs slightly slower than it does for users with access to FMRoot.
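The effect of the UseFileManagerWebService value can be summarized as a two-way switch. The following Python fragment is purely illustrative (the function is hypothetical, and the registry value is passed in directly rather than read from HKEY_LOCAL_MACHINE\SOFTWARE\SPSS\COMMON\FileManager):

```python
def activation_transport(use_file_manager_web_service=0):
    """Return the file-copying mechanism activation would use.

    0 (the default) -> the shared FMRoot folder.
    1               -> the File Management component, for machines that
                       cannot access FMRoot (activation runs slightly slower).
    """
    if use_file_manager_web_service == 1:
        return "File Management component"
    return "FMRoot"

print(activation_transport())   # default registry value of 0
print(activation_transport(1))  # machine without access to FMRoot
```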

Option to select .sam sample management scripts

The activation procedure does not normally allow users to select sample management scripts
written in VBScript (.sam files). If your company has an overriding requirement to use .sam sample
management scripts with IBM® SPSS® Data Collection projects, you may reinstate the option to
select .sam files by setting the ShowVBScriptProvider key to 1 in the registry. This key is of type
DWORD and is located in HKEY_LOCAL_MACHINE\Software\SPSS\mrInterview\3\Activate.
If the key is not defined or has a value of zero, .sam files cannot be selected.

Specifying which files are copied during local deployment

The IVFilesToBeCopied registry entry controls which files and file extensions are copied during
local deployment. By default, IVFilesToBeCopied includes the following files and extensions that
are automatically copied during local deployment:
„ .mdd
„ .sif
„ .htm
„ .html
„ .xml
„ .mqd
„ .gif
„ .jpg
„ .jpeg
„ .png
„ .mov
„ .bmp
„ .avi
„ catifields_*.mdd
„ .css
„ .js
„ catiCallOutcomes_*.mdd
„ projectinfo.xml

You can define additional files and/or file extensions by updating the
IVFilesToBeCopied user registry entry. The IVFilesToBeCopied registry entry is
located at: HKEY_CURRENT_USER\Software\SPSS\mrInterview\3\Activate.

The IVFilesToBeCopied rules are as follows:

E When the localdeployconfig.xml file is available, the file’s IVFilesToBeCopied value is used.

E When the localdeployconfig.xml file is not available, the IVFilesToBeCopied value is retrieved from the user registry (HKEY_CURRENT_USER\Software\SPSS\mrInterview\3\Activate\IVFilesToBeCopied) and written to the local config.xml file.

E When the IVFilesToBeCopied user registry key is not found, IVFilesToBeCopied is read from the local machine key (HKEY_LOCAL_MACHINE\Software\SPSS\mrInterview\3\Activate\IVFilesToBeCopied), copied to the current user registry key (HKEY_CURRENT_USER\Software\SPSS\mrInterview\3\Activate\IVFilesToBeCopied), and then written to the local config.xml file.
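The three rules above form a precedence chain. This Python sketch models that chain with plain dictionaries standing in for the config file and the two registry hives (a deliberate simplification for illustration; a real implementation would read localdeployconfig.xml and the Windows registry):

```python
def resolve_files_to_be_copied(local_deploy_config, hkcu, hklm):
    """Return the effective IVFilesToBeCopied value, following the
    documented precedence: localdeployconfig.xml first, then the
    current-user registry key, then the local-machine registry key."""
    key = "IVFilesToBeCopied"
    if local_deploy_config and key in local_deploy_config:
        return local_deploy_config[key]
    if key in hkcu:
        return hkcu[key]           # value is also written to config.xml
    value = hklm.get(key, "")
    hkcu[key] = value              # copied to the current-user key
    return value

user_hive = {}
machine_hive = {"IVFilesToBeCopied": "*.mdd;*.htm"}
print(resolve_files_to_be_copied(None, user_hive, machine_hive))  # falls back to HKLM
print(user_hive)  # the value has been copied to the current-user hive
```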

Note: Registry key changes will not take effect until you manually remove any existing references
to IVFilesToBeCopied in the local config.xml file. For example:
<?xml version="1.0" encoding="utf-8" ?>
<properties>
<IVFilesToBeCopied> <![CDATA[mdd;*.htm;*.html;*.xml;mqd;*.gif;*.jpg;*.jpeg;*.png;*.mov;*.bmp;*.avi;catifields_*.mdd;*.css;*.js;catiCallOutcomes_*.mdd;projectinfo.xml]]></IVFilesToBeCopied>
</properties>

The default local activation directory is C:\Documents and Settings\<your Windows user
name>\Application Data\IBM\SPSS\DataCollection\Activate.
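The entries in the copy list mix bare extensions, wildcard patterns, and exact file names. As a rough illustration of how such a list can be matched against file names (assuming shell-style wildcard semantics, which may differ from the product’s actual matching), in Python:

```python
import fnmatch

def should_copy(filename, entries):
    """Decide whether a file name matches an IVFilesToBeCopied-style list."""
    name = filename.lower()
    for entry in entries:
        entry = entry.lower()
        if entry.startswith("."):           # bare extension, e.g. ".mdd"
            if name.endswith(entry):
                return True
        elif fnmatch.fnmatch(name, entry):  # wildcard pattern or exact name
            return True
    return False

entries = [".mdd", ".htm", "catifields_*.mdd", "projectinfo.xml"]
print(should_copy("survey.mdd", entries))        # True
print(should_copy("catifields_en.mdd", entries)) # True
print(should_copy("notes.txt", entries))         # False
```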

Notes for IBM SPSS Quantum Users


This section has been designed to help IBM® SPSS® Quantum™ users get started with IBM®
SPSS® Data Collection Base Professional. Quantum is a sophisticated tabulation package that
enables you to manage and tabulate your survey data. For example, routine tasks that you can
perform in Quantum include:
„ Checking and validating survey data
„ Cleaning and manipulating survey data
„ Creating sophisticated schemes for weighting survey data


„ Producing market research tables
„ Performing statistical calculations
„ Exporting data to a variety of formats for further analysis
„ Generating IBM® SPSS® Quanvert™ databases

With the exception of generating a Quanvert database, you can also perform all of these tasks
using Base Professional. Although Quantum has been developed over many years and has a
number of detailed features that are not yet available in Base Professional, Base Professional
already has a number of advantages. For example:
„ Unlike Quantum, you are not restricted to any one specific data format—Base Professional
can work with data in any format for which a suitable Data Source Component (DSC) is
available. In addition, Base Professional can import data directly from any data format for
which you have a suitable OLE DB provider (this means that you can easily import data from
Access and Excel if you have Microsoft Office installed).
„ Similarly, exporting data is easy and flexible. You can export data to any format for which a
suitable Data Source Component (DSC) is available or for which you have a suitable OLE
DB provider (this means that you can easily export data to Access and Excel provided you
have Microsoft Office installed).
„ You can set up rule-based cleaning routines based on the variable definitions.
„ Base Professional makes working with multiple languages easy and, unlike Quantum,
supports languages that use double-byte character sets (DBCS), such as many of the East
Asian languages, including Japanese.
„ Publishing tables is easy and flexible. The Base Professional Tables Option comes with a
number of components that make it easy to publish tables in HTML and Excel. In addition,
further export components are planned for future releases and it is possible to create your own
export component.
„ When you run or debug an interview, or run the interview using Auto Answer, you can choose
to write the answers, also known as the Case Data, to an output format supported by the
IBM® SPSS® Data Collection Data Model. At present, the
formats you can write to are Data Collection Data File, IBM® SPSS® Statistics SAV file,
Data Collection RDB database, and Data Collection XML file. For more information, see
Creating case data.

Learning Base Professional takes time and effort, just like learning Quantum did. However, Base
Professional has lots of features to help you. For example, it has a number of templates and
macros that simplify setting up routine jobs. In addition, the IBM® SPSS® Data Collection
Developer Library comes with numerous samples, most of which are designed to run “right
out of the box” against the sample data. You can use these samples as a basis for your own
jobs. In addition, because Base Professional uses industry-standard technology, the skills you
develop using Base Professional will be easier to apply in other scripting and technology
environments than some of your Quantum skills.
The Big Picture


You can perform many of the same tasks in IBM® SPSS® Data Collection Base Professional
that you can in IBM® SPSS® Quantum™. However, the way you do them is different in Base
Professional. This topic is designed to help you understand in general terms the Base Professional
approach.

First let’s categorize the Quantum tasks, by looking at the contents of the four volumes of the
Quantum 5.7 User’s Guide:

1. Data Editing. This covers listing, validating, checking, and cleaning data, and setting up new
variables.

2. Basic Tables. This covers the basics of creating tables.

3. Advanced Tables. This covers more advanced table features, such as setting up weighting for
your tables, dealing with hierarchical data, adding statistical tests, and customizing the output.

4. Administrative Functions. This covers converting and transferring data, as well as setting up an
IBM® SPSS® Quanvert™ database.

The Base Professional documentation is divided into three main sections. The first section is an
introduction to using Base Professional and the other two sections reflect the two main areas
of functionality:

Using Base Professional. This provides a general introduction to Base Professional and working in
its integrated development environment (IDE). For more information, see the topic Using IBM
SPSS Data Collection Base Professional on p. 11.

Data Management Scripting. This covers functions that generally correspond to those covered in
Volumes 1 and 4 of the Quantum User’s Guide, but with the addition of setting up weighting. For
more information, see the topic Data Management Scripting on p. 204.

Table Scripting. This covers functions that generally correspond to those covered in Volumes 2 and
3 of the Quantum User’s Guide, with the exception of setting up weighting. For more information,
see the topic Table Scripting on p. 1140.

Activating questionnaires
To release a questionnaire online, you need to activate it. The Activation process uploads the
questionnaire file to an IBM® SPSS® Data Collection Interviewer Server and creates an IBM®
SPSS® Data Collection Interviewer Server Administration project for it. It also creates a set of
web pages for the questionnaire and provides a URL link to the web site containing the pages.
Respondents can then access the web site and take the questionnaire by following the link.

Once you activate an interview, either in test mode or in live mode, any further changes that you
make to the questionnaire file, including adding or deleting questions and changing question
types, are saved in a separate version of the file. Interviewer Server updates the file and saves
the changes with a new version number; the content of the file before the changes is always
retained in an earlier version. It is therefore recommended that you test your interview using the
Interview Preview option before activating the questionnaire, so that additions or deletions that
you make in the course of designing your questionnaire are not permanently saved in the file.

You can activate the questionnaire either as a test or live interview. A test interview works in the
same way as a live interview, except that any data collected is flagged as test data.

Activation user role implications


„ You cannot activate questionnaires (the Activate option is disabled) unless you are assigned to
the Can activate in test mode activity role feature.
„ You cannot activate questionnaires in Go live mode (the Go live option is disabled) unless you
are assigned to the Can activate in active mode activity role feature.
„ You cannot access advanced activation features (the More option is disabled) unless you are
assigned to the Can view advanced activation settings activity role feature.
„ You cannot modify advanced activation settings (the More option is enabled, but each setting
is read-only and the Load local file... option is disabled) unless you are assigned to the Can edit
advanced activation settings activity role feature.

Refer to the Assigning users or roles to activity features topic in the IBM SPSS Data Collection
Interviewer Server Administration User’s Guide for more information on user roles.

Activation permissions
„ Project does not exist in the Distributed Property Management (DPM) server: If the login user
is assigned the canCreateProject activity feature, the user can activate the project (otherwise
the user cannot activate the project).
„ Project exists in the Distributed Property Management (DPM) server: If the login user is the
owner of the project, the user can activate the project. Otherwise, the following rules apply:
– If the project is not assigned to the login user, that user cannot activate the project.
– If the project is assigned to the login user, a check is made to determine if the project is
currently locked by the login user. If yes, the user can activate the project; otherwise a check is
made to determine if the user is assigned the canUnlockProject activity feature. If assigned
the canUnlockProject activity feature, the user can activate the project (otherwise the user
cannot activate the project).
Refer to the Assigning users or roles to activity features topic in the IBM SPSS Data Collection
Interviewer Server Administration User’s Guide for more information on user roles.
The Activate button is disabled when the project cannot be activated. Activation information is
located in the IBM® SPSS® Data Collection Desktop log file.
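The permission rules above amount to a small decision procedure. Restated as a Python sketch for clarity (the function and argument names are illustrative and do not correspond to an actual API):

```python
def can_activate(project_exists, is_owner=False, assigned=False,
                 locked_by_user=False, can_create=False, can_unlock=False):
    """Apply the DPM activation rules described above."""
    if not project_exists:
        return can_create      # requires the canCreateProject feature
    if is_owner:
        return True            # the project owner can always activate
    if not assigned:
        return False           # unassigned users cannot activate
    # Assigned users must hold the lock, or be able to unlock it.
    return locked_by_user or can_unlock

print(can_activate(False, can_create=True))  # new project, canCreateProject granted
print(can_activate(True, assigned=True))     # assigned, but lock held elsewhere
```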

Activating a questionnaire

E If you have not done so, save the questionnaire file.

E Select the routing that you want to activate.


E From the menu, choose


Tools > Activate

or press Alt+T, A.

E In the Data Collection Login dialog box, enter (or select from the drop-down list) the following:
„ Destination Server or Interviewer Server URL: Enter the name or URL of the
server where Interviewer Server Administration is located (for example,
http://server_name/SPSSMR/DimensionNet/default.aspx). Use this to connect to a server
using an internet or intranet link.
„ User name: Enter a valid Windows or Interviewer Server user name.
„ Password: Enter a valid password for the defined user name.
„ Authentication: Select Interviewer Server Authentication or Windows Authentication (if
Interviewer Server Administration is configured for Active Directory).
„ Login using my Windows Account: When selected, the User name, Password, and Authentication
fields are disabled and your current Windows login credentials are used.

E Click Login. If the login credentials are valid, you are presented with the Activate - Current Project
dialog.
Figure 1-4
Activate dialog

E You can activate projects using either Basic or Advanced mode.


„ Basic mode: Provides options for activating a project in either Test or Live mode, and allows
you to select an activation template from which to pull activation settings.
„ Advanced mode: Provides options for configuring various activation and project settings. For
more information, see the topic Activate Current Project - Project settings on p. 121.

E Select Test mode, Go live, or Inactive from the Basic settings section.

Note: Inactive only displays when the Status after activation option is set to Inactive. For more
information, see the topic Activate Current Project - Project settings on p. 121.
E Select Apply activation settings from activation template if you want to use activation settings from an
existing template. When this option is selected, the Activation Template displays on the right-hand side
of the dialog, allowing you to select an existing activation template. Select an appropriate template and
click Accept to use the selected template’s settings (click Preview to view the selected template’s
settings). For more information, see the topic Activation templates on p. 117.

Note: Refer to Activate Current Project - Project settings if you want to configure additional
activation settings.

E Click Activate to activate the questionnaire to the Interviewer Server. The Activate dialog closes
and a message displays indicating that the activation request was sent to the server.

You can monitor the activation status via the Activation Console. The console provides options
for viewing pending and completed activations, and creating activation history filters. For more
information, see the topic IBM SPSS Data Collection Activation Console on p. 195.

Notes

E In order for the activation process to function properly, the server software version must be the
same (or higher) as the client software version.

E When activating a new project from a desktop application (IBM® SPSS® Data Collection Author,
IBM® SPSS® Data Collection Base Professional), a warning message will display during
activation when there is an existing ActivateDocument.xml file with unmatched information in
the project’s local folder:
The latest activation settings in the project folder are for a different project. Do you want to update the activation settings based on the current project information?
[Yes] [No]

If you select Yes, all unmatched information will be replaced with the current project information.
If you select No, the unmatched information will be preserved.

E When you activate a questionnaire in Author or Base Professional, the .mdd file is copied into the
FileName_files folder beneath the questionnaire file. The activation process uploads all the files
from this folder into the Interviewer Server Administration project.

E If you attempt to reactivate a project before receiving the successful activation message from the
IBM® SPSS® Data Collection Activation Console, you may not retrieve the most up-to-date
information from the server.
The .NET Framework’s default encoding is based on the ANSI codepage defined in the registry. As
a result, you may encounter errors when activating questionnaires that include characters such
as umlauts (for example, when the project name contains the character Ä). You can resolve this
issue by updating the server’s ANSI codepage:

1. Access the registry on the server (Start > Run > regedit).

2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Nls\CodePage\ACP

3. For servers running a German operating system, enter a value of 850; for Chinese, enter a value of
936; for Japanese, enter a value of 932.
Refer to Encoding Class (http://msdn.microsoft.com/en-us/library/system.text.encoding.aspx) on
the Microsoft MSDN site for more information.
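To see why the ANSI codepage matters, the following check uses Python’s equivalents of those codepages (cp850, cp932, cp936); this is an illustration of the underlying encoding issue, not of the product’s own .NET encoding logic:

```python
def fits_codepage(text, codepage):
    """Return True when every character in text is representable in the
    given ANSI codepage (e.g. cp850 for German, cp936 for Chinese,
    cp932 for Japanese)."""
    try:
        text.encode(codepage)
        return True
    except UnicodeEncodeError:
        return False

# A project name containing Ä survives the German codepage but not the
# Japanese one, which is why activation can fail until ACP is corrected.
print(fits_codepage("ProjektÄ", "cp850"))  # True
print(fits_codepage("ProjektÄ", "cp932"))  # False
```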

Activation templates
You can set up activation options for use in specific circumstances, and save the options as
activation templates that you can later select when activating other projects.

Creating an activation template

E If you have not done so, save the questionnaire file.

E Select the routing that you want to activate.

E From the menu, choose


Tools > Activate

or press Alt+T, A.

E In the IBM® SPSS® Data Collection Login dialog box, enter (or select from the drop-down list)
the following:
„ Destination Server or IBM® SPSS® Data Collection Interviewer Server URL: Enter the name or
URL of the server where IBM® SPSS® Data Collection Interviewer Server Administration is
located (for example, http://server_name/SPSSMR/DimensionNet/default.aspx). Use this to
connect to a server using an internet or intranet link.
„ User name: Enter a valid Windows or Interviewer Server user name.
„ Password: Enter a valid password for the defined user name.
„ Authentication: Select Interviewer Server Authentication or Windows Authentication (if
Interviewer Server Administration is configured for Active Directory).
„ Login using my Windows Account: When selected, the User name, Password, and Authentication
fields are disabled and your current Windows login credentials are used.

E Click Login. If the login credentials are valid, you are presented with the Activate - Current Project
dialog.

E By default, the Activate - Current Project dialog displays in the Basic settings mode. In order
to create an activation template, you will need to switch the dialog to Advanced mode. This is
accomplished by clicking More >>.
Figure 1-5
Activate - Current Project dialog

Advanced mode provides options for configuring various activation and project settings. For more
information, see the topic Activate Current Project - Project settings on p. 121.

E After configuring the appropriate activation settings, you can choose to save the activation
template to the local file system or to the Interviewer Server.
Note: You can use the Page Up and Page Down keys to navigate through the Activate advanced
mode options on the left.

Saving activation templates to the local file system

E From the Activate - Current Project dialog, select the following:


File > Save as File...

The Save As dialog displays, allowing you to save the activation template to the local file system.
Select a file system location and provide an appropriate template file name.
E Click Save to save the activation template to the local file system.

Saving activation templates to the IBM SPSS Data Collection Interviewer Server

E From the Activate - Current Project dialog, select the following:


File > Save as Template...

The Save Template dialog displays, allowing you to save the activation template to the Interviewer
Server.
Figure 1-6
Save Template dialog

Activation templates: Lists existing activation templates on the Interviewer Server.

Template name: Enter an appropriate template name in the field, ensuring the name is not the same
as an existing activation template name (unless you want to overwrite an existing template).
E Click Save to save the activation template to the Interviewer Server.

Loading activation templates from the local file system

E From the Activate - Current Project dialog, select the following:


File > Load Local File...

The Open dialog displays, allowing you to select the appropriate activation template from the
local file system.

E Navigate to the appropriate file system directory, select the desired activation template, and click
Open to load the template’s settings into the Activate - Current Project dialog.

The loaded activation settings will be used for each project activation during the active session, or
until you load settings from another activation template.

Notes

The following properties will not overwrite the current property values after an activation template
is loaded from the server or local file system:
„ Project ID
„ Project name
„ Project database
„ Project description
„ Use reporting database
„ Reporting database location

Login information (user name, ticket, server name, web service URL) and queued activation
information, which is stored in the activation XML, will not overwrite current settings after a
document is loaded from the server or local file system.

Activate Current Project - Project settings

The Project settings dialog is where you enter general information that details where and how the
project is to be activated:
Figure 1-7
Project details dialog

The Project settings dialog provides details about the current project. You should not need to
change these fields. You can choose a different project here, but it is easier to close the dialog
and select a different project, as this will automatically update the other fields with the correct
project information.

Project details

ID: The ID is automatically generated from the name of your .mdd file. When the project is
activated in IBM® SPSS® Data Collection Interviewer Server Administration, this is used as the
unique identifier for the project. The drop-down list displays the IDs of any projects that have
previously been activated. To change the settings for a project that does not relate to the file you
currently have open, select the project name from the drop-down list.

Name: The name is automatically generated from the name of your .mdd file. When the project
is activated in Interviewer Server Administration, this is used as the project name. You can
change the name if required.

Description: Enter a description for the project.

Status after activation: The status that you want the project to have once it has been activated.
Choose one of the following:
„ Active. The project is available for live interviewing.
„ Inactive. The project cannot be used for test or live interviewing.
„ Test. The project is available for testing, and any data collected will be flagged as test data.

Project database: The name of the case data database. The default is to store each project’s data
in a separate database with the same name as the project. However, if your site is configured to
allow it, several projects can write case data to the same database.

You may also have authority to create a new database for the project (with a name of your
choice). If so, the drop-down list will contain the Create New Custom Database option. Select
this option and then enter a name for this database in the Custom box that appears next to the
Project Database field.

Activation notes: Background information about the project that you want to save in the project
database for reference by other users. You can leave this box blank.

Source files: The location of the source files for this project (that is, the name of the folder
containing the .mdd file). Select Include sub-folders if the project’s folder contains localization
sub-folders that must be copied to the Shared and Master project folders along with the main
project files.

Interview server project folder: Identifies the Interviewer Server Administration folder into which
the project will be activated. You can select one of the following options:
„ Select an existing Interviewer Server Administration folder (if any currently exist).

„ Select the default <Top Level> setting. When selected, this option has no effect on where the
project will be activated.
„ Select the <Create new folder> option and then enter an appropriate folder name. Upon
activation, the specified folder name will be created on the interview server, and the project
will be activated into this folder.
Latest Version Label: The version label to assign to the version of the questionnaire that will be
created during activation.

Project Expiry (UTC Time). Provides options for specifying a project expiration time.
„ Date: The project expiration date. You can manually enter a date, in the format mm/dd/yyyy, or
you can click the down arrow to display a calendar and select a date.
„ Time: The project expiration time. This indicates the exact time of day, for the selected date,
that the project will expire. Enter an appropriate time in the 24-hour format hh:mm (for
example 17:00 for 5:00 PM).
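The mm/dd/yyyy date and 24-hour hh:mm time described above together define a single UTC expiry timestamp. The following is an illustrative sketch (not part of the product) of how the two values combine, using Python's standard library:

```python
from datetime import datetime, timezone

def parse_expiry(date_text, time_text):
    """Combine the dialog's mm/dd/yyyy date and 24-hour hh:mm time
    into one UTC timestamp (the dialog treats expiry as UTC time)."""
    d = datetime.strptime(date_text, "%m/%d/%Y")
    t = datetime.strptime(time_text, "%H:%M").time()
    return datetime.combine(d.date(), t, tzinfo=timezone.utc)

expiry = parse_expiry("12/31/2099", "17:00")
print(expiry.isoformat())  # 2099-12-31T17:00:00+00:00
```

Entering a time such as 17:00 therefore means the project expires at exactly 5:00 PM UTC on the selected date.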

Options

The following options enable and disable specific project settings. When an option is enabled, any
user-defined settings are applied to the project during activation; when disabled, they are not.
Use disconnected interviewing: When selected, the Disconnected node displays under the Activate
tree. The Disconnected node provides options for deploying a survey to one or more IBM®
SPSS® Data Collection Interviewer installations without requiring an IBM® SPSS® Data
Collection Interviewer Server. For more information, see the topic Activate Current Project
- Disconnected settings on p. 126.
Use participants to specify who to interview: When selected, the Participants node displays under
the Activate tree. The Participants node provides options for specifying the project’s sample
management parameters. This option is only available if you have permission to work with
participants and the project has already been activated with sample management data. For more
information, see the topic Activate Current Project - Participants settings on p. 131.
Use telephone interviewing: When selected, the Telephone node displays under the Activate
tree. The Telephone node provides options specific to projects used for telephone interviewing
(autodialer settings, calling rules, and so on). This option is only available if you have permission
to work with phone surveys and the project has already been activated with CATI sample data.
For more information, see the topic Activate Current Project - Telephone settings on p. 152.
Use quota control: When selected, the Quota node displays under the Activate tree. The Quota node
provides options that allow you to create a new, or use an existing, quota database for the project.
This option is only available if you have permission to work with quota and the server has a quota
database. For more information, see the topic Activate Current Project - Quota settings on p. 186.

Project - Interview

The Interview settings allow you to specify the default questionnaire language, define which version
of the questionnaire should be used for interviews run in test and active mode, and decide whether
to restart stopped interviews using the latest version of the questionnaire. This is particularly useful
when you need to make changes to a questionnaire that is already being used for live interviewing.

Figure 1-8
Interview settings

Default Questionnaire Language. The default (base) language for the questionnaire.

With multilingual questionnaires, the language in which you write the questionnaire automatically
becomes the default language for that script. If the questionnaire does not specify the language
in which it is to run, and the information cannot be obtained from the participant record, the
interview will run in this language. Once you start translating a script, other languages are added
to the questionnaire definition file and you may want to select one of those languages as the
default language for the questionnaire.

The language list contains only languages that are present in the questionnaire definition file. If
the computer’s default language does not appear in the questionnaire definition file, the language
list defaults to US English.

Test version. Choose Latest Version to use the latest activated version of the questionnaire for test
interviews. Choose Current Version to use the version of the questionnaire that is being used
now, before activation generates a new version.

Active version. Choose Latest Version to use the latest activated version of the questionnaire for
live interviews. Choose Current Version to use the version of the questionnaire that is being used
now, before activation generates a new version.
Restart interviews using new version. Deselect this box if you want to restart interviews using
the version of the questionnaire with which the interviews were originally started, rather than
the latest version.
Default Routing Context. If you have more than one routing context defined, select the one you
want to use as the default. The activation process activates all routing contexts it finds, but only
sets one as the default.

Project - Roles

The Roles settings allow you to view the roles to which you currently belong and provide project
access to the listed roles. If you are a member of the DPMAdmin role, or you are assigned the
Can assign project feature, all roles are displayed in the roles list. Otherwise, only the roles
to which you belong are displayed.

Note: The DPMAdmin role never displays.


Figure 1-9
Project Roles settings

After activation, the selected roles are provided access to the project.

Activate Current Project - Disconnected settings

The Disconnected settings provide options for deploying a survey to one or more IBM® SPSS®
Data Collection Interviewer installations, without the need for an IBM® SPSS® Data Collection
Interviewer Server. The settings are applied when the questionnaire is opened in Interviewer.
Refer to the Interviewer User’s Guide for more information.

Note: The Disconnected node displays under the Activate tree when the Use disconnected
interviewing option is selected from the Project settings dialog.

Figure 1-10
Disconnected settings

You can configure settings for:


„ Routing
„ Display Fields
„ Data Entry
„ Deployment

Disconnected - Routing

The Routing settings allow you to select each routing’s activity and the player that will be used in
IBM® SPSS® Data Collection Interviewer.
Figure 1-11
Routing settings

Routing. Allows you to select from the available questionnaire routings.

Activity. Allows you to select an activity for the selected routing. Activities include:
„ Local Interview – when selected, the associated routing is flagged for live interviewing, and
Web and Phone are the only options available in the Player column.
„ Initial Data Entry – when selected, the associated routing is flagged as an initial data entry
project, which means that Interviewer cases are keyed in Initial Entry mode. When this
option is selected, IBM® SPSS® Data Collection Data Entry is the only option available in
the Player column.
„ Full Validation – when selected, the associated routing is flagged as a full validation project,
which means that all question responses require full validation. When this option is selected,
Data Entry is the only option available in the Player column.

Player. This column allows you to select which Interviewer player will be used for the associated
routing. The available options are dependent on what was selected in the Activity column. Options
include:
„ Data Entry – when selected, cases will be entered via the Data Entry Player when the
questionnaire is opened in Interviewer.
„ Web – when selected, cases will be entered in live interviewing mode when the questionnaire
is opened in Interviewer.
„ Phone – when selected, cases will be entered in live interviewing mode when the questionnaire
is opened in Interviewer.

Disconnected - Display Fields

The Display Fields settings allow you to select which fields will be visible when the questionnaire
is opened in IBM® SPSS® Data Collection Interviewer, and identify the field that will be used to
uniquely identify each case. Use the Move Up and Move Down buttons to change the field order.
Figure 1-12
Display Fields settings

Disconnected - Data Entry

The Data Entry settings allow you to configure data entry verification related options. The settings
are applied when the questionnaire is opened in IBM® SPSS® Data Collection Interviewer.
Figure 1-13
IBM SPSS Data Collection Data Entry settings

Identify unique surveys with this variable: Allows you to select the field that will be used to
uniquely identify each case. The RespondentID field is the default.

Require two user verification: When selected, two-user verification is required when validating
question responses. This means that the user who validates question responses must be different
from the user who performs initial data entry.

Disconnected - Deployment

The Deployment settings provide options for deploying the questionnaire to a deployment
package, or directly to the local IBM® SPSS® Data Collection Interviewer installation.

Figure 1-14
Deployment settings

Data file

File type: Allows you to select the file format for the activated or locally deployed questionnaire.
Options include:
„ Data Collection Data File (.ddf)
„ Statistics File (.sav)

Upon activation

Use a deployment package for disconnected machines: When selected, this option allows you
to create a deployment package for use with Interviewer. Deployment packages are typically
employed for Interviewer machines that are not connected to an IBM® SPSS® Data Collection
Interviewer Server. Click Browse... to select a location to save the deployment package.

Add to Interviewer on this machine: When selected, the questionnaire is automatically added to the
Interviewer project list on the current machine. This option is only available when Interviewer
is installed on the same machine.

Activate Current Project - Participants settings

The Participants settings provide options for specifying the project’s sample management
parameters.

Note: The Participants node displays under the Activate tree when the Use participants to specify
who to interview option is selected from the Project settings dialog.
Figure 1-15
Participants settings

You can configure settings for:


„ Upload
„ Database
„ Fields
„ Script
„ E-mail

Participants - Upload

The Upload settings allow you to specify the name and location of the file that contains the
participant records.

Figure 1-16
Upload settings

Use existing participants. When selected, the project uses sample management and participant
records that have already been uploaded.
Upload participants. When selected, the project uses sample management and you are provided the
option of uploading the participant records. Click Browse... to select a participant file location.

Note: You can only upload participant records if your IBM® SPSS® Data Collection Interviewer
Server Administration user profile is assigned the Can upload participants activity feature in
Interviewer Server Administration. Refer to the topic Assigning Users or Roles to Activity
Features in the Interviewer Server Administration User’s Guide for more information.

Participants - Database

The Database settings allow you to specify the server to which the participant records will be
uploaded and the database and table in which the records will be stored.

Figure 1-17
Database settings

Server name: The name of the server on which the participant database is located. The
drop-down list contains only those servers that are present in your current domain. If you want
to use a server in another domain, you must manually enter the domain name (for example,
<domain_name>\<server_name>).
Database name: The name of the participant database. The drop-down list displays the names of
databases that you have permission to use and that exist on the chosen server. If you are uploading
participant records, and you have permission to create databases, the New button is visible and
allows you to create a new database. Enter the database name when prompted.
Table name: The name of the table that contains the participant records for this project. The
drop-down list displays the names of tables in the chosen database. If you are uploading
participant records you can click New to create a new table. Enter the table name when prompted.

Participants - Fields

The Fields settings allow you to specify which fields will be present in the sample table and
provides various field configuration options.

Figure 1-18
Fields settings

With phone interviewing support. When selected, the project allows outbound telephone
interviewing.

The fields table names the fields that are (or will be) present in the sample table and displays
how they will be used.
Available: Fields to make available to the sample management script. Cancel any that are not used
by the sample management script (you cannot cancel required fields). Fields whose contents are
used only by the interview script become available during interviews and need not be selected here.

Authentication: Fields to use for authenticating inbound callers taking Web interviews. Choose
the fields you want to use and cancel any you do not. If you need to be able to select specific
participant records, you should select the Id field because this is a key to the database and is
guaranteed to contain a unique value for each record. If you authenticate on a field that may
contain non-unique values, the sample management system will select the first record whose value
in that column matches the values specified in the sample management script.

Field: Field names. You cannot modify these settings.

Default: Default values to be inserted into empty fields. Fields with no default values will have a
null value. You may specify your own defaults as long as the values are consistent with the fields’
data types and lengths.

Type: The type of data in the field. You cannot change the data type of required fields.

Length: The number of characters that can be held in a text field. You cannot change the length of
required fields.

Refer to Uploading participant records for information about the field parameters you can change.

Participants - Script

The Script settings displays the sample management script that the project will use for managing
participant records. In the selection box, choose the name of the script you want to download.
The standard choices are Basic (for Web interviewing) and Multimode (for Web and telephone
interviewing).

Figure 1-19
Script settings

Depending on your user permissions, and the sample management system configuration, you may
also be able to click Browse... to load a file other than those in the drop-down list.

Participants - E-mail

The E-mail settings provide options for setting up e-mail for participants in the sample file. For
example, at the start of a project you might send a message to everyone inviting them to participate
in the survey. Later, you might set up a second job that sends reminders to those respondents
who have not yet taken the survey.

Figure 1-20
E-mail settings

E-mail provides the following options:


„ Respondent selection. You can send e-mail to all respondents in a queue or set of queues.
Alternatively, send to a fixed number of respondents chosen either from the start of the queue
or at random.
„ Customized and personalized message texts. If you want to include information from a
respondent’s sample record, or the value of a project property, insert a placeholder in the
message text and the appropriate values will be substituted for the placeholders when the
messages are sent. This allows you to address respondents by name, and to include the URL
for starting the interview as part of the message.
„ Test messages. Check how the message will appear to respondents by sending a test message.
„ Project status. You can specify that the job should only be run if the project has a particular
status.

„ Activity recording. You can record which respondents received which e-mail by updating a
field in the sample record with a note of the time and date at which the e-mail was sent.
„ No repeat e-mail. If you rerun an e-mail job, a message is normally sent to everyone who is
selected to receive it. You may choose not to target people who received this message during
a previous run.
„ Delayed sending of e-mails. There is no direct link between setting up an e-mail job and
running it. All job specifications are saved and are run only when selected from a list of
e-mail jobs for the current project.
„ Maintenance facilities. Job specifications can be edited and deleted as required.
„ Dealing with e-mail problems. You can specify the e-mail address of a user who is to be
contacted if there are problems. This user is also the person who receives test e-mails.
Note: E-mail does not support SMTP servers that are set up to require authentication.

E-mail jobs

The e-mail jobs table allows you to define the following parameters:
Name: Enter a name for the e-mail job.

From: Enter the e-mail address that will display as the sender.

Reply Address: Enter your name (or e-mail address) or the name (or e-mail address) of the person
on whose behalf you are sending the message. Whatever you type here will appear as the sender’s
name in the recipient’s message box.

Priority: Select the e-mail priority from the drop-down list. You can choose a High, Medium,
or Low priority.

Project Status: Select the status that the project must have in order for the e-mail to be sent.
You can select more than one status.

E-mail text tab

The E-mail text tab allows you to configure the following settings:

Subject: Enter a subject for the e-mail message.

Send as: Select either HTML or Plain text. HTML provides formatting options (bold text, italics,
and so on), while Plain text does not.

Body: Displays suggested message text, complete with substitution markers for inserting
respondent or project specific information. You can accept this text as it is, modify it, or replace
it with different text.

Preview... Click to preview how the e-mail will display.

Substitutions... If you want to insert the value of a Sample Management field or project property
into the message text, click in the text at the point you want to make the insertion, then click
Substitutions... and select the field or property you want to insert from the dialog box that is
displayed. The property name appears in the message text enclosed in curly brackets and will be
replaced by the appropriate value when the e-mail message is sent.
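The curly-bracket substitution described above behaves like simple template replacement. The following sketch illustrates the idea only; the field names (FirstName, InterviewURL) and the replacement logic are illustrative assumptions, not the product's implementation:

```python
import re

def fill_placeholders(body, record):
    """Replace {FieldName} markers with values from a sample record
    or project property map; unknown markers are left untouched."""
    return re.sub(r"\{(\w+)\}",
                  lambda m: str(record.get(m.group(1), m.group(0))),
                  body)

# Hypothetical sample fields, for illustration only:
record = {"FirstName": "Pat", "InterviewURL": "http://example/i?id=42"}
message = fill_placeholders(
    "Dear {FirstName}, please start your interview at {InterviewURL}.",
    record)
```

This is why the placeholder name must match a sample field or project property exactly: an unrecognized name would simply remain in the sent message as literal text.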

E-mail address field from sample: Select the Sample Management field that contains recipient
e-mail addresses.

Write date and time that e-mail was sent to sample: When selected, the date and time at which the
message was sent is recorded as part of each recipient’s sample record.

Sample field in which to write: Select the name of the sample field from the drop-down list.

Note: Use the above two options if you want to prevent the same message from being sent to
respondents more than once.

If a respondent is sent the same message more than once, the date and time field information is
overwritten each time a new message is sent.

Send e-mails to this address when test e-mails are sent or when there are problems: Enter an e-mail
address that will be sent messages when test e-mails are sent or when problems are encountered.

Participants tab

Choose queues: Select the queues from which recipients can be selected.

Note: The list only shows queues that contained sample records at the time the mrDPMServer3
service was started. This may mean that the list may contain out-of-date information, for example,
because records have been moved into a queue that was previously empty or because a queue
that contained records is now empty. You may need to ask your IBM® SPSS® Data Collection
Interviewer Server administrator to stop and restart the mrDPMServer3 service in order to view an
up-to-date queue list.

To restrict your selections even further do the following:

Choose a field to filter participants: Select a field to be used for filtering participants. The list shows
all fields in the sample table except Queue.

Choose filter value(s): Select the values that the field must contain in order for respondents to be
selected. The list shows all values present in the selected field.

Send e-mail to:


„ All participants meeting the criteria specified above: When selected, e-mail is sent to all
participants that meet the previously specified criteria.
„ A maximum number of participants: When selected, e-mail is only sent to the number of
participants you specify.
„ First x: When selected, e-mail is sent to the first x respondents from the top of the list (of
those matching the selection criteria).
„ Randomly selected: When selected, e-mail is sent to random respondents (of those matching
the selection criteria).

This e-mail can be sent to the same participant more than once: When selected, e-mail is allowed to
be sent to the same participant more than once. This option is only available if you enabled the
Write date and time that e-mail was sent to sample field on the E-mail text tab.

Uploading participant records

This topic explains how to upload participant records using the Activate dialog box.

Note: You can only upload participant records if your IBM® SPSS® Data Collection Interviewer
Server Administration user profile is assigned the Can upload participants activity feature in
Interviewer Server Administration. Refer to the topic Assigning Users or Roles to Activity
Features in the Interviewer Server Administration User’s Guide for more information.

Using this method to load records is generally the same as loading records using the Participants
activity in Interviewer Server Administration, but there are some restrictions that apply to the
participants text file. Whereas the Participants activity accepts any field names in the participants
text file, and provides facilities for mapping fields in the file to the field names that the sample
management system requires in the database, the Activate component does not. This means that
all fields that are present in the text file, and that are required sample management fields, must
have the correct names in the text file. For example, the record Id must be stored in a field called
Id. If the field name is RecNum, you must change this in the header line of the text file before
you upload records.
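As an illustration, a header rename such as RecNum to Id can be scripted rather than edited by hand. This sketch assumes a comma-separated participants text file; it is not part of the product:

```python
import csv

def rename_header_field(in_path, out_path, old, new):
    """Rewrite a participants text file, renaming one header column
    (for example RecNum -> Id) to match the required field names."""
    with open(in_path, newline="") as src, \
         open(out_path, "w", newline="") as dst:
        reader = csv.reader(src)
        writer = csv.writer(dst)
        header = next(reader)
        writer.writerow([new if name == old else name for name in header])
        writer.writerows(reader)  # data rows are copied unchanged
```

Only the header line changes; the participant records themselves are written through as-is.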

The names of the required fields for Web interviews are as follows:
Active: Data type Long, null permitted, not a primary key, default value 0. Set to 1 while the
sample management functions are running; that is, while the record is in the ACTIVE queue.

Id: Data type Text(64), null not permitted, primary key, no default value. The sample record ID,
which uniquely identifies each sample record.

Queue: Data type Text(255), null permitted, not a primary key, default value FRESH. Names the
queue in which the record is currently held.

Serial: Data type Long, null permitted, not a primary key, default value 0. The unique serial
number that IBM® SPSS® Data Collection Interviewer Server assigns to each respondent’s case data.
Generally, this serial number is not the same as the sample record ID. When a respondent restarts
an interview, Interviewer Server uses the serial number to retrieve the respondent’s case data
record and to display the responses (stored in the case data record) that the respondent has
already given.

Test: Data type Long, null permitted, not a primary key, default value Null. Set to 1 if the record
is test data, or 0 if it is real data (also known as live data). This column is used by the Phone
activity to restrict the type of data that appears in phone reports. If the value is Null, the
Phone activity will treat the record as if it is both real and test data.

The required fields for telephone interviews are:


Column Data type Null Primary Default Notes
and length permitted key value
Active Long Yes No 0 Set to 1 while the sample
management functions are
running; that is, while the
record is in the ACTIVE
queue..
ActivityStartTime
DateTime Yes No Null The StartTime of latest
record in the history table
for a specific sample.
Date
AppointmentTime Yes No The time in UTC at which
the respondent asked to be
called.
Long
AppointmentTryCount Yes No 0 The number of calls made
to this record after an
appointment was set.
When sample records are
uploaded into the sample
table, a non-null default
value should be specified
otherwise errors will occur
during the upload.
Audit Text(2000) Yes No Records changes made
but see Notes to other fields (except
for more Comments) in the record.
information This field was new in
Interviewer Server 4.0.
In earlier versions, these
changes were stored in the
Comments field.
If you reuse a pre-v4.0
sample table that contains
a Comments field of
SQL type ntext, the
Audit field is created as
nvarchar(2000) instead.
This is due to an issue in
the Microsoft OLE DB
consumer templates that
prevents a table containing
two ntext columns.
CallOutcome Text(64) Yes No The call outcome (return)
code for the previous call
to this record.
144

Chapter 1

Column Data type Null Primary Default Notes


and length permitted key value
Long
CallRecordingsCount Yes No 0 for The number of call
telephone recordings for this record.
projects, Records loaded with this
otherwise field empty have this field
Null set to Null in the sample
table.
Comments Text(2000) Yes No Additional information
but see Notes about the participant.
for more Interviewers may update
information. this field when they call
the participant.
In pre-v4.0 sample tables,
the Comments field is
created as ntext(16). If
you reuse a pre-v4.0
sample table that contains
a Comments field of
type ntext, its data type
remains unchanged and
the Audit field is created
as nvarchar(2000) instead.
This is due to an issue in
the Microsoft OLE DB
consumer templates that
prevents a table containing
two ntext columns.
The standard multimode
sample management
scripts display records
with comments before
dialing so that the
interviewer can read
the comments before
talking to the participant.
ConnectCount Long Yes No 0 The number of times
that the number has been
connected. This field is
updated when a sample is
dialed and connected.
DayPart Text(128) Yes No Null Records the call count for
each specific day part.
For example, assume there
are two day parts named
aa, ab. The value for
this field will be aa1|ab2
(or aa1). This means the
sample was used to call
one time in aa time range
and two times in ab time
range. If the sample has
not yet been used, the
value of this field is null.
ExpirationTime DateTime Yes No 2099-12-31 23:59:00 Defines the participant
record expiration date
and time. For example, a
project may dictate that
participant records can
only be called within a
specific date range.
Expired records are not
available for dialing
(except for retrieving
appointments).
Id Text(64) No Yes The sample record ID that
uniquely identifies each
sample record.
InternalDialerFlags Text(64) Yes No NULL Used in conjunction with
a 3rd party dialer. In
full predictive mode, this
field should accompany
all numbers dialed
commands. It is set to
an initial value by the
CATI system for the first
dialer (for a different
dialer, the initial value can
be different).
IBM® SPSS® Data
Collection Dialer will
return a new value for
Internal Dialer Flag for
the number. After dialing,
this field will be updated
with the new value,
and this value will be
permanently set with the
sample record and passed
through for all subsequent
dialing attempts.
InterviewMode Text(64) Yes No How the record may be
used: set to Web for an
inbound self-completion
interview or Web CATI
for outbound telephone
interviewing. In projects
that allow a mix of
inbound and outbound
calling, the sample
management script should
check the value of this
field and select records
for telephone interviewing
accordingly.
The value in this field
can be changed in the
questionnaire script, or by
editing the record directly
in SQL. You might
want to do this towards
the end of a survey if
there are a number of
timed out or stopped
Web interviews and you
want your interviewers
to contact those people
to try to complete the
interviews.
NoAnswerCount Long Yes No 0 How many times this
sample has been called
and received NoAnswer.
This field is updated
when a sample is dialed
and returned with a call
outcome of NoAnswer.
PhoneNumber Text(64) Yes No Must contain a phone
number if the record is
to be used for telephone
interviewing.
If the project uses
an autodialer, phone
numbers that start with
a + sign will have that
character replaced by the
InternationalAccessCode
defined in DPM. + signs
preceded by white space
or other characters are not
replaced.
If the project allows
inbound calling, you can
add a question to the script
that asks respondents to
enter contact numbers,
and then update the
sample records with this
information.
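The leading-plus rule for autodialer projects can be pictured as a small helper. This Python sketch is illustrative only; the default access code "00" is an assumed example, since the real value is the InternationalAccessCode defined in DPM:

```python
def normalize_phone_number(number, international_access_code="00"):
    """Replace a leading '+' with the international access code.

    Only a '+' in the very first position is replaced; a '+' preceded by
    white space or other characters is left untouched, matching the
    behaviour described for the PhoneNumber field.
    """
    if number.startswith("+"):
        return international_access_code + number[1:]
    return number
```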
PreviousInterviewerID Text(64) Yes No The name of the
interviewer who made
the previous call to
this participant. This
allows appointments
to be returned to the
previous interviewer if the
current time is before the
AppointmentMarginAfter
interval has passed.
Appointments that are not
kept within this period
may be passed to any
interviewer.
When interviews are
reviewed, this field is
updated with the name of
the reviewer.
PreviousQueue Text(64) Yes No The name of the queue
in which the record was
previously held. When
records are displayed
for interviewers, the
record’s current queue
is always shown as
ACTIVE because the
record has been selected
for interviewing.
Displaying the value of
PreviousQueue can be
useful to interviewers
as it may provide
additional information
about the record’s calling
history. For example, if
PreviousQueue is FRESH,
the interviewer knows
the record has not been
called before, whereas
if PreviousQueue is
APPOINTMENT, he/she
knows that the respondent
has already been contacted
and has asked to be called
back to be interviewed.
Queue Text(64) Yes No FRESH Names the queue in which
the record is currently
held.
When replicate identifiers
are defined in the queue
field for specific records,
those records can
then be used to create
sample/participant record
subsets.
RecallTime Date Yes No The time in UTC that
was set as the callback
time for appointments that
are set automatically by
the sample management
script.
RequiresManualDial Long Yes No Indicates that the record
must be manually dialed.
The sample management
script will set AutoDial=0
for these records.
The feature will not work
if RequiresManualDial
is not defined in the
participants table.
ReturnTime Date Yes No The time at which the
record was returned to
sample management. This
allows you to specify the
amount of time that must
elapse between repeat
calls to records whose
interviews timed out or
were stopped.
Serial Long Yes No 0 The unique serial
number that Interviewer
Server assigns to each
respondent’s case data.
Generally, this serial
number is not the same
as the sample record
ID. When a respondent
restarts an interview,
Interviewer Server uses
the serial number to
retrieve the respondent’s
case data record and to
display the responses
(stored in the case data
record) that the respondent
has already given.
Screener Text(64) Yes No Null Identifies which
respondents are the
suitable candidates for the
current survey. Screener
questions are designed to
filter respondents. If a
respondent answers do not
meet the Screener criteria,
the respondent is not
allowed to continue the
survey, and the Screener
field is recorded as Failed.
If respondent answers
meet the Screener criteria,
they are allowed to
continue the survey, and
the Screener field is
recorded as Passed.
This field can be set using
the following IOM script
in routing (it is the data
source for the Incidence
report).
Passed Screener:

IOM.SampleRecord.Item["Screener"].Value = "Passed"

Failed Screener:

IOM.SampleRecord.Item["Screener"].Value = "Failed"

IOM.Terminate(Signals.sigFailedScreener, True)

In order to accurately
calculate the project
incidence, the Screener
field is added to the
sample table. The field
is updated during the
survey with three values
– Null, Passed, and
Failed. The sum of
Passed is the incidence
numerator; the sum of
Passed and Failed is the
incidence denominator.
The incidence report is
generated using TOM
based on the data source,
sample table, and sample
history table.
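As a worked example of the incidence calculation described above, the following Python sketch (illustrative only, not product code) computes incidence from a list of Screener field values, ignoring Null records:

```python
def incidence(screener_values):
    """Compute project incidence from Screener field values.

    The numerator is the number of 'Passed' records; the denominator is
    Passed + Failed. Null (None) records are excluded from both. Returns
    None when no record has yet passed or failed screening.
    """
    passed = sum(1 for v in screener_values if v == "Passed")
    failed = sum(1 for v in screener_values if v == "Failed")
    total = passed + failed
    return passed / total if total else None
```

For example, two Passed and two Failed records (with any number of Null records) give an incidence of 0.5.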
SortId Text(64) Yes No Null A random value that
can be used for sorting
records prior to selection.
(Appointments and recalls
are not affected by this
property as they are
always sorted in ascending
date/time order.) The
Participants activity can
initialize this field with
a random value when
uploading records. If
records are uploaded in
batches, each record in the
sample table receives a
new random number, not
just those being uploaded
in the current batch. See
“Naming the Database
Server, Sample Database,
and Sample Table” in the
Interviewer Server User’s
Guide for details.
Test Long Yes No Null Set to 1 if the record is
test data, or 0 if it is real
data (also known as live
data). This column is used
by the Phone activity to
restrict the type of data
that appears in phone
reports. If the value is
Null, the Phone activity
will treat the record as if it
is both real and test data.
TimeZone Long Yes No The respondent’s
timezone. This is
used in the setting
of appointments to
ensure that any time
differences between
the respondent’s and
interviewer’s locations are
taken into account when
the record is presented
for recalling. For more
information, see the topic
Time Zone Management
on p. 1032.
TrunkGroup Long or Yes No NULL If sample records are used
Text(64) in telephone interviewing
projects, you can use
the TrunkGroup field to
specify which trunk group
of the dialer will be used
for dialing the sample
record. If you want the
dialer to automatically
select the trunk group,
the field should be set to
NULL or empty.
TryCount Long Yes No 0 The number of calls
made to this record.
When sample records are
uploaded into the sample
table, a non-null default
value should be specified;
otherwise errors will occur
during the upload.
UserId Text(64) Yes No NULL The UserId of the latest record
in the history table for a
specific sample.

The data types shown above are those that the IBM® SPSS® Data Collection Data Model uses.
When the table is created in the sample database, the Activate component converts these data
types into the corresponding data types in the database application you are using. (For further
details about the mapping process, open the IBM® SPSS® Data Collection Developer Library
documentation and use the Search facility to locate the topic entitled “Data Type Mapping for
Columns in Sample Tables”.) You can check the column data types by opening the table in your
database application and can change the data types if they are not exactly what you want. Refer to
your database application’s documentation for information on changing data types.
Your participants text file does not need to contain information for all the required columns, as
many of the columns are used only internally by the sample management system and will only
contain information once a participant has been called. As a minimum, you must supply a value in
your text file for the Id column. For telephone interviewing projects, you should provide a value
for the PhoneNumber column, and if you have participants in more than one time zone you might
want to provide a value for the TimeZone column.
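A minimal participants text file therefore needs little more than an Id column. The sketch below (Python, for illustration only; the record values are invented examples) writes such a file with the three columns mentioned above:

```python
import csv

# Invented example records: only Id is mandatory; PhoneNumber and
# TimeZone are recommended for telephone interviewing projects.
records = [
    {"Id": "1001", "PhoneNumber": "+44 12 3456 7890", "TimeZone": "85"},
    {"Id": "1002", "PhoneNumber": "+1 555 0100", "TimeZone": "90"},
]

# Write a comma-delimited participants file with a header row.
with open("participants.txt", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["Id", "PhoneNumber", "TimeZone"])
    writer.writeheader()
    writer.writerows(records)
```

The comma delimiter matches the default shown in the Specify Participants dialog box; any other single character could be used as long as it is selected there.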

Once you have told Activate which text file to use, it scans the file and decides how it will load
the data into the sample table. It does this by comparing the field names in the participants text
file with the column names that need to exist in the sample table. If there are additional fields in
the text file, new columns will be created in the sample table to hold this data. The fields table
displays a summary of what it will do and lets you change it if this is necessary. Typically, you
might change the fields that are used for authenticating inbound callers.
The columns in this display are as follows.
Setting Description
Available Fields to make available to the sample management
script. Cancel any that are not used by the sample
management script (you cannot cancel required
fields).
Fields whose contents are used only by the interview
script become available during interviews and need
not be selected here.
Authentication Fields to use for authenticating inbound callers
taking Web interviews. Choose the fields you want
to use and cancel any you do not.
If you need to be able to select specific participant
records, you should select the Id field because this
is a key to the database and is guaranteed to contain
a unique value for each record.
If you authenticate on a field that may contain
non-unique values, the sample management system
will select the first record whose value in that
column matches the values specified in the sample
management script.
Field Field names. You cannot modify these settings.
Default Default values to be inserted into empty fields.
Fields with no default values will have a null value.
You may specify your own defaults as long as the
values are consistent with the fields’ data types and
lengths.
Type The type of data in the field. You cannot change the
data type of required fields.
Length The number of characters that can be held in a text
field. You cannot change the length of required
fields.
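The first-record behaviour for non-unique authentication fields can be pictured as follows. This Python sketch is illustrative only (the records are invented, and the real lookup is performed by the sample management system):

```python
def authenticate(records, field, value):
    """Return the first record whose field matches the given value.

    This mimics the behaviour described above: when an authentication
    field contains non-unique values, the first matching record is
    selected, which is why authenticating on the unique Id field is
    recommended for selecting specific participant records.
    """
    for record in records:
        if record.get(field) == value:
            return record
    return None
```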

Uploading participant records

E Select Upload (located under the Participants node).

E Select Upload participants and click Browse....

This opens the Specify Participants dialog box.


Figure 1-21
Specify Participants dialog box

E In Delimiter, select the character that separates the fields in each record. The default is a comma. If
you pick a different character, this becomes the default the next time you activate a project.

E Click Browse... and select the .txt or .csv file you want to upload.

E The upload process automatically randomizes records as it loads them. Clear Re-randomize all
participant records during import if you want to cancel the randomization process.

E Click OK to close the dialog box.

Activate checks the participants text file and displays the fields in the fields table.

E Make whatever changes are appropriate in this table.

E Use the Fields and Script settings to select the sample database and table you want to use, and the
sample management script that will control access to the participant records.

Activate Current Project - Telephone settings

The Telephone settings provide options specific to projects used for telephone interviewing
(autodialer settings, calling rules, and so on).

Note: The Telephone node displays under the Activate tree when the Use telephone interviewing
option is selected from the Project settings dialog.

Figure 1-22
Telephone settings

You can configure settings for:


„ Interviewing
„ Calling Rules
„ Dialing
Note: The Dialing options are only available when a dialer is installed on the server.

Telephone - Interviewing settings

The Interviewing settings allow you to configure options for a telephone interviewing project.

Figure 1-23
Interviewing settings

You can configure settings for:


„ Display Fields
„ Call Outcomes
„ Introduction
„ Interviewer
„ Review

Interviewing - Display Fields

The Display Fields settings allow you to specify which sample management fields are required
in the participant records, which fields should be displayed on the interview screen, and which
of the displayed fields interviewers can edit.

When an autodialer connects a telephone interviewer to a participant (or, for projects that do not
use an autodialer, when the interviewer requests a number to call) the interviewing program
displays a page showing information about the participant and a list of possible call outcomes.
Some items of information are always displayed whereas other items are displayed only if selected
by the supervisor. The supervisor can change the selection of optional fields during the course
of the project. Supervisors can also specify for each displayed field whether or not interviewers
can change the field’s contents. For example, if the Comments field is displayed you might
want interviewers to be able to update this field with information that might be useful to other
interviewers who are about to speak to this participant.

Figure 1-24
Display Fields settings

Setting Description
Label Shows the field name that will be displayed on the
interviewing screen.
Required Shows which fields must be present in each
participant record. You cannot change the settings
in this column for any of the standard fields that
must be present in all telephone databases, but you
can change the settings for other fields.
Show Determines which fields will be displayed on the
interviewing screen. Of the standard fields, the ID,
Queue, Name, Phone Number, Comments, and
Previous Queue fields are always displayed, so you
cannot clear the Show check box for these fields.
The settings in this column also define the fields
that interviewers can search when searching for a
specific contact. To allow interviewers to search for
specific contacts, you must select the Show Specific
Contact option on the Interviewing - Interviewer
dialog.
Can Edit During Survey Determines which of the displayed fields
interviewers can edit. For the standard fields, you
can only change the settings for Return Time,
Interview Mode, and Call Outcome.
Can Tabulate Specifies which fields should be available to the
Phone activity. The only standard field whose
setting you can change is Try Count. If your sample
data includes a Segment field and you want to run
the reports in the Phone activity that can display
data about segments, make sure that you select
Can Tabulate for your Segment field. For more
information about segments, search the Phone
activity online help for the topic “About Segments”.

Interviewing - Call Outcome

The Call Outcome settings allow you to set the call outcome options.

Interviewers working on a telephone interviewing project are provided with a list of call outcome
codes from which they must select the outcome of each call that they make. IBM® SPSS® Data
Collection Interviewer Server comes with a default list of call outcome codes that cover most
requirements, so you should never need to build a call outcome list from scratch.
Note: For projects that use an autodialer, Interviewer Server automatically maps the status codes
returned by the autodialer to one of the call outcomes.

Figure 1-25
Call Outcome settings

Setting Description
Code The call outcome code number.
Name The call outcome name.
Text The call outcome description. This is the text that the interviewer sees.
Show During Interview Specify which codes must be available while interviews are in
progress; for example, an Abandoned Interview code can be selected
if a participant starts an interview but then refuses to complete it.
Show Appointment Page Specify which codes should prompt the interviewer to arrange a
callback appointment with the participant. When interviewers select
one of these outcomes they will be prompted to enter a callback date
and time.
Cancel Code Specify which code should be used for canceled calls. Canceled
calls occur when an interviewer is presented with a number to call
manually, but clicks Cancel Contact rather than making the call. This
returns the participant record to the Sample Management system with
the appropriate code so that the record can be returned to the queue
from which it was selected.
Always Hidden Select which codes are always hidden from interviewers. Typically,
these are call outcomes that are chosen automatically by Interviewer
Server.

Interviewing - Introduction

The Introduction settings allow you to define the introductory script that interviewers should
read to each participant.

When an autodialer connects a telephone interviewer to a participant (or, for projects that do not
use an autodialer, when the interviewer requests a number to call), interviewers are provided
with an introductory text that they can read to the participant to explain the reason for the call.
IBM® SPSS® Data Collection Interviewer Server comes with a default text that is suitable for all
surveys, but you can define your own text that is more specific to the current project.
Figure 1-26
Introduction settings

Introduction to survey: Displays the default introductory text. You can manually replace the default
text that Interviewer Server provides with your organization’s default text.

Substitution fields: Lists fields available for insertion into the introductory text. The available
fields typically reference a sample field or the interviewer’s name.

Show valid substitution fields: When selected, only valid substitution fields display in the
Substitution fields list.

Inserting a field into the introductory text

E Click in the introductory text at the point you want to make the insertion.

E Select a field from the Substitution fields list.



E Click Insert.

The field name appears in the introductory text enclosed in curly brackets and will be replaced by
the appropriate value when the introductory message displays for the interviewers.
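The curly-bracket substitution works like simple template expansion. The Python sketch below is for illustration only (Interviewer Server performs the real substitution, and the field names used here are invented examples):

```python
import re

def expand_introduction(template, values):
    """Replace {FieldName} placeholders with values from a lookup.

    Placeholders with no matching value are left untouched, so the
    interviewer can see that a field was not resolved.
    """
    def replace(match):
        name = match.group(1)
        return str(values.get(name, match.group(0)))
    return re.sub(r"\{(\w+)\}", replace, template)
```

For example, a template containing {Name} would be expanded with the value of the Name sample field before the introductory message is displayed.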

Interviewing - Interviewer

The Interviewer settings allow you to specify interviewer dialing parameters.


Figure 1-27
Interviewer settings

Dialing option: Select whether an autodialer is used to dial phone numbers, whether interviewers
must dial numbers manually, or whether interviewers can use modems to dial numbers.
„ IBM SPSS Dialer (Extension) – Power dial for the interviewer screen. In extension dialing, the
autodialer dials participants only when interviewers click the Start Dialing button in the
Phone Participants activity. This mode can result in longer wait times for interviewers, but
is unlikely to result in silent calls.
„ IBM SPSS Dialer (Group) – Dial for the interview in a group (with optional predictive dialing). In
group/predictive dialing, the autodialer dials participants before interviewers are available to
answer the connected calls. That is, the software predicts when interviewers will click the
Start Dialing button. This mode can deliver the highest interviewer productivity, but might
result in silent calls.
„ Modem – Show Dial Contact button on the Interviewer screen. Allow interviewers to use modems
to dial phone numbers. The Dial Contact button on the main screen of the Phone Participants
activity will then be usable. When interviewers click that button, the phone number displayed
in the main screen will be dialed automatically by the modem. Note that the modem option
will work only for phone numbers that are formatted as follows:
+Country/RegionCode (Area/CityCode) SubscriberNumber
For example, 44 12 3456 7890 for a subscriber in the United Kingdom. In addition, a separate
software installation is required on each telephone interviewer station that will use the
modem option. For more information, search the IBM® SPSS® Data Collection Interviewer
Server Installation Instructions for the topic “Things You Must Do on Local Machines”. If
a station has access to more than one modem, you can specify which one to use—for more
information, use the search function in the IBM® SPSS® Data Collection Developer Library
documentation to search for the text “Settings for the Phone Participants Activity” and in the
search results open the topic with that title.
If you select the option to use modems, the project cannot use an autodialer. Interviewers will
still be able to dial numbers manually if they have access to a telephone keypad. The modem
option works only on Microsoft Windows computers.
„ Manual – Interviewer dials numbers manually. Interviewers must manually dial the phone
number displayed on the main screen of the Phone Participants activity.

Note: If you select the option to dial phone numbers manually, the project cannot use an autodialer
or modems.

Interviewer to select qualifications: When selected, interviewers will be prompted to select their
qualifications at the start of each session.

Interviewer qualifications control which sample records are allocated to each interviewer and are a
good way of making the best use of your interviewers’ skills. There are two ways of assigning
qualifications to interviewers, which can be used together or separately. Administrators can set
an interviewer’s qualifications when they create IBM® SPSS® Data Collection Interviewer
Server Administration accounts, or interviewers may select their own qualifications at the start of
each interviewing session or during a session.

Depending on how your company uses qualifications, it may be appropriate for administrators to
set some qualifications and for interviewers to be allowed to select others. For example, language
or refusal-conversion qualifications could be set by administrators, while location qualifications
that specify which region an interviewer should call could be set and changed by interviewers
themselves.

Select the qualifications that interviewers may select themselves. Selecting the option but no
qualifications is the same as not selecting the option at all.

Note: Take care when choosing which qualifications interviewers may select, as it is possible
to allow interviewers to select qualifications they do not have. For example, suppose the
administrator has created Sam’s account with a French language qualification. If you allow
interviewers to set the language qualification, Sam will be presented with the full list of languages
and will be able to choose any combination of languages from that list.

Show Specific Contact button on the Interviewer screen: When selected, interviewers can retrieve
specific participants from the sample database and the Specific Contact button on the main screen
of the Phone Participants activity is enabled. When interviewers click the button, they are
presented with a dialog box from which they can select whether to retrieve their last contact or
search for a contact. If they choose to search for a contact, they can select the field to search and
specify the value to search for. The choice of fields to search is determined by the settings in the
Show column in Interviewing - Display Fields.

Automatically select next contact: When selected, the interviewing program will automatically
select an interviewer’s next contact; when not selected, interviewers must click a button to
request their next contact.

Interviewers working on projects with this option set will see a check box labeled “Auto contact
selection” just above the list of call outcomes, so it is still possible for some interviewers to work
in fully manual mode if you wish.

Some projects may use a combination of automated and modem or manual dialing. You can still
select the “automatically select next contact” option for these projects, but interviewers who are
using the dialer will need to cancel the “Auto contact selection” check box on the interviewing
screen; otherwise they will not be able to stop the dialer making calls when they reach the end of
their shift or need to take a break.

Enable monitoring/recording: When selected, supervisors can monitor and record interviews
while they are in progress.

Interviewer must get approval for monitoring/recording: When selected, interviewers must obtain
consent for monitoring and recording from each participant.

Depending on your local laws or your organization’s policy, you can configure monitoring for
three different scenarios:
Scenario To Specify This
Monitoring and recording are not allowed for this Clear the Enable monitoring/recording check box.
project
Monitoring and recording are always allowed for Select the Enable monitoring/recording check box
this project and clear the The interviewer must get approval for
monitoring/recording check box.
Monitoring and recording are allowed only if the Select the Enable monitoring/recording and The
participant gives his or her consent interviewer must get approval for monitoring/recording
check boxes.

If you have selected The interviewer must get approval for monitoring/recording, Yes and No options
will appear on the main screen of the Phone Participants activity whenever interviewers retrieve a
contact. As part of their introductory script, interviewers must ask each participant if they give
their consent for monitoring and recording, and record the participant’s answer by selecting either
Yes or No. The three options underneath The interviewer must get approval for monitoring/recording
determine the default settings of the Yes and No options as described below:
Option Description
Interviewer must manually select an option The interviewer must always select either Yes or No.
Default setting is ‘monitoring/recording prohibited’ The No option is selected by default. The
interviewer can change the selection.
Default setting is ‘monitoring/recording allowed’ The Yes option is selected by default. The
interviewer can change the selection.

Allow Interviewer to start an interview without a dialer connection to a respondent: When selected,
interviewers are allowed to start interviews without a dialer connection to a respondent.

Interviewing - Review

The Review settings allow you to specify whether interviewers can review the participant’s
responses after the interview has completed.
Figure 1-28
Review settings

Review interview options: The drop-down list provides the following options:

Option Description
No Review The interviewer cannot review interviews.
Review Interview The interviewer can review the whole interview.
Review Open-ends The interviewer can review open-ended (text) responses only.

If you have selected either Review Interview or Review Open-ends, you can then choose between
the following two settings.

Show the review button on the Interviewer screen: When selected, the interviewer must click the
Review Completed Interview button in the Phone Participants activity in order to start the review.

Interviewer must review: When selected, the review starts automatically when the interview
finishes.

Telephone - Calling Rules settings

The Calling Rules settings allow you to define specific calling rules for a telephone interviewing
project.
Figure 1-29
Calling Rules settings

You can configure settings for:


„ Parameters
„ Ordering
„ Call Times
„ Appointments
„ Overrides

Calling Rules - Parameters

The Parameters settings allow you to specify the amount of time to wait before re-dialing numbers
that are busy, unanswered, or answered by an answering machine.

You set these parameters at the start of the project, and can change them throughout the
interviewing period to match the current requirements of the survey. For example, if it is the last
day of the survey and you are running low on new participants, you might want to increase the
maximum number of times that numbers may be called. You might also wish to reduce the elapse
times for automatically set appointments so that numbers with callbacks become available for
recall more quickly.
Figure 1-30
Calling Rules settings

Time parameters: The table allows you to define the amount of time to wait before attempting to
re-dial samples that meet specific criteria (no answer, busy, and so on). Enter appropriate values
(in minutes) for each call category.

Give preference to the interviewer who arranged the appointment: When selected, the interviewer
who arranged the appointment is given preference over other available interviewers (as it applies
to scheduling when samples can be retried). If the project uses group/predictive autodialing,
the interviewer will not be connected automatically to the participant who has an appointment.
Instead, the participant’s details are displayed on the interviewer’s screen, and the interviewer
must then click the Start Dialing button to dial the participant’s phone number.
„ Before an appointment, by the arranger only: The number of minutes before a scheduled
appointment that the interviewer, who arranged the appointment, may attempt to retry the
sample.
„ After an appointment, by any interviewer: The number of minutes after a scheduled appointment
that any available interviewer may attempt to retry the sample.

No preference for appointments: When selected, any available interviewer is allowed to retry the
sample, regardless of who arranged the appointment. This is the default setting.
„ Before an appointment, by any interviewer: The number of minutes before a scheduled
appointment that any available interviewer may attempt to retry the sample.

Before a recall: The number of minutes before the recall time that a number with an automatic
appointment can be called. The default is ten minutes.
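Taken together, the time parameters define the earliest moment at which a record may be retried. The Python sketch below is illustrative only; the outcome names and wait times are assumed examples, since the actual values are whatever is entered in the Parameters table:

```python
from datetime import datetime, timedelta

# Assumed example wait times in minutes, one per call outcome; the
# real values come from the Time parameters table in this dialog.
WAIT_MINUTES = {"NoAnswer": 60, "Busy": 15, "AnswerMachine": 120}

def earliest_retry(last_call_time, outcome, wait_minutes=WAIT_MINUTES):
    """Return the earliest time at which the record may be dialed again.

    Outcomes with no configured wait time impose no delay in this sketch.
    """
    return last_call_time + timedelta(minutes=wait_minutes.get(outcome, 0))
```

For example, with the assumed values above, a record whose previous call returned Busy at 12:00 would not be offered for re-dialing before 12:15.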

Time zones: The time zones in which participants are located. The values that you enter in this
field must be the indexes of the time zones in the list of time zones stored in the registry. If more
than one time zone is specified, the numbers must be separated by semicolons. If this property is
blank, IBM® SPSS® Data Collection Interviewer Server will ignore time zone and calling times
when selecting records for interviewers to call.
Time Zone Name Displayed As Index Value
Greenwich Standard Time (GMT) Casablanca, Monrovia 90
GMT Standard Time (GMT) Greenwich Mean Time: Dublin, Edinburgh, Lisbon, London 85
Morocco Standard Time (GMT) Casablanca -2147483571
W. Europe Standard Time (GMT+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna 110
Central Europe Standard Time (GMT+01:00) Belgrade, Bratislava, Budapest, Ljubljana, Prague 95
Romance Standard Time (GMT+01:00) Brussels, Copenhagen, Madrid, Paris 105
Central European Standard Time (GMT+01:00) Sarajevo, Skopje, Warsaw, Zagreb 100
W. Central Africa Standard Time (GMT+01:00) West Central Africa 113
GTB Standard Time (GMT+02:00) Athens, Istanbul, Minsk 130
E. Europe Standard Time (GMT+02:00) Bucharest 115
Egypt Standard Time (GMT+02:00) Cairo 120
South Africa Standard Time (GMT+02:00) Harare, Pretoria 140
FLE Standard Time (GMT+02:00) Helsinki, Kyiv, Riga, Sofia, Tallinn, Vilnius 125
Israel Standard Time (GMT+02:00) Jerusalem 135
Jordan Standard Time (GMT+02:00) Amman -2147483582
Middle East Standard Time (GMT+02:00) Beirut -2147483583
Namibia Standard Time (GMT+02:00) Windhoek -2147483578
Arabic Standard Time (GMT+03:00) Baghdad 158
Arab Standard Time (GMT+03:00) Kuwait, Riyadh 150
Russian Standard Time (GMT+03:00) Moscow, St. Petersburg, Volgograd 145
E. Africa Standard Time (GMT+03:00) Nairobi 155
Georgian Standard Time (GMT+03:00) Tbilisi -2147483577
Iran Standard Time (GMT+03:30) Tehran 160
Arabian Standard Time (GMT+04:00) Abu Dhabi, Muscat 165
Caucasus Standard Time (GMT+04:00) Baku, Tbilisi, Yerevan 170
Azerbaijan Standard Time (GMT+04:00) Baku -2147483584
Mauritius Standard Time (GMT+04:00) Port Louis -2147483569
Armenian Standard Time (GMT+04:00) Yerevan -2147483574
Afghanistan Standard Time (GMT+04:30) Kabul 175
Ekaterinburg Standard Time (GMT+05:00) Ekaterinburg 180
West Asia Standard Time (GMT+05:00) Islamabad, Karachi, Tashkent 185
Pakistan Standard Time (GMT+05:00) Islamabad, Karachi -2147483570
India Standard Time (GMT+05:30) Chennai, Kolkata, Mumbai, New Delhi 190
Nepal Standard Time (GMT+05:45) Kathmandu 193
N. Central Asia Standard Time (GMT+06:00) Almaty, Novosibirsk 201
Central Asia Standard Time (GMT+06:00) Astana, Dhaka 195
Sri Lanka Standard Time (GMT+06:00) Sri Jayawardenepura 200
Myanmar Standard Time (GMT+06:30) Rangoon 203
SE Asia Standard Time (GMT+07:00) Bangkok, Hanoi, Jakarta 205
North Asia Standard Time (GMT+07:00) Krasnoyarsk 207
China Standard Time (GMT+08:00) Beijing, Chongqing, Hong Kong, Urumqi 210
North Asia East Standard Time (GMT+08:00) Irkutsk, Ulaan Bataar 227
Singapore Standard Time (GMT+08:00) Kuala Lumpur, Singapore 215
W. Australia Standard Time (GMT+08:00) Perth 225
Taipei Standard Time (GMT+08:00) Taipei 220
Tokyo Standard Time (GMT+09:00) Osaka, Sapporo, Tokyo 235
Korea Standard Time (GMT+09:00) Seoul 230
Yakutsk Standard Time (GMT+09:00) Yakutsk 240
Cen. Australia Standard Time (GMT+09:30) Adelaide 250
AUS Central Standard Time (GMT+09:30) Darwin 245
E. Australia Standard Time (GMT+10:00) Brisbane 260
AUS Eastern Standard Time (GMT+10:00) Canberra, Melbourne, Sydney 255
West Pacific Standard Time (GMT+10:00) Guam, Port Moresby 275
Tasmania Standard Time (GMT+10:00) Hobart 265
Vladivostok Standard Time (GMT+10:00) Vladivostok 270
Central Pacific Standard Time (GMT+11:00) Magadan, Solomon Is., New Caledonia 280
New Zealand Standard Time (GMT+12:00) Auckland, Wellington 290
Fiji Standard Time (GMT+12:00) Fiji, Kamchatka, Marshall Is. 285
Tonga Standard Time (GMT+13:00) Nuku’alofa 300
Azores Standard Time (GMT-01:00) Azores 80
Cape Verde Standard Time (GMT-01:00) Cape Verde Is. 83
Mid-Atlantic Standard Time (GMT-02:00) Mid-Atlantic 75
Argentina Standard Time (GMT-03:00) Buenos Aires -2147483572
E. South America Standard Time (GMT-03:00) Brasilia 65
SA Eastern Standard Time (GMT-03:00) Buenos Aires, Georgetown 70
Greenland Standard Time (GMT-03:00) Greenland 73
Montevideo Standard Time (GMT-03:00) Montevideo -2147483575
Newfoundland Standard Time (GMT-03:30) Newfoundland 60
Atlantic Standard Time (GMT-04:00) Atlantic Time (Canada) 50
Central Brazilian Standard Time (GMT-04:00) Manaus -2147483576
SA Western Standard Time (GMT-04:00) Caracas, La Paz 55
Pacific SA Standard Time (GMT-04:00) Santiago 56
Venezuela Standard Time (GMT-04:30) Caracas -2147483573
SA Pacific Standard Time (GMT-05:00) Bogota, Lima, Quito 45
Eastern Standard Time (GMT-05:00) Eastern Time (US and Canada) 35
US Eastern Standard Time (GMT-05:00) Indiana (East) 40
Central America Standard Time (GMT-06:00) Central America 33
Central Standard Time (GMT-06:00) Central Time (US, Canada) 20
Central Standard Time (Mexico) (GMT-06:00) Guadalajara, Mexico City, Monterrey -2147483581
Mexico Standard Time (GMT-06:00) Guadalajara, Mexico City, Monterrey - Old 30
Canada Central Standard Time (GMT-06:00) Saskatchewan 25
US Mountain Standard Time (GMT-07:00) Arizona 15
Mexico Standard Time 2 (GMT-07:00) Chihuahua, La Paz, Mazatlan - Old 13
Mountain Standard Time (GMT-07:00) Mountain Time (US, Canada) 10
Mountain Standard Time (Mexico) (GMT-07:00) Chihuahua, La Paz, Mazatlan -2147483580
Pacific Standard Time (GMT-08:00) Pacific Time (US, Canada), Tijuana 4
Pacific Standard Time (Mexico) (GMT-08:00) Tijuana, Baja California -2147483579
Alaskan Standard Time (GMT-09:00) Alaska 3
Hawaiian Standard Time (GMT-10:00) Hawaii 2
Samoa Standard Time (GMT-11:00) Midway Island, Samoa 1
Dateline Standard Time (GMT-12:00) International Date Line West 0
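As a sketch of how the semicolon-separated Time zones property could be composed from, and split back into, the index values listed above (the helper names are hypothetical, not a product API):

```python
def make_timezones_property(indexes):
    """Build the semicolon-separated index list expected by the
    Time zones field, using index values from the table above."""
    return ";".join(str(i) for i in indexes)

def parse_timezones_property(value):
    """Return the list of time zone indexes, or [] when the property is
    blank (in which case time zones and calling times are ignored when
    selecting records)."""
    return [int(p) for p in value.split(";")] if value.strip() else []
```

For example, a project calling the four continental US zones could use the property value "35;20;10;4".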

Filter interviewers based on their qualifications: When selected, calls are assigned to interviewers
based on interviewer qualifications. For example, if a participant’s native language is Spanish,
the call is assigned to a Spanish-speaking interviewer.

Calling Rules - Ordering

The Ordering settings allow you to specify the order in which records are retrieved from each
individual queue. For example, you may want to first dial records with a higher number of
attempts when scanning the Recall queue.
Figure 1-31
Ordering settings

Ordering records: The table defines the ordering of records, by field, in each queue. Select the
field order from the Field drop-down menus.

Prioritize recalls over fresh. Allows you to specify the frequency with which recalls to busy and
other unanswered calls take priority over calls to new numbers. Specify the percentage of calls
for which the RECALL queue is checked before the FRESH queue. For example, specifying a value
of 25% would result in the RECALL queue being checked before the FRESH queue for one in four
calls. The default value is 90%.
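The one-in-N behavior described above can be illustrated with a small sketch (illustrative only, not the product's actual scheduler):

```python
import random

def queue_scan_order(prioritize_recalls_pct, rng=random):
    """Sketch of 'Prioritize recalls over fresh': with the given
    probability (a percentage), the RECALL queue is checked before
    the FRESH queue; otherwise FRESH is checked first."""
    if rng.random() * 100 < prioritize_recalls_pct:
        return ["RECALL", "FRESH"]
    return ["FRESH", "RECALL"]
```

With a setting of 25, roughly one call in four would scan RECALL first; at the default of 90, nine in ten would.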

Order records within each queue: The table displays the current field order for each queue and
allows you to define which sample fields will be used for ordering the queue records.
Setting Description
Queue The queue name. The names are not modifiable.
Direction The drop-down list provides all sample fields available to its relative queue. Select an appropriate field from the list.
Sort By The order in which the selected sample field will be sorted. The drop-down list provides options for Ascending and Descending.

Notes
„ When queue ordering and field/weight ordering are both specified, queue ordering is used to
resolve parallel ordering.
„ Records with values for which no order or weight are specified are retrieved last from the
Appointment queue (records are not retrieved from any of the other queues).

Calling Rules - Call Times

The Call Times settings allow you to specify the project’s valid participant call times and day
parts. Day parts allow you to ensure that records are called at specific times of the day in order to
increase the chance of success in reaching participants.

Note: If you have an existing project that uses pre-version 5.6 sample management scripts, you
can set up day parts but they will not be recognized. The use of day parts requires version 5.6 or
higher sample management scripts.
Figure 1-32
Call Times settings

Weekday: Allows you to specify valid weekday participant call times:


„ Start (hh:mm). The earliest time at which participants may be called on weekdays. Enter
an appropriate start time.
„ End (hh:mm). The latest time at which participants may be called on weekdays. Enter an
appropriate end time.

Weekend: Allows you to specify valid weekend participant call times:


„ Start (hh:mm). The earliest time at which participants may be called on weekends. Enter
an appropriate start time.
„ End (hh:mm). The latest time at which participants may be called on weekends. Enter an
appropriate end time.
By default, a weekend runs from 00:01 on Saturday to midnight on Sunday. The Sample
Management script will handle situations where the project has participants in time zones with
different definitions of weekdays and weekends.
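A minimal sketch of the weekday/weekend window check, assuming the participant's local time is already known (the window values shown are examples, not product defaults):

```python
from datetime import datetime, time

def within_call_times(local_dt,
                      weekday=(time(9, 0), time(21, 0)),
                      weekend=(time(10, 0), time(18, 0))):
    """Return True when the participant's local date/time falls inside
    the valid call window for that kind of day. Saturday and Sunday
    (weekday() >= 5) use the weekend window."""
    start, end = weekend if local_dt.weekday() >= 5 else weekday
    return start <= local_dt.time() <= end
```

A real implementation would also have to honor the per-time-zone weekend definitions mentioned above, which this sketch omits.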

Single day part: When selected, the time parameters defined in the Valid participant call times
section are used.
Maximum tries. The maximum number of times that a record may be called. The default is 3. If
an appointment is made, an additional “Maximum tries” attempts can be made to connect
the appointment.

Using day parts. When selected, you can utilize an existing day parts template that defines valid
participant call times.

Day parts: The table lists all day parts that are currently defined for the project and allows you
to add new day parts.
Setting Description
Day Part The day part name. Enter an appropriate name.
Tries The maximum number of times that a record may be called. Enter an appropriate
value.
Start Time The earliest time at which participants may be called. Enter the earliest call time.
End Time The latest time at which participants may be called. Enter the latest call time.
Value Enter the appropriate days of the week to contact participants.
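The day part lookup described by this table can be sketched as follows; the rows and the helper function are hypothetical examples, not the product's data model:

```python
from datetime import time

# Hypothetical day-part rows mirroring the table's columns
# (Day Part, Tries, Start Time, End Time, days of week).
DAY_PARTS = [
    ("Morning", 2, time(9, 0), time(12, 0), {"Mon", "Tue", "Wed", "Thu", "Fri"}),
    ("Evening", 4, time(17, 0), time(21, 0), {"Mon", "Tue", "Wed", "Thu", "Fri"}),
]

def active_day_part(day_name, t, day_parts=DAY_PARTS):
    """Return the (name, max tries) of the day part covering this local
    day and time, or None when no day part applies."""
    for name, tries, start, end, days in day_parts:
        if day_name in days and start <= t <= end:
            return name, tries
    return None
```

In this sketch a record reached on a Friday evening falls under the "Evening" part and may be tried up to four times.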

Clear Daypart(s). Click to clear any currently defined day part settings in the Day Parts table.

Load Template... Click to load an existing day parts template. Templates are stored in the
Distributed Property Management (DPM). See the “Distributed Property Management (DPM)”
topic in the IBM® SPSS® Data Collection Developer Library for more information.

Save As... Click to save the current settings, listed in the Day Parts table, as a template in DPM. The
Save As... dialog displays, allowing you to specify a template name. Enter an appropriate template
name and click Save. Otherwise click Cancel to return to the Call Times dialog without saving.

Calling Rules - Appointments

The Appointments settings allow you to specify a project expiry date and create appointment
schedules.

An appointment schedule can be used to specify the interviewer shifts and holidays, ensuring that
appointments can be made when interviewers are working. There can be multiple appointment
schedules for a distributed site.

Appointment schedules may differ between projects and are therefore project specific. For
example, interviewers may be scheduled to call business projects during the day and consumer
projects in the evening.
Figure 1-33
Appointments settings

Project Expiry (UTC Time). Provides options for specifying a project expiration time.
„ Date: The project expiration date. You can manually enter a date, in the format mm/dd/yyyy, or
you can click the down arrow to display a calendar and select a date.
„ Time: The project expiration time. This indicates the exact time of day, for the selected date,
that the project will expire. Enter an appropriate time in the 24-hour format hh:mm (for
example 17:00 for 5:00 PM).

Appointment Schedule. Provides options for clearing the existing sub-schedules, loading schedule
templates, and saving schedule templates.
Selected template: Displays the name of the currently loaded appointment schedule template.
When a template is not loaded, “not using template” is displayed.

Sub-schedules: Lists available sub-schedules.

Creating a new sub-schedule

E Enter an appropriate name in the Sub-schedule field.

Removing an existing sub-schedule

E Right-click the cell to the left of the appropriate sub-schedule name and select Remove.

Regular shifts in sub-schedule: The table lists all shifts currently defined for the selected
sub-schedule.

Creating new shifts for a sub-schedule

E Select a sub-schedule from the Sub-schedule list. If no sub-schedules exist, you must either create
a new sub-schedule or load an appointment schedule template.

E Enter the appropriate shift information in the table.

Setting Description
Days Select the appropriate days of the week to contact participants.
Start Time The earliest time at which participants may be called. Enter the earliest call time.
End Time The latest time at which participants may be called. Enter the latest call time.

Date overrides: The table lists all date override shifts currently defined for the selected
sub-schedule. Date overrides allow you to specify specific dates and/or times where appointments
are not allowed, as well as define specific dates and times that will override the shifts defined
in the Regular Shifts table.

Creating new date overrides for a sub-schedule

E Select a sub-schedule from the Sub-schedule list. If no sub-schedules exist, you must either create
a new sub-schedule or load an appointment schedule template.

E Enter the appropriate date override information in the table.

Setting Description
Type The drop-down menu provides the following options:
„ No appointment. Select this option when you want to define specific dates and times where no appointments are allowed.
„ Date override. Select this option when you want to define specific dates and times that override the shifts defined in the Regular Shifts table.
Date The drop-down menu allows you to select specific no appointment or date override dates. Select the appropriate date(s).
Start Time When No appointment is selected, the starting time at which participants may not be called. Enter the starting time. When Date override is selected, the earliest time at which participants may be called. Enter the earliest call time.
End Time When No appointment is selected, the end time at which participants may not be called. Enter the end time. When Date override is selected, the latest time at which participants may be called. Enter the latest call time.

Clear Schedule. Click to clear any currently defined sub-schedules.

Load Template... Click to load an existing appointment schedule template. Templates are stored
in the Distributed Property Management (DPM). See the “Distributed Property Management
(DPM)” topic in the IBM® SPSS® Data Collection Developer Library for more information.

Save As... Click to save the current schedule settings as a template in DPM. The Save As... dialog
displays, allowing you to specify a schedule template name. Enter an appropriate schedule
template name and click Save. Otherwise click Cancel to return to the Appointments dialog
without saving.

Calling Rules - Overrides

The Overrides settings allow you to specify the parameters that control dialing for a subset of
records. All records, except those in the specified subset, continue to follow the base dialing rules.

Parameters that can be overridden are:


„ The maximum number of attempts for a record.
„ The call back delay for numbers that are not answered, busy, answered by an answering
machine, or started on the web.

In addition, you can specify that a subset of records be identified as having a high priority. High
priority records are generally called before other records. This feature could be used, for example,
if you discover that the completion percentage for a particular subset of records (region for
example) is particularly low. You could indicate that the subset has priority until the completion
percentage reaches the average, at which point you could disable the override.
Figure 1-34
Overrides settings

Prioritize. Select this option when you want to specify that records matching the Selection criteria
value are identified as having priority over other records. High priority records are generally
called before other records.

Selection criteria. Identifies the subset of records upon which the override parameters are based.
Enter appropriate criteria.

Time parameters: The table allows you to specify an override value (in minutes) for specific
delay types:
Delay Type Description
No answer delay The callback delay (in minutes) for numbers that are not answered.
Enter an appropriate value in the corresponding Override (minutes)
cell.
Busy delay The callback delay (in minutes) for numbers that are busy. Enter an
appropriate value in the corresponding Override (minutes) cell.
Answering machine delay The callback delay (in minutes) for numbers that are answered by an
answering machine. Enter an appropriate value in the corresponding
Override (minutes) cell.
Web callback delay The callback delay (in minutes) for surveys that are started on the
Web. Enter an appropriate value in the corresponding Override
(minutes) cell.

Maximum tries (for any day parts). Specifies the maximum number of callback attempts for each
participant. Enter an appropriate value.
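Override resolution can be sketched like this; the field names, base values, and selection predicate are illustrative assumptions, not the product's schema:

```python
# Hypothetical base dialing rules (values are examples only).
BASE_RULES = {"max_tries": 3, "busy_delay": 15, "no_answer_delay": 30}

def effective_rules(record, base=BASE_RULES,
                    selection=lambda r: False, overrides=None):
    """Records matching the selection criteria get the override values
    merged over the base rules; all other records keep the base rules."""
    if overrides and selection(record):
        merged = dict(base)
        merged.update(overrides)
        return merged
    return dict(base)
```

For example, a "North" region override raising max_tries and flagging priority would apply only to records whose region field matches.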
Telephone - Dialing settings

The Dialing settings allow you to configure autodialer related parameters.


Figure 1-35
Dialing settings

You can configure settings for:


„ Autodialer
„ Predictive
„ Answering Machine Detection

Dialing - Autodialer

The Autodialer settings allow you to define settings that relate to the use of an autodialer. The
autodialer settings are only valid when IBM SPSS Dialer (Extension) – Power dial for the interviewer
screen or IBM SPSS Dialer (Group) – Dial for the interview in a group (with optional predictive dialing)
is selected as a Dialing option in the Interviewer settings.
Figure 1-36
Autodialer settings

Send caller identification: When selected, the caller’s telephone number is transmitted when
the autodialer makes a call.

Phone number to send (leave blank to use dialer’s settings): Enter a valid phone number. The phone
number must contain only the digits 0 to 9, * and # (optionally preceded by a plus (+) to present
the international access code). In addition, the phone number can contain the visual separators
SPACE, (, ), . and -. Visual separators are not allowed before the first digit.
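One possible reading of these formatting rules, expressed as a validation sketch (the regular expression is an assumption for illustration, not the dialer's actual validator):

```python
import re

# Optional leading '+', then a digit/*/# first; the visual separators
# SPACE ( ) . - are allowed only after the first digit.
CALLER_ID = re.compile(r"\+?[0-9*#][0-9*#() .\-]*")

def is_valid_caller_id(number):
    """Return True when the number matches the stated format rules."""
    return bool(CALLER_ID.fullmatch(number))
```

Note that a number such as "(555) 1234" fails the check because a visual separator appears before the first digit.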

Error if login position is not in configuration: When selected, the Phone Participants activity will not
open on a station with an unrecognized position and an error message will be displayed instead.
When not selected, stations with unrecognized positions can be used to conduct interviews (the
Phone Participants activity will still open), and manual dialing must be used on those stations.

Ring time: Enter the minimum length of time (in seconds) that an unanswered phone call must ring
before the autodialer terminates the call. Make sure that you set a value that allows participants
plenty of time to pick up the phone. In addition, local laws might specify the minimum value that
you must use. The default value is 15 seconds.

Name of silent call announcement file (leave blank to use dialer’s settings): Enter the name of a
wave-sound (.wav) file that contains a message that will be played to the participant when a silent
call occurs (for example, SilentCall.wav). Silent calls can occur when an autodialer generates
more calls than there are interviewers available to handle the calls. The .wav file must be located
in the autodialer’s “audio” folder, the location of which is defined in the autodialer’s dialer.ini file.
You can also specify a .wav file that is located in a sub folder of the audio folder by including a
relative path (for example, Projects\MyProject\SilentCall.wav).

Time to try connecting to interviewer: Enter the number of minutes that interviewers must wait
to be connected to a participant before the autodialer cancels the connection attempt, allowing
interviewers to leave their stations if they need to. The default value is 10 minutes.

Percentage of calls to record: Enter the percentage of calls that the autodialer will record. Both
individual questions and entire calls are recorded, and the recordings are saved as sound files in
the autodialer’s file system. To record calls, enter a whole number from 0 to 100. To record no
calls or all calls, enter 0 or 100 respectively. The default value is 0.

If you enter any other value, that percentage of calls will be selected at random for recording, with
the following exceptions:
„ All subsequent calls to a participant whose previous call was recorded will also be recorded.
This includes calls to keep an appointment or calls to complete an interview that was
interrupted by a disconnection.
„ Calls to participants retrieved by the Specific Contact button are recorded only if that
participant was called previously and that previous call was recorded.
Note that when you change this setting, there might be a delay of up to a minute before the
new value takes effect.
For more information, use the search function in the IBM® SPSS® Data Collection Developer
Library documentation to search for the text “Recording Calls and Interviews” and in the search
results open the topic with that title.
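The selection rules above can be sketched as follows (illustrative only; the actual sampling is internal to the autodialer):

```python
import random

def should_record(percentage, previous_call_recorded, rng=random):
    """Sketch of the recording rules: participants whose previous call
    was recorded are always recorded again; otherwise the call is
    chosen at random according to the configured percentage."""
    if previous_call_recorded:
        return True
    if percentage <= 0:
        return False
    if percentage >= 100:
        return True
    return rng.random() * 100 < percentage
```

This captures the carry-over behavior: once a participant's call has been recorded, appointment callbacks and resumed interviews for that participant are recorded regardless of the percentage.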

Dialing - Predictive

The Predictive settings allow you to configure the predictive autodialing parameters. When using
predictive dialing, the autodialer dials participants before interviewers are available to answer the
connected calls. That is, the software predicts when interviewers will become available. This
mode can deliver the highest interviewer productivity, but may result in silent calls.

The predictive settings are only valid when the Dialer (Group/Predictive) – Show Start Dialing button
on the Interviewer screen option is selected as the Dialing option in the Interviewer settings.
Figure 1-37
Predictive settings

Initial dialing aggressiveness: The initial “aggressiveness” of the autodialing system when it
calculates the number of predictive calls to make. A higher aggressiveness setting can lead to less
wait time for interviewers, but might result in more silent calls. Enter a whole number between
0 and 100. The default value is 0.

To stop the autodialing system from dialing predictively, set Initial dialing aggressiveness to 0. In
this mode, the autodialer dials participants only when interviewers click the Start Dialing button in
the Phone Participants activity, which is unlikely to result in silent calls.

Maximum percentage of silent calls: The maximum percentage of silent calls that are allowed to
occur in the 24-hour period since midnight. If the actual rate of silent calls approaches this value,
the autodialing system reduces the current rate of predictive calls to ensure that the maximum
percentage of silent calls is not exceeded. Enter a decimal value between 0 and 100. The default
value is 0.

Target percentage of silent calls: The target percentage of silent calls that should occur at any time.
The autodialing system attempts to keep the actual rate of silent calls at this value by continually
adjusting the current rate of predictive calls. Enter a decimal value between 0 and 100. The value
must be less than the value of Maximum percentage of silent calls. The default value is 0.
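A toy controller illustrating the relationship between the measured silent-call rate, the target, and the maximum (this is a sketch of the described behavior, not the product's actual pacing algorithm):

```python
def adjust_aggressiveness(current, silent_rate, target, maximum, step=1.0):
    """Back off hard as the silent-call rate reaches the maximum,
    otherwise nudge the predictive dialing rate toward the target
    percentage of silent calls."""
    if silent_rate >= maximum:
        return 0.0                         # stop predictive dialing entirely
    if silent_rate > target:
        return max(0.0, current - step)    # too many silent calls: slow down
    if silent_rate < target:
        return min(100.0, current + step)  # headroom available: speed up
    return current
```

The same structure explains why the target must be below the maximum: the controller oscillates around the target while the maximum acts as a hard cutoff.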
Dialing - Answering Machine Detection

The Answering Machine Detection settings allow you to configure parameters relating to
answering machine detection.

IBM® SPSS® Data Collection Dialer supports simple answering machine detection (AMD),
which can be useful when dialing residential numbers. It is based on the observation that human
greetings are usually short, whereas answering machine messages are long. When an auto-dialed
participant record is dialed and the Dialer detects that an answering machine picked up the call,
the call is automatically ended with the Answering Machine call outcome, provided the IBM®
SPSS® Data Collection Interviewer Server is properly configured to support the feature. Answering
machine detection is currently configured through the DPM Explorer tool.
Figure 1-38
Answering Machine Detection settings

Mode: The answering machine detection mode. Possible values include: Disabled, Filtering, and
Calibration.

Parameters: The table lists the current answering machine detection parameters. All of the
optional parameters (off, max, on, db, and so on) are encapsulated in this property. For example,
max:4.5,on:11 (using commas as the delimiter).
Announcement file: The name of the sound file to play for an answering machine, including the
pathname (relative to AudioDir on the Dialer). You are not required to provide a value for this
setting.

False Positives

False positives are calls in which a human was misinterpreted as an answering machine; in these
cases the dialer either hangs up the call or plays the amfile. The main causes of false positives are:
„ The human speaks a long greeting (> max). This typically happens when reaching a business
number. If an amfile is specified, the respondent will hear it.
„ The human has loud background noise (permanently above the db threshold). This typically
occurs with car phones. The playback of amfile starts when the AMD algorithm has waited
for end-of-greeting for off seconds. However, this timer needs to be longer than the longest
AM greeting (~30 seconds), so it is unlikely that the respondent will wait long enough
to hear amfile.

Humans can also be misinterpreted as “No audio”. This occurs when:


„ There is no audio connection (for example, because the battery in the respondent’s phone is
running low). After on seconds without start-of-voice, the dialer hangs up (call outcome:
No audio).
„ The human picks up the phone but does not start speaking immediately (for example, the
respondent is busy, or due to a disability). After on seconds, the dialer hangs up.
„ The greeting was spoken very softly (below the db threshold). Usually the respondent speaks
up louder, but if the voice is not raised the dialer hangs up after on seconds. The dialer plays
the amfile to “No audio” numbers unless the setting is disabled by option cp:-S (hangup
immediately if silence).

Qualification Timers

Spoken words contain short gaps in the audio due to articulation. The dialer ignores gaps that are
shorter than a qualification timer qoff; that is, end-of-voice is asserted when qoff expires.

The dialer utilizes two qualification timers:


„ qoff, for humans, determines when a call is connected to an interviewer. The setting should
be short (<0.8 seconds); otherwise the respondent might become impatient and hang up.
However, a qoff that is too short causes many false negatives because the timer fails to bridge
well-articulated greetings.
„ qam, for answering machines, determines when amfile is played. It should be long (>2
seconds), otherwise the playback might start before the answering machine is ready to
record it.

Calibration Mode

Calibration mode is a means to determine the optimum AMD parameters. In calibration mode,
all calls are connected to interviewers (just as when AMD is disabled). The AMD measures
the duration of the respondent’s initial greeting. During this phase, the interviewer can hear
the respondent, but cannot talk. When the AMD measurement is finished (end-of-voice), the
interviewer hears a short beep (WBeep) and audio is connected in both directions.

In cases of long initial silence (longer than coff seconds) or persistent voice (longer than con
seconds), the AMD measurement is aborted and audio is connected in both directions. This makes
it possible to try out different values of db and qoff without losing the contacts.

The greet time measured by AMD is sent to the application and recorded in call.log. To determine
the reliability of a given max setting for the AMD call dispositions, make a frequency distribution
of the call outcomes according to the following table:
AMD-reported greet value by interviewer’s disposition:
Interviewer’s Disposition | greet = 0 | greet ≤ max | greet > max
Human (proceed, appointment, refusal, and so on) | False positive? [1] | Correct (human) | False positive
Answering machine | Semi-correct (AM) [2] | False negative | Correct (AM)

[1] greet=0 can be an artifact from using a too small coff value, such that the AMD measurement
was abandoned before the greeting started.

[2] greet=0 starts playback of amfile, but possibly too early, before the answering machine is
ready to record it.
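The classification in this table can be expressed as a small sketch (the function name and string labels are illustrative):

```python
def classify_amd(disposition, greet, max_greet):
    """Classify a calibration-mode outcome per the table above.
    disposition is 'human' or 'am'; greet is the measured greeting
    length in seconds; max_greet is the configured max parameter."""
    if disposition == "human":
        if greet == 0:
            return "false positive?"   # possibly a coff artifact [1]
        return "correct (human)" if greet <= max_greet else "false positive"
    if greet == 0:
        return "semi-correct (AM)"     # amfile may start too early [2]
    return "false negative" if greet <= max_greet else "correct (AM)"
```

Running every logged call through such a function gives the frequency distribution the text recommends for choosing a reliable max value.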

Summary of AMD Modes and Parameters

Syntax: mode , param1 : value , param2 : value2...

Example: amd = 1,qoff:0.40,log:1


Mode Meaning Description
0 Disabled No AMD analysis (but the Call Progress parameter cp is used).
1 Filtering Only “live” calls are through-connected to an interviewer; otherwise the
amfile is played.
2 Calibration Calls are connected in listen-only mode during AMD analysis; the result
is reported in greet.

Name Default Unit Description Action if exceeded Applies to
off 6.0 seconds Max silence at start of call Hangup QSAMP_NO_AUDIO Filtering
coff 4.5 seconds Max silence at start of call Connect two-way audio Calibration
max 1.15 seconds Max greeting from human [1] Hangup QSAMP_ANSMC Filtering
on 60.0 seconds Max greeting from AM Play amfile to AM Filtering
con 3.0 seconds Max greeting measured Connect two-way audio Calibration
qoff 0.35 seconds Qualification timer for end of voice [2] Connect QSAMP_CONNECTED Both
qam 2.9 seconds Qualification timer for starting playback Play amfile Filtering
db -34.0 dBm Silence threshold (range -46 to -34 dBm) [3] Start max timer Both
log 0 0 or 1 If =1, report greet and silence times (milliseconds) in call.log [4] Both
cp Override dialer.ini call progress analysis completion criteria CpComplete [5] All

[1] max: recommended range from 0.9 seconds (aggressive, many false positives) to 2.1 seconds
(conservative, many false negatives).
[2] qoff: recommended range from 0.3 seconds (many false negatives) to 1.1 seconds (detects most
AM, but connects humans very slowly to an interviewer).
[3] db should be high (insensitive) to avoid detecting people in noisy environments as answering
machines (false positives).
[4] log times are measured on the dialer PC and can differ slightly from the greet times measured
by the DSP.
[5] cp should normally not be applied for AMD; the only relevant parameter is cp:-S (if persistent
silence is detected, hang up the call without playing amfile).
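A small parser sketch for the syntax shown above ("mode, param1:value, param2:value2"); this is illustrative only, since the product parses these settings internally:

```python
def parse_amd(setting):
    """Parse an AMD setting such as 'amd = 1,qoff:0.40,log:1' into
    (mode, parameter map). Values are kept as strings because some
    parameters (e.g. cp) are not numeric."""
    value = setting.split("=", 1)[1] if "=" in setting else setting
    parts = [p.strip() for p in value.split(",")]
    mode = int(parts[0])
    params = {}
    for part in parts[1:]:
        name, _, val = part.partition(":")
        params[name] = val
    return mode, params
```

Applied to the example above, "amd = 1,qoff:0.40,log:1" yields filtering mode with qoff and log overrides.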

AMD Parameter for Call Progress Analysis

The cp parameter specifies the actions to take when detecting call progress tones, overriding the
default call progress analysis completion criteria CpComplete in the dialer.ini file. Application
areas:
„ Calling countries with non-standard ringback or “number unobtainable” tones.
„ Calling networks that announce tariff information at the start of the call.
„ Calling subscribers with personalized ringback tones (music or announcements).
„ Achieving special effects, such as recording in-band announcements.

The table below lists the different call progress events. Each event is identified by a letter;
pre-CONNECT events in lower case, post-CONNECT in upper case. Each letter or group of
letters is preceded by an action identifier (see the legend below the table). The post-CONNECT
events are only detected in AMD filtering or calibration mode (amd=1 or amd=2).

Syntax: cp:actionID event... [ actionID event... ] ...

Default: cp:-b-c-d-f!r:t:v-B-C+D-F-T
185

Base Professional

Example: cp:=vea Ignore pre-CONNECT voice events (for instance, personalized ringback tones).

Example: cp:*t*v Record recfile when detecting tritone or start-of-voice (until stopped by noansw
timeout).

Example: cp:+v Connect to extension when detecting start-of-voice.
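The pairing of action identifiers and event letters in these strings can be unpacked mechanically. The following Python sketch is illustrative only (parse_cp is a hypothetical helper, not part of the product); it splits a cp override string into (action, event) pairs, with each action identifier applying to every event letter that follows it until the next identifier:

```python
# Hypothetical helper for illustration: split a cp override string such
# as the default "cp:-b-c-d-f!r:t:v-B-C+D-F-T" into (action, event)
# pairs. Action identifiers and event letters are those listed in the
# tables in this section.
ACTIONS = set("-:!^+*=")

def parse_cp(cp):
    if not cp.startswith("cp:"):
        raise ValueError("expected a string of the form cp:...")
    pairs = []
    action = None
    for ch in cp[3:]:
        if ch in ACTIONS:
            action = ch  # applies to the event letters that follow it
        elif ch.isalpha():
            pairs.append((action, ch))
        else:
            raise ValueError("unexpected character: %r" % ch)
    return pairs

# One action can cover several events, as in the first example above:
print(parse_cp("cp:=vea"))  # [('=', 'v'), ('=', 'e'), ('=', 'a')]
```

Note how the default string gives every event its own action, while cp:=vea lets a single = cover three events.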


Default Call Outcome if Action is '-' or ':'

Call Progress Event                               pre-connect  post-connect  Dialer            Interviewer Phone  IBM® SPSS® Quancept™
d  Dial tone (steady tone for >3 s)               -d           +D            QSAMP_BADNUMBER   QSAMP_BUSY         d_badsyn
r  Ringback pulse (duration 1.5 - 3 s)            !r           =R            QSAMP_RINGING     n/a [6]            d_error [7]
q  End of ringback (no pulse for >8 s)            =q           =Q            QSAMP_NOANSW      coNoAnswer         d_na
b  Busy tone (cadence or "precise" tone [8])      -b           -B            QSAMP_BUSY        coBusy             d_busy
c  Congested / reorder tone (cadence)             -c           -C            QSAMP_FASTBUSY    coFastBusy         d_sitout
t  Tritone (Special Information Tone)             :t           -T            QSAMP_TRITONE     coTriTone          d_sitout
f  Fax/modem tone                                 -f           -F            QSAMP_MODEM       coFaxModem         d_modem
v  Start of voice (diffuse energy above db dBm)   :v           =V            QSAMP_ANNOUNCEMT  coAnnouncement     d_sitout
e  End of voice (for qoff seconds)                =e           =E            QSAMP_ANNOUNCEMT  coAnnouncement     d_sitout
a  Voice for more than max seconds                =a           =A            QSAMP_ANNOUNCEMT  coAnnouncement     d_sitout
s  Silence; no voice for off seconds [9]          n/a          =S            QSAMP_ANNOUNCEMT  coNoAudio          d_error [7]
w  Call waiting, ignore voice events [10]         =w           =W            QSAMP_BUSY        coBusy             d_busy

Action identifiers:
-  Hangup immediately.
!  Start noansw timer.
:  Hangup if no CONNECT is received within two seconds.
^  Stop CP analysis (ends AMD, connecting the extension).
+  Connect to extension even if no CONNECT was received.
=  No action (can be used to override default actions).
*  Start recording (for use with recfile option beg:3).

[6] Call outcome QSAMP_RINGING does not map to a supported CallOutcome in Interviewer
Phone 5.6.
[7] Call outcomes can be mapped to other tipcode values in section [qsamp map] in the qts-sms.ini
file.
[8] “Precise” tone meaning the 480 + 620 Hz busy tone of the North American Precise Audible
Tones Plan.
186

Chapter 1

[9] Condition S (silence) is only generated in AMD filtering or calibration mode (amd=1 or amd=2).
[10] Condition W (waiting) is generated by signaling events with the action announcemt in
causes.cfg.

Activate Current Project - Quota settings

The Quota settings provide options for specifying the project’s Quota Control parameters.

Note: The Quota node displays under the Activate tree when the Use quota control option is selected
from the Project settings dialog.
Figure 1-39
Quota settings

Create new quota database: Select this option if you are activating the project for the first time, or
if you are reactivating but this is the first time that you have had quota information available. The
exception is when your new project shares quotas with another project. In this case, if the shared
quota database already exists, select the quota database from the list instead.

When a project uses quota control and you activate it for the first time, the activation process
creates a new quota database for the project using the information in the project’s quota definition
(.mqd) file.

The quota database is a set of tables whose names start with QUOTA and which the activation
process creates inside the project database. They contain definitions of the quota groups and their
targets and, once interviewing starts, counts of completed, pending, and rolled back interviews for
each group. The quota definition (.mqd) file is the file that the IBM® SPSS® Data Collection
Quota Setup program creates when you save the quota definitions and targets. The activation
process uses the file to determine the structure and content of the quota database it is to create.
The .mqd file is not used during interviewing.
Use existing quota database: If the project has been previously activated with quota, or it shares an
existing quota database with another project, select this option and then select the appropriate
quota database from the drop-down list.
Do not update the quota definitions on the server: When selected, quota definitions on the server
will not be updated.
Publish new quota definitions, but do not update existing quotas: When selected, any new definitions
are updated to the server, but existing definitions on the server remain unchanged.
Update quota with changes made in the project’s mqd file: Once a quota database exists, this check
box is always available for selection but is unchecked by default. If you have made changes to the
.mqd file, select this check box to apply those changes to the quota database.

Note: If you have changed quotas using the Quotas activity in IBM® SPSS® Data Collection
Interviewer Server Administration these changes will have been written to the quota database but
will not appear in the project’s .mqd file. If you choose to activate using the .mqd file, the changes
you made with the Quotas activity will be lost. If you want to keep these changes, you will need to
make them in the .mqd file using Quota Setup before reactivating.

IBM SPSS Data Collection Quota Setup

To set up a quota control file for the project, use the Quota Setup program. This is available
from the Start menu by choosing:
All Programs > IBM Corp. > IBM® SPSS® Data Collection 6.0.1 > Accessories > Quota

Follow the instructions in the Quota Setup program’s online help (see the Defining Table Quotas
and Defining Expression Quotas topics). However, when you save a quota definition (.mqd) file
you must save it in a subfolder beneath the questionnaire (.mdd) file (not in the same folder
as the questionnaire file).
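The subfolder requirement can be checked programmatically. The sketch below is illustrative only (mqd_location_ok is a hypothetical helper, not part of Quota Setup); it returns True only when the .mqd file sits in a subfolder beneath the folder that holds the .mdd file:

```python
# Hypothetical helper: verify that an .mqd file is saved in a subfolder
# beneath the questionnaire (.mdd) file's folder, as required above.
import os

def mqd_location_ok(mdd_path, mqd_path):
    mdd_dir = os.path.dirname(os.path.abspath(mdd_path))
    mqd_dir = os.path.dirname(os.path.abspath(mqd_path))
    # The .mqd folder must be strictly below the .mdd folder,
    # not the same folder and not a sibling.
    return mqd_dir != mdd_dir and mqd_dir.startswith(mdd_dir + os.sep)
```

For example, saving survey1\quota\survey1.mqd next to survey1\survey1.mdd satisfies the rule; saving both files in survey1\ does not.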

Activate Current Project - Advanced settings

The Advanced settings allow you to configure parameters that provide a more detailed level of
control over the activation process.

Figure 1-40
Dialing settings

You can configure settings for:


„ Page Templates
„ Files
„ Tasks
„ Others

Advanced - Page Templates

The Page Templates settings provide options for defining custom templates in place of some or
all of the default templates.

Figure 1-41
Page Templates settings

Use Custom Interview Templates. Select this check box if you want to use your own templates
for any of the listed events.

Custom templates: The table lists the default templates and allows you to define a custom template
or URL for each default template.
Setting Description
Name The default template name. Refer to the ‘Default templates’ section below for
more information.
Type Indicates whether the custom template information is generated from a template
file, or a URL. The drop-down list provides the options None, Template, and URL.
„ None - when selected, the default template is used.
„ Template - when selected, a custom template file is used, and the template
name must be specified in the associated Filename/URL cell.
„ URL - when selected, a custom template URL is used, and the template URL
must be specified in the associated Filename/URL cell.
Filename / URL Identifies the custom template filename or URL. Custom template files must be
present in the project’s source directory, in [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Interviewer Server\Projects, or in a shared location on your
web server. In all cases, you specify the file using a simple filename
(not a path name). When the project is activated, templates that exist in
the project’s source directory will be copied to the project’s directory in
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\Interviewer Server\Projects.
When interviews take place, the interviewing program will look for the templates
first in the project-specific directory and then in the main Projects directory.
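That lookup order can be sketched in a few lines. The helper below is a hypothetical illustration, not product code; it simply expresses “project-specific directory first, shared Projects directory second”:

```python
# Hypothetical sketch of the documented template lookup order: the
# project-specific directory is searched first, then the shared
# Projects directory.
import os

def find_template(projects_root, project, filename):
    for folder in (os.path.join(projects_root, project), projects_root):
        candidate = os.path.join(folder, filename)
        if os.path.isfile(candidate):
            return candidate
    return None  # template not found in either location
```

This is why a template copied into the project’s own directory overrides a same-named template in the main Projects directory.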

Default templates

Note: You can specify all pages except Authenticate and Authenticate Retry as templates or URLs.
For these two pages we strongly recommend using only templates, as URLs can result in an
interview having two connection IDs.
Template Name Description
Interview stopped The page to display when the participant stops an interview or the interview is
stopped by a statement in the interview script. There is no default template but
IBM® SPSS® Data Collection Interviewer Server displays ‘End of interview.
Thank you for your participation.’
Interview completed The page to display at the end of the interview (that is, when the participant
has answered all relevant questions in the questionnaire). There is no default
page, but Interviewer Server displays ‘End of interview. Thank you for your
participation’.
Note that if the interview ends with a display statement, this text is displayed
as the last page of the interview instead.
Interview rejected The page to display when a participant fails authentication and no retry prompt
is required, for example, when the participant fails quota control. The default
is a template named rejected.htm that displays the message ‘Thank you for
your interest in participating in this survey.’
Authenticate The page to display when an inbound project uses Sample Management, and
you need to verify that the person taking the interview is a member of the
participant group. The default is a template named authenticate.htm that
displays the message ‘Please enter your authentication information’.
Authenticate failed The page to display when authentication fails. The default is a template named
authfailed.htm that displays the message ‘Your authentication information
is incorrect’.
Authenticate retry The page to display when authentication of a prospective participant against
the participant database fails, and you want the participant to reenter the
authentication details. The default page is a template named authretry.htm that
displays the message ‘The authentication information you have entered is
incorrect. Please try again’.
Project inactive The page to display when the participant attempts to start an interview for an
inactive project. The default is a template named projinactive.htm that displays
the message ‘Please come back later’.
Quota full The page to display when the participant belongs in a quota cell whose target
has already been met. There is no default page.

Click Reset if you want to return to the default settings.

Disabling button double-clicks in the default templates

Survey respondents are often presented with the option of logging in to complete surveys. After a
respondent enters a user ID and clicks OK, it can sometimes take a few seconds for the server
to authenticate the user credentials. If a respondent clicks OK a second time, the server may
generate a message similar to the following:

Thank you for your interest in participating in this survey.


A survey is active for your ID. Please continue the original survey or return in 10 minutes to restart.

You can avoid this by adding the following script to the header tags for all default templates:

<script language="javascript" defer="true">

// Disable the Next/OK button after the first click so that repeated
// clicks do not reach the server while authentication is in progress.
function addEvent(target, eventName, handlerName)
{
    if (target.addEventListener) // W3C browsers
    {
        target.addEventListener(eventName, handlerName, false);
        return true;
    }
    else if (target.attachEvent) // Internet Explorer
    {
        return target.attachEvent("on" + eventName, handlerName);
    }
    else // fallback: assign the handler directly
    {
        target["on" + eventName] = handlerName;
    }
}

var ctrlNext = document.getElementsByName("_NNext")[0];

function NextClicked()
{
    if (ctrlNext != null)
    {
        // Disable on a short timer so the click itself is still submitted.
        setTimeout('ctrlNext.disabled = true;', 1);
    }
}

// Attach the handler only if the Next button exists on this page.
if (ctrlNext != null)
{
    addEvent(ctrlNext, 'click', NextClicked);
}

</script>

The script prevents additional clicks from registering with the server.

Advanced - Files

The Files settings provide options for excluding specific folders when activating a project.
Excluding specific folders limits the number of files that are uploaded during the
activation process.

Figure 1-42
Files settings

Source files. This non-modifiable field displays the source files location specified in the Project
Settings.
Include sub-folders. When selected, the sub-folders selected in the Sub-folders to include section are
copied to the Shared and Master project folders along with the main project files.
Sub-folders to include: Displays all sub-folders in the current Source files location. Select which
sub-folders will be activated with the project.

Note: This option is only available when the Include sub-folders option is selected.

Data Collection files to include. Displays all file types that can be copied when a IBM® SPSS®
Data Collection project is activated. File types not listed will not be activated with the project.
Select which file types will be activated with the project.

Note: You can add files to a project through activation, but you cannot remove files. Deselecting
sub-folders, or Data Collection files to include, will not remove any folders/files from the server.

Advanced - Tasks

The Tasks settings allow you to specify which tasks the activation process will perform. Once
a project has been activated for the first time, this is an ideal way of reducing the time required
for subsequent activations when only certain parts of the activation process need to be run. For
example, if the project does not use Sample Management or Quota Control you can skip these
tasks.

Figure 1-43
Tasks settings

Activate tasks: Specify which tasks the activation process should perform. Tasks are displayed in
a hierarchical structure that can be expanded by clicking the + symbols at the start of each line.
Select the check boxes of the tasks you want the activation process to perform.

„ Activate project. Run the complete activation process. This is the default and only option
the first time you activate a project (even if that project does not use Sample Management
or Quota Control).
„ Update project. Run all aspects of the Update Project part of the activation process.
„ Update project database. Update the information held for this project in the project database.
„ Update project files. Update the project files (project_name.xxx) for this project.
„ Update FMRoot. Copy files from FMRoot\Shared into FMRoot\Master.
„ Update interview server files. Copy files from FMRoot\Master to the Projects folder on all
IBM® SPSS® Data Collection Interviewer Servers.
„ Update master MDM document. Update the master base language for this project to be the one
specified in Questionnaire Base Language on the Project Settings tab.
„ Update project in DPM. Update the basic project information held for this project in DPM. If
you are activating the project just to change its status, selecting this task and no others makes
the change in the shortest possible time.
„ Update sample management. Run all aspects of the Sample Management part of the activation
process.
„ Update sample management in DPM. Update the Sample Management information held for
this project in DPM.
„ Update sample management database. Update the CATI Sample Management information held
for this project in DPM.
„ Update quota. Run all aspects of the Quota Control part of the activation process.
„ Update quota database. Update the project’s quota database with information about the
project’s Quota Control requirements.
„ Update quota in DPM. Update the Quota Control information held for this project in DPM.
„ Update phone interviewing. Run all aspects of the ‘Update Telephone’ part of the activation
process.
„ Update e-mail jobs. Update the project’s e-mail setup information.

Advanced - Others

The Others settings provide options for defining a destination cluster, reporting settings, and
dialog settings.

Figure 1-44
Others settings

Destination cluster: The cluster on which to activate the project. If you are activating a project that
has already been activated — for example, if you have changed and recompiled the questionnaire
— you must activate it on the same cluster that you previously used.

Use a reporting database: When selected, you can identify a reporting database.
„ Reporting database connection: When Use a reporting database option is selected, this field
displays the connection string to a reporting database. You can manually enter a connection
string or click Edit... to launch the Data Link Properties dialog and construct a connection
string.

Suppress interview template warnings: Select this option if you do not want the activation process
to display warnings if the project’s template files do not contain well-formed HTML.

IBM SPSS Data Collection Activation Console


The IBM® SPSS® Data Collection Activation Console allows you to monitor questionnaire
activation status. The console provides options for viewing pending and completed activations,
and creating activation history filters. The console is composed of the following tabs:
„ Activation History tab – allows you to view the status of both pending and completed
activations.
„ Filters tab – provides options for filtering questionnaire activation history.
„ Settings tab – provides options for configuring the Activation Console.

When the Activation Console is launched, an icon displays in the Windows taskbar. Whenever a
questionnaire is submitted for activation, or completes activation, the icon provides relevant status
notification messages. The notification messages include the survey URL link for each completed
activation. For more information, see the topic Settings tab on p. 198.

Activation History tab


The Activation History tab allows you to view the status of both pending and completed
activations.

Note: In general, you are limited to viewing only your activations.

Pending Activations

The following information is provided:


„ ProjectIcon – Provides a visual cue for the activation status.

Icon Description
Indicates that project files are currently uploading to the server.

Indicates that the project is pending activation.

Indicates that project activation is currently in progress.

„ Project – The project name as it appears in IBM® SPSS® Data Collection Interviewer Server
Administration.
„ Status – The project activation status (pending, processing, success, or failed).
„ Server – The name of the server to which the questionnaire is being activated.
„ User – The name of the user who initiated the activation.
„ Submitted – The time at which the questionnaire was submitted for activation. This is the time
as reported by the IBM® SPSS® Data Collection Interviewer Server.

Select all. Click to select all projects in the Pending Activations list.

Cancel selected. Click to cancel activation for all selected projects.

Completed activations

The following information is provided:


„ ProjectIcon – Provides a visual cue for the activation status.

Icon Description
Indicates that project activation was successful. You can select the appropriate table row
and click View Message... to view all activation messages.
Indicates that project activation failed. You can select the appropriate table row and click
View Message... to view related activation error information.

„ Project – The project name as it appears in Interviewer Server Administration.


„ Status – The project activation status (pending, processing, success, or failed).
„ Server – The name of the server to which the questionnaire is being activated.
„ User – The name of the user who initiated the activation.
„ ProcessingServer – The server that performs the activation. In a cluster environment, the server
to which an activation is submitted is not necessarily the server that performs the activation.
„ StartTime – The time at which the questionnaire was submitted for activation. This is the
time as reported by the Interviewer Server.
„ EndTime – The time at which the questionnaire completed activation. This is the time as
reported by the Interviewer Server.
„ Link – The URL for the activated, live questionnaire.
„ Test Link – The URL for the activated, test questionnaire.
„ ProjectId – The activated questionnaire project’s unique ID. The ID is generated by the
Interviewer Server.

Refresh. Click to refresh the activation status.

View Messages. Select a completed activation and click to view any messages generated during
activation.

Removing pending activations

E Select the appropriate project(s) from the Pending Activation section. Alternatively, you can select
all projects by clicking Select all.

E Click Cancel selected or right-click and select Cancel.

Note: You cannot remove activations initiated by other users. You can only remove your own
activations.

Filters tab
The Filters tab provides options for defining how activations are displayed on the Activation
History tab.

Activation type. Displays the status for the current activation. When applicable, the drop-down list
allows you to select which activation types display on the Activation History tab. Options include:
„ All
„ Activate – the activation history for activations submitted via the IBM® SPSS® Data
Collection Activation Console.
„ Launch – the activation history for activations submitted via the IBM® SPSS® Data Collection
Interviewer Server’s Launch activity.
„ Promote – the activation history for activations submitted via the Interviewer Server’s
Promote Project activity.

Activation history. The drop-down list allows you to select which activations display in the
Activation History tab. Options include:
„ All
„ Successful activation
„ Failed activation

Activation status. The check boxes allow you to select which activation statuses to display in the
Activation History tab. Options include:
„ Active
„ Inactive
„ Test

Project. Allows you to define specific projects to display in the Activation History tab.

E Click Find to locate questionnaire projects on the Interviewer Server.

E Use the Add>> and <<Remove buttons to select which projects will display in the Activation
History tab.

User. Allows you to define projects, activated by specific users, to display in the Activation
History tab.
E Click Find to locate users on the Interviewer Server.

E Use the Add>> and <<Remove buttons to select which users’ projects will display in the Activation
History tab.

Activation date between. Allows you to select an activation date range. Only activations that
occurred between the specified dates will display in the Activation History tab.
E Click Apply to save your settings.

Settings tab
The Settings tab provides IBM® SPSS® Data Collection Activation Console configuration settings.

Activation status run option. The drop-down menu provides options for determining how the
activation console will handle submitted activations:
„ Start Activation Status when activation queued. The Activation Console begins immediately
after activations are added to the queue. This is the default setting.
„ Start Activation Console when my computer starts. The Activation Console automatically
begins when the computer is started.
„ Start Activation Console manually. The Activation Console is manually started.

Default date range. Controls the date range that displays for the Activation date between fields on
the Filters tab.

Activation message range.



Show activation notification. When selected, the Activation Console taskbar icon provides
activation notification messages.

Activation auto refresh on/off. When selected, the Activation History tab automatically refreshes
based on the Activation auto refresh interval setting.

Play audible notification. When selected, the Activation Console taskbar icon provides audible
notifications whenever activation messages are generated. The Play this sound file field allows you
to specify a sound file. Click the Browse (...) button to select a sound file.

E Click Save changes to save your settings.

Local Deployment Wizard overview


The Local Deployment Wizard allows you to deploy a survey to one or more IBM® SPSS®
Data Collection Interviewer installations without requiring an IBM® SPSS® Data Collection
Interviewer Server. The wizard provides a simpler alternative to the Activate dialog that is
commonly used to deploy surveys to Interviewer.

The wizard contains the following steps:


„ Usage options – allows you to select how the project will be used.
„ Validation options – provides data entry validation options.
„ Routing options - data entry – provides data entry routing options.
„ Routing options - live interviewing – provides live interviewing routing options.
„ Display options – allows you to select which fields are visible in the case data, and select
which field is used to uniquely identify each case.
„ Deployment options – provides options for deploying the survey to a deployment package or
directly to the local Interviewer installation.
„ Expiry date and time options – provides options for defining the project expiration data and
time.
„ Summary options – provides a summary of all selected options prior to starting project
deployment.

Note: If a project was previously activated, the wizard provides the previous activation options. If
a survey was not previously activated, the wizard provides default values.

Usage options

The usage options step allows you to select how the project will be used. Options include:
„ Data entry (default setting) – select this option when the project will be used for entering
response data from paper surveys.

„ Live interviewing – select this option when the project will be used to conduct face-to-face
interviewing.
„ Include subdirectories – select this option if you have subdirectories that include additional
files, such as templates and images.

E After selecting the appropriate usage option, click Next to continue to Validation options (when
Data entry is selected) or Routing options - live interviewing (when Live interviewing is selected).

Validation options

The validation options step allows you to select the data entry validation method. This step is only
available when you select Data entry in the Usage options step.

Options include:
„ Full validation – when selected, all responses require validation.
„ Partial validation – when selected, only a subset of responses require validation. Partial
validation is not available for surveys that contain only one routing.
„ Require two-user validation – when selected, operators are not allowed to validate their own
entries. A second operator is required to validate initial entries.

E After selecting the appropriate validation options, click Next to continue to Routing options -
data entry.

Routing options - data entry

The data entry routing options step allows you to specify the routing used for data entry. This step is
only available when you select Data entry in the Usage options step.

E Select the appropriate routing context for each data entry option:
„ Initial data entry – the drop-down menu provides all available routings.
„ Full validation – the drop-down menu provides all available routings. This option is only
available when you select Full validation in the Validation options step.
„ Partial validation – the drop-down menu provides all available routings. This option is only
available when you select Partial validation in the Validation options step.
Note: Partial validation is not available for surveys that contain only one routing.

Notes
„ You will receive an error when the same routing is selected for Partial validation and Initial
data entry or Full validation.
„ The Initial data entry and Full validation (if applicable) routing options are automatically selected
when the survey contains only one routing context.

E After selecting the appropriate routing options, click Next to continue to Display options.

Routing options - live interviewing

The live interviewing routing options step allows you to specify the routing used for live interviewing.
This step is only available when you select Live interviewing in the Usage options step.

E Select the appropriate routing options for each project task:


„ Routing – the drop-down menu provides all available routings.
„ Renderer – the drop-down menu provides all available renderers. The selected renderer
controls which display renderer is used for live interviewing. The default value is Web.

Notes
„ The Routing option is automatically selected when the survey has only one routing context.

E After selecting the appropriate routing options, click Next to continue to Display options.

Display options

The display options step allows you to select which fields are visible in the case data, and select
which field is used to uniquely identify each case.
„ Identify unique surveys with this variable – select an appropriate variable that will be used to
uniquely identify each survey. The drop-down menu provides all user variables that can be
used as unique IDs. Boolean and categorical variables are excluded from this list.
„ Display fields – select the appropriate display fields. Selected fields are included in the IBM®
SPSS® Data Collection Interviewer Case List. The fields are displayed in the order in which
they appear in the Display fields list. Use Move Up and Move Down to reorder the list.

Notes
„ Respondent.ID and DataCollection.Status are selected by default.
„ DataCollection.Status is a required field and cannot be deselected.

E After selecting the appropriate display options, click Next to continue to Deployment options.

Deployment options

The deployment options step allows you to select whether to deploy the survey to a deployment
package or directly to the local IBM® SPSS® Data Collection Interviewer installation.

Options include:
„ Create a deployment package for this project (default setting) – when selected, the project is
saved as a deployment package, allowing it to be loaded into other Interviewer installations.
Enter a save location in the provided field, or click ... to browse for an appropriate save
location. The deployment package is saved to the location you specify.

„ Deploy this project to local Interviewer – when selected, the project is deployed to the local
Interviewer installation. This option requires an Interviewer installation on the local machine.
„ Data file type – allows you to select the deployment package save file format. The drop-down
menu provides the following save file options:
– Data Collection Data File (.ddf)
– Statistics File (.sav)

E After selecting the appropriate deployment options, click Next to continue to Expiry date and
time options.

Expiry date and time options


The expiry date and time step allows you to define the project’s expiration date and time (UTC
time). Defining a project expiration date and time allows interviewers to easily identify expired
projects.

Options include:
„ Date: The project expiration date. You can manually enter a date, in the format mm/dd/yyyy, or
you can click the down arrow to display a calendar and select a date.
„ Time: The project expiration time. This indicates the exact time of day, for the selected date,
that the project will expire. Enter an appropriate time in the 24-hour format hh:mm (for
example 17:00 for 5:00 PM).
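A quick way to see how the two fields combine: the sketch below (expiry_utc is a hypothetical helper, not part of the wizard) parses the documented mm/dd/yyyy and 24-hour hh:mm formats into a single UTC timestamp:

```python
# Hypothetical helper: combine the wizard's Date (mm/dd/yyyy) and
# Time (hh:mm, 24-hour) fields into one UTC timestamp.
from datetime import datetime, timezone

def expiry_utc(date_str, time_str):
    dt = datetime.strptime(date_str + " " + time_str, "%m/%d/%Y %H:%M")
    return dt.replace(tzinfo=timezone.utc)  # wizard times are UTC

print(expiry_utc("12/31/2011", "17:00").isoformat())
# -> 2011-12-31T17:00:00+00:00
```

A project whose clock (in UTC) is past this timestamp would be treated as expired.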

E After selecting the appropriate deployment options, click Next to continue to Summary options.

Summary options
The Summary Options step provides a summary of the options selected in each wizard step.

E After reviewing the selected options, click Finish to exit the Deployment Wizard.
„ If you selected Create a deployment package for this project in the Deployment options step,
the deployment package is saved to the specified location.
„ If you selected Deploy this project to local Interviewer, the project is deployed to the local IBM®
SPSS® Data Collection Interviewer installation.

Note: If you want to change any of the selected options, click Previous until the appropriate wizard
step displays. After changing the appropriate option(s), click Next until the Summary Options step
displays. Review the selected options, and click Finish.

Base Professional

Activation Settings
Using the File Management component

Most users who activate projects using either an IBM® SPSS® Data Collection Interviewer
Server Administration activity such as Launch or a desktop program such as IBM® SPSS®
Data Collection Base Professional have access to the shared FMRoot folder. Users whose
computers are not connected to the network cannot access FMRoot and therefore need to use the
File Management component for activation instead. When you install Base Professional, the
installation procedure asks whether the user has access to FMRoot and configures the user’s
machine accordingly. You can change this manually at any time simply by changing the value of
a registry key.
The registry key is called UseFileManagerWebService and it is located in
HKEY_LOCAL_MACHINE\SOFTWARE\SPSS\COMMON\FileManager. Its default value is
0 meaning that activation will use FMRoot. To use the File Management component instead of
FMRoot, change the value of this key to 1.
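
As an illustration, switching a machine to use the File Management component could be done with a registry (.reg) fragment like the one below. This is a sketch only: the value is assumed to be a DWORD, and you should back up the registry before editing it.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\SPSS\COMMON\FileManager]
"UseFileManagerWebService"=dword:00000001
```

Setting the value back to dword:00000000 restores the default behavior of activating through FMRoot.
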
Users who do not have access to FMRoot and whose files are copied using the File Management
component may notice that activation runs slightly slower than for users with access to FMRoot.

Option to select .sam sample management scripts

The activation procedure does not normally allow users to select sample management scripts
written in VBScript (.sam files). If your company has an overriding requirement to use .sam sample
management scripts with IBM® SPSS® Data Collection projects, you may reinstate the option to
select .sam files by setting the ShowVBScriptProvider key to 1 in the registry. This key is of type
DWORD and is located in HKEY_LOCAL_MACHINE\Software\SPSS\mrInterview\3\Activate.
If the key is not defined or has a value of zero, .sam files cannot be selected.

Specifying which files are copied during local deployment

The IVFilesToBeCopied registry entry controls which files and file extensions are copied during
local deployment. By default, IVFilesToBeCopied includes the following files and extensions that
are automatically copied during local deployment:
„ .mdd
„ .sif
„ .htm
„ .html
„ .xml
„ .mqd
„ .gif
„ .jpg
„ .jpeg
„ .png
„ .mov
„ .bmp
„ .avi
„ catifields_*.mdd
„ .css
„ .js
„ catiCallOutcomes_*.mdd
„ projectinfo.xml

You can define additional files and/or file extensions by updating the
IVFilesToBeCopied user registry entry. The IVFilesToBeCopied registry entry is
located at: HKEY_CURRENT_USER\Software\SPSS\mrInterview\3\Activate.

The IVFilesToBeCopied rules are as follows:

E When the localdeployconfig.xml file is available, the file’s IVFilesToBeCopied value is used.

E When the localdeployconfig.xml file is not available, the IVFilesToBeCopied
value is retrieved from the user registry
(HKEY_CURRENT_USER\Software\SPSS\mrInterview\3\Activate\IVFilesToBeCopied)
and written to the local config.xml file.

E When the IVFilesToBeCopied user registry key is not found,
IVFilesToBeCopied is read from the local machine key
(HKEY_LOCAL_MACHINE\Software\SPSS\mrInterview\3\Activate\IVFilesToBeCopied),
copied to the current user registry key
(HKEY_CURRENT_USER\Software\SPSS\mrInterview\3\Activate\IVFilesToBeCopied), and
then written to the local config.xml file.

Note: Registry key changes will not take effect until you manually remove any existing references
to IVFilesToBeCopied in the local config.xml file. For example:
<?xml version="1.0" encoding="utf-8" ?>
<properties>
<IVFilesToBeCopied> <![CDATA[mdd;*.htm;*.html;*.xml;mqd;*.gif;*.jpg;*.jpeg;*.png;*.mov;*.bmp;*.avi;catifields_*.mdd;*.css;*.js;catiCa
</properties>

The default local activation directory is C:\Documents and Settings\<your Windows user
name>\Application Data\IBM\SPSS\DataCollection\Activate.

Data Management Scripting


This section provides documentation for using IBM® SPSS® Data Collection Base
Professional to perform various data management-related tasks. The following table provides a
summary of the documentation in this section.
What’s new in IBM SPSS Data Collection Base Professional 6.0.1: Notes about what has changed
in this section.
Data Management Overview: A brief introduction to data management scripting.
Getting Started: A step-by-step guide to help you get started with data management scripting.
Understanding the Process Flow: A number of diagrams that illustrate the process flow when a
DataManagementScript (DMS) file is run.
Data Management Script (DMS) File: Includes a detailed reference to the DataManagementScript
(DMS) file, an overview of filtering data, and several examples.
DMS Runner: Explains how to use the DMS Runner command prompt utility to run your DMS files.
Transferring Data Using a DMS File: An introduction to transferring data to and from
various formats.
Working with IBM SPSS Data Collection Interviewer Server Data: Provides information about using
a DMS file to export IBM® SPSS® Data Collection Interviewer Server data to other formats.
Merging Case Data: Explains how to use a DMS file to combine the case data from two or more
data sources into a single data source.
Data Cleaning: An introduction to cleaning data and several examples that illustrate using the
DMS file to clean data.
Working with the Weight Component: Provides detailed information about the different methods of
calculating weighting, and includes the formulae used by the Weight component and several
detailed examples.
Creating New Variables: An introduction to creating different types of variables.
Analyzing a Tracking Study: Describes how you can use data management scripting and other IBM®
SPSS® Data Collection and SPSS technologies to analyze the response data from your
tracking studies.
Table Scripting in a Data Management Script: Covers the aspects of table scripting that are
specific to DMS files and provides some examples of scripting tables in a DMS file.
Data Management Functions: Provides information for Data Management (DMOM) functions.
Data Management Troubleshooting and FAQs: Tips and answers to some common problems and queries.
WinDMSRun: Provides information about WinDMSRun, a sample Windows application that comes with
the IBM® SPSS® Data Collection Developer Library. You can use WinDMSRun to set up and run your
DMS files; it also has a handy feature that enables you to view the input and output data. The
Visual Basic .NET source code of WinDMSRun is provided as a reference for programmers.
Samples: Includes details of the numerous sample DMS and related files that come with the Data
Collection Developer Library.
Data Management Reference: Provides detailed reference information about the Data Management
(DMOM) and Weight component object models.

Data Management Overview

IBM® SPSS® Data Collection Base Professional includes a number of components that facilitate
various data management tasks, including:

Transferring data. You can transfer data from one format or location to another. For example, when
you collect data using IBM® SPSS® Data Collection Interviewer Server, it is stored in a relational
MR database and you can use a DataManagementScript (DMS) file to transfer the data to an SPSS
.sav file so that you can analyze it using SPSS.

Merging data. You can use a DataManagementScript (DMS) file to combine the case data from
two or more data sources into a single data source. For example, if the case data for your survey
has been collected in separate data sources, you may want to merge the data before cleaning
and analyzing it.

Filtering data. You can use filters to restrict the transfer to certain variables or respondents. For
example, you might want to create filters to restrict the transfer to specific variables that record
demographic details and to include respondent data only if the interview was completed in the last
24 hours.

Cleaning data. This involves correcting errors and anomalies in the case data. For example,
checking whether more than one response has been selected for each single response question, and
if so, marking the case as requiring review or taking some action to correct the problem such as
randomly selecting one of the responses and removing the others. When cleaning data, it is usual,
but not necessary, to store the “clean” data separately so that the original data remains intact.
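
For example, the kind of check described above can be sketched in an OnNextCase Event section like this. The question name is illustrative, and Ran(1) keeps one response chosen at random:

```
Event(OnNextCase, "Clean single response questions")
    ' If more than one response was given to this single
    ' response question, keep one response at random
    If age.AnswerCount() > 1 Then
        age = age.Ran(1)
    End If
End Event
```
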

Setting up weighting. This involves creating special weighting variables that can be used to weight
data during analysis; for example, so that it more accurately reflects the target population.

Creating new variables. You can define filter variables and other derived variables, which can then
make analysis easier. For example, if a questionnaire asked respondents to enter their exact age as
a numeric value, you may want to create a derived categorical variable that groups respondents
into standard age groups.
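
A derived variable of that kind can be sketched in a DMS Metadata section as follows. This is an illustrative sketch, not taken from the samples: it assumes a numeric source variable named agenum, and the variable and category names are invented.

```
Metadata(en-us, Question, Label)
    agegroup "Age group" categorical [1..1]
    {
        under35 "Under 35" expression("agenum < 35"),
        from35 "35 or older" expression("agenum >= 35")
    };
End Metadata
```
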

Scripting tables. If you have IBM SPSS Data Collection Survey Reporter Professional, you can
create batch tables using a script.

Data Management Infrastructure

You define your data management tasks using a DataManagementScript (DMS) file. The
DMS file is a text file with a .dms filename extension. It is easy to read and edit a DMS file
that defines simple data management tasks, such as copying data from one location to another,
even if you have little or no programming experience. If you are an advanced user and have
some programming or scripting expertise, the DMS file makes it possible to define complex and
advanced data management tasks.
The DMS file does not use a new language; instead, it acts as a container for a number of
industry-standard languages:
„ Property definitions. The DMS file uses a simple INI file-like syntax to initialize property
settings.
„ SQL syntax. Used in the DMS file for queries and other operations that are implemented by the
OLE DB provider. The SQL syntax that you can use depends upon which OLE DB provider
you are using. For example, when you are using the IBM SPSS Data Collection OLE DB
Provider you can use any SQL syntax that is supported by the IBM® SPSS® Data Collection
Data Model, provided that it is also supported by the DSC through which you are accessing
the data. Refer to the SQL Syntax topic in the IBM® SPSS® Data Collection Developer
Library for more information.
„ mrScriptMetadata. Used in the DMS file to define new variables in the metadata.
mrScriptMetadata is a proprietary SPSS syntax that provides a fast and easy way of creating
metadata using a script.
„ mrScriptBasic. Used in the DMS file for defining procedural code for cleaning data, setting
up weighting, etc. mrScriptBasic is based on Visual Basic and Visual Basic Scripting
Edition (VBScript), and if you know these languages, you will find mrScriptBasic easy to
learn. However, unlike Visual Basic and VBScript, mrScriptBasic provides native support
for market research data types and expression evaluation.
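
To show where each of these languages fits, here is a skeletal DMS file. It is a sketch only: the section names follow the samples described later in this chapter, and the connection strings and file names are abbreviated placeholders.

```
' Property definitions use a simple INI file-like syntax
InputDataSource(myInputDataSource)
    ConnectionString = "Provider=mrOleDB.Provider.2; ..."
    SelectQuery = "SELECT * FROM vdata"    ' SQL syntax
End InputDataSource

OutputDataSource(myOutputDataSource)
    ConnectionString = "Provider=mrOleDB.Provider.2; ..."
    MetaDataOutputName = "output.mdd"
End OutputDataSource

Metadata(en-us, Question, Label)
    ' mrScriptMetadata: definitions of new variables go here
End Metadata

Event(OnNextCase, "Clean the data")
    ' mrScriptBasic: procedural code goes here
End Event
```
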

In addition, IBM® SPSS® Data Collection Base Professional includes the following:
„ Base Professional IDE. Integral to Base Professional is an integrated development environment
(IDE) that enables you to create, edit, run, and debug IBM® SPSS® Data Collection scripts.
For more information, see the topic Using IBM SPSS Data Collection Base Professional
on p. 11.
„ DMS Runner. A command line tool for running a DMS file. For more information, see the topic
DMS Runner on p. 289.
„ Data Management Object Model (DMOM). A set of component objects that are designed
specifically to facilitate data transformations. These objects are generally called from
mrScriptBasic in the DMS file. For more information, see the topic DMOM Scripting
Reference on p. 501.
„ Weight component. A set of component objects for creating rim, factor, and target weighting.
Like the DMOM objects, the weight objects are generally called from mrScriptBasic in the
DMS file. For more information, see the topic Working with the Weight Component on p. 406.
„ Table Object Model (TOM). A set of component objects for creating market research tables.
TOM is available only if you have purchased the Base Professional Tables Option. For more
information, see the topic Table Scripting on p. 1140.
„ Samples. The Data Collection Developer Library comes with numerous sample DMS files
that you can use as templates. For more information, see the topic Using the Sample DMS
Files on p. 466.
Getting Started

Getting started with data management scripting can be challenging because it uses technologies
(like SQL syntax, mrScriptBasic, and mrScriptMetadata) which you may not have used before.
In addition, data management scripting is powerful and can accomplish many different data
management tasks and procedures. This makes it hard to know where to start. However, you don’t
have to master all of the technologies to start using data management scripting and you can
build up your knowledge step by step.

This section is designed to help you get started with data management scripting. It walks you
through some simple tasks and gives you some ideas on how to build up your knowledge. This
section is designed to be used in conjunction with the other documentation and it contains many
links to other topics and sections. The links are there for your convenience. However, you can
ignore them if you find them distracting.

You may often find it helpful to read the next and previous topics in the table of contents, although
you may want to come back and do that later. If you follow a link and decide you want to return
to where you were, just click the Back button on the toolbar.

The topics in this section are:


1. Running Your First Transfer
2. Setting up a Filter
3. Transferring Different Types of Data
4. Using an Update Query
5. Running Your First Cleaning Script
6. Mastering mrScriptBasic
7. Learning about Weighting
8. Mastering mrScriptMetadata
9. Getting To Know the Samples
10. Where Do I Go From Here?

1. Running Your First Transfer

At the heart of data management scripting is the DataManagementScript (DMS) file. This is a
text file with a .dms filename extension that defines a data transformation job. At its simplest, a
DMS file can define a simple transfer from one data format or location to another. The IBM®
SPSS® Data Collection Developer Library comes with several sample DMS files, including
MyFirstTransfer.dms, which is similar to the simple example described in Simple Example of a
DMS File. You can use this file to transfer to an IBM® SPSS® Statistics .sav file the first 100
records in the Museum sample IBM SPSS Data Collection Data File that comes with the Data
Collection Developer Library.

The sample file is called MyFirstTransfer.dms and by default it is installed into the
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS folder.
E Let’s open the file in IBM® SPSS® Data Collection Base Professional now.

Before you run the file, check that it is set up correctly for your computer:
„ The connection string in the InputDataSource section (highlighted in red) defines the name,
location, and type of the data you want to transfer. If the sample files have not been installed
into the default location, you will need to edit the Location and Initial Catalog connection
properties accordingly.
„ The connection string in the OutputDataSource section (highlighted in blue) defines the
name, location, and type of the target data. Check that the location exists on your computer
and if not, edit the Location connection property accordingly. (If a .sav file exists with the
same name in the specified location, move the existing .sav file to a different folder before
you run the transfer.)
„ The Path parameter in the Logging section defines the location for the log file. Check that this
location exists on your computer, and if not edit the parameter accordingly.

E Now let’s run the file. To do this, choose Start Without Debugging from the Debug menu.

E Now click the Output tab (this is typically in the lower part of the Base Professional window), to
bring the Output pane to the front.
Provided that the sample files are present and in the specified locations, and there is not already
a .sav file of the same name and in the same location as that specified in the OutputDataSource
section, this is what you should see in the Output pane:

The message “The transformation completed successfully” means that the transfer has been successful.
If you now go to the target location folder in Windows Explorer, you should see the following new files:
„ MyFirstTransfer.sav. This is an SPSS Statistics .sav file that should contain the first 100 records
from the Museum sample IBM SPSS Data Collection Data File. If you open this file in SPSS
Statistics, you will notice that the variable names are different. This is because the long
names that can be used in the IBM® SPSS® Data Collection Data Model are not valid in
SPSS Statistics and some of the variables (such as multiple response variables) are handled
differently in SPSS Statistics. The DMS file uses the Data Model data source components
(DSCs) to read and write the data for the transfer. SPSS Statistics SAV DSC is used to write
the data to the .sav file. For more information, see the topic Transferring Data to IBM SPSS
Statistics on p. 342.
„ MyFirstTransfer.sav.ini. This file contains a setting that defines the language to be used when
reading the .sav file that has just been created, for example, if the .sav file is used as the input
data source in another data management script. Refer to the Language Handling by the SPSS
Statistics SAV DSC topic in the Data Collection Developer Library for more information.
„ MyFirstTransfer.sav.xml. This file contains additional information for use by SPSS WebApp.
Refer to the SPSS WebApp XML File topic in the Data Collection Developer Library for
more information.
„ MyFirstTransfer.mdd. This is a Metadata Document (.mdd) file for the transfer. If you want to
use the .sav file in IBM® SPSS® Data Collection Survey Tabulation, performance will be
better if you access the .sav file using this file.

In the next topic, 2. Setting up a Filter, you will learn how to set up a filter to restrict the variables
that are included in the transfer.
2. Setting up a Filter

In the simple DMS file we looked at in 1. Running Your First Transfer, the following line is
included in the InputDataSource section:
SelectQuery = "SELECT * FROM vdata WHERE Respondent.Serial < 101"

This is called the select query and it is where you specify the filter for the transfer. A filter is
a way of defining a subset of the data to be transferred. The filter can restrict the variables that
are included in the transfer, or the case data records, or both. In this query, the line WHERE
Respondent.Serial < 101 specifies that case data records should be included in the transfer only
if their serial number is less than 101. Respondent.Serial is a special variable (called a system
variable) that is present in most IBM® SPSS® Data Collection data and which stores the
respondent’s serial number.

Now let’s change the filter to restrict the transfer to a few specific variables and different case
data records. Doing this will change the MyFirstTransfer.dms sample file, so you may want
to make a backup copy of it first.

E If it’s not already open, open MyFirstTransfer.dms in IBM® SPSS® Data Collection Base
Professional and change the line shown above to:
SelectQuery = "SELECT age, gender, education, remember FROM VDATA _
WHERE gender = {female}"

This filter will restrict the transfer to the four named variables (age, gender, education, and
remember) and case data records for female respondents only.

If we use this filter for the transfer, the structure of the data will not match the structure of the
data we transferred before. If we try to export it to the same .sav file, we will get an error. So
before you run the transfer, you need to move or delete the files created by the previous export, or
specify a different name for the output files. We will change the name of the output files. To do
this, change the OutputDataSource section as follows (the changes are highlighted):
OutputDataSource(myOutputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrSavDsc; _
Location=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\MyFirstTransfer2.sav"
MetaDataOutputName = "[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\MyFirstTransfer2.mdd"
End OutputDataSource

E Now save the file with a new name, for example MyFirstTransfer2.dms.

E Now press Ctrl+F5 to run the transfer without debugging, and click the Output tab, so that you can
see the Output pane.

E If you have IBM® SPSS® Statistics, use it to open the .sav file after the transfer has finished.
You will see that this time only the variables in the select query have been transferred. However,
a number of numeric variables have been created for the remember variable, because it is a
multiple response variable. If you have the SPSS Statistics Tables add-on module, you will see
that a multiple response set called $remembe has also been created for this variable. Refer to the
Variable Definitions When Writing to a .sav File topic in the IBM® SPSS® Data Collection
Developer Library for more information.

E If you are new to writing SQL queries, refer to the Basic SQL Queries, Running the Example
Queries in DM Query, and DM Query topics in the Data Collection Developer Library.

Note that you can copy queries in DM Query and paste them straight into your DMS files. Paste
the query after the equal sign like this:

SelectQuery = "<Paste the query here>"

For more information, see the topic Filtering Data in a DMS File on p. 243.

In the next topic, 3. Transferring Different Types of Data, you will learn how to transfer other
types of data.

3. Transferring Different Types of Data

Generally DMS files use the IBM® SPSS® Data Collection Data Model and its data source
components (DSCs), to read and write data in different formats. DMS files can also transfer data
using OLE DB providers that are not part of the Data Model. However, this topic provides an
introduction to transferring different types of data using the Data Model and its DSCs.
„ For an introduction to the Data Model, see the Data Model topic in the IBM® SPSS® Data
Collection Developer Library.
„ For an introduction to DSCs and information about the DSCs that are supplied with the Data
Model, see the Available DSCs topic in the Data Collection Developer Library.

You can use a DMS file to transfer data from any format for which you have a suitable
read-enabled DSC to any format for which you have a suitable write-enabled DSC.

In this topic you will learn how to set up a DMS file to transfer the Museum sample IBM® SPSS®
Quanvert™ database to an IBM SPSS Data Collection Data File (.ddf). This transfer utilizes the
Quanvert DSC to read the Quanvert database and the IBM SPSS Data Collection Data File CDSC
to write the IBM SPSS Data Collection Data File.
Let’s look at the MyFirstTransfer2.dms file in IBM® SPSS® Data Collection Base Professional again.
Note that this topic assumes that you have created the MyFirstTransfer2.dms file as described in
2. Setting up a Filter. If you haven’t already done that, create the file now, so it looks like this:

The InputDataSource section (highlighted in red) is where you define the details that relate to the
data you want to transfer.

The OutputDataSource section (highlighted in blue) is where you define the details that relate to
the target data.

It is the ConnectionString parameter in each of these sections that defines the name and location
of the data and the DSC and other options that you want to use. You can type in the connection
string by hand, but you must be careful to spell all of the connection properties and file and path
names correctly. Refer to the Connection Properties topic in the Data Collection Developer
Library for more information.

However, there is an easier way of setting up the connection string, and that is to use the Base
Professional Connection String Builder. Let’s do that now for the InputDataSource section:

E In the InputDataSource section, delete the value of the connection string (shown below):

ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"

Your connection string should now look like this:

ConnectionString = ""

E Keeping the cursor between the quotation marks, choose Connection String Builder from the Tools
menu.

This opens the Connection Tab in the Data Link Properties dialog box, with the Provider tab
already set up to use the IBM SPSS Data Collection OLE DB Provider.
E From the Metadata Type drop-down list, select Quanvert Database.

E Enter the Metadata Location of the Museum sample Quanvert database. You can either type the
name and location of the Quanvert qvinfo file into the text box or click Browse and select the file
in the Open dialog box. By default the Museum sample Quanvert database is installed into the
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Quanvert\Museum folder.

E From the Case Data Type drop-down list, select Quanvert Database.

E Enter the Case Data Location of the Museum sample Quanvert database exactly as you entered it
in the Metadata Location text box.

E Click OK.

This inserts the connection string at the cursor position. Now your InputDataSource section
should look like this:

InputDataSource("My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrQvDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Quanvert\Museum\qvinfo; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Quanvert\Museum\qvinfo; _
MR Init MDSC=mrQvDsc"
SelectQuery = "SELECT age, gender, education, remember FROM VDATA _
WHERE gender = {female}"
End InputDataSource
To make the connection string easier to read, we have added line continuation characters
so that we can place each property on a separate line, like the original connection string in
MyFirstTransfer.dms. For more information, see the topic Breaking Up Long Lines in the DMS
file on p. 247. However, if you prefer you can leave the connection string on one very long line.

You may notice that some of the connection properties were not present in the original connection
string that you deleted. This is because you do not need to specify connection properties whose
default values are being used.

Now let’s set up the connection string for the OutputDataSource section for the export to the
IBM SPSS Data Collection Data File:

E Delete the connection string in the OutputDataSource section.

E Without moving the cursor, choose Connection String Builder from the Tools menu.

Again, this opens the Connection Tab in the Data Link Properties dialog box, with the Provider
tab already set up to use the IBM SPSS Data Collection OLE DB Provider.

E From the Metadata Type drop-down list, select (none).

E From the Case Data Type drop-down list, select IBM SPSS Data Collection Data File (read-write).

E Enter the Case Data Location for the new IBM SPSS Data Collection Data File. You can either
type the name and location of the file into the text box or click Browse and browse to the location
in the Open dialog box.

E Select the Advanced tab.


E Select Allow dirty data. This specifies that we want the case data to be validated, but we want to
accept data even if it contains some errors and anomalies.

E Click OK.

This inserts the connection string at the cursor position.

E Now edit the MetaDataOutputName parameter in the OutputDataSource section, to give the output
metadata file a different name (such as MyFirstTransfer3.mdd) so that when we run the file, the
new metadata won’t overwrite the metadata that was created for the .sav file in the previous topic.

E Save the file with a new name, such as MyFirstTransfer3.dms.

E Now run MyFirstTransfer3.dms.

You can use similar techniques to set up connection strings for different types of transfer. For
more information, see the topic Transferring Data Using a DMS File on p. 310.
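
For instance, to read back the .sav file created in the earlier topics, the SPSS Statistics SAV DSC can appear on the input side instead. This is a sketch only: the paths are illustrative, and it assumes you point the Initial Catalog at the Metadata Document (.mdd) file that was created alongside the .sav file.

```
InputDataSource(mySavInput)
    ConnectionString = "Provider=mrOleDB.Provider.2; _
        Data Source=mrSavDsc; _
        Location=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\MyFirstTransfer.sav; _
        Initial Catalog=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\MyFirstTransfer.mdd"
    SelectQuery = "SELECT * FROM vdata"
End InputDataSource
```
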

4. Using an Update Query

You can optionally use an update query in the InputDataSource section and the OutputDataSource
section to add, delete, or update case data records. Note that you can use an update query only
if the CDSC that is being used can write case data and supports this type of operation (that is,
the Can Add and Can Update properties are both True for the CDSC—for more information, see
Supported Features of the CDSCs in the IBM® SPSS® Data Collection Developer Library).
In 2. Setting up a Filter, we learned how to write SQL query statements to filter the data. Update
queries use a different type of SQL statement, called data manipulation statements. The
following data manipulation statements are supported by the IBM® SPSS® Data Collection
Data Model:
„ INSERT. Use to add new case data records.
„ UPDATE. Use to change existing case data records.
„ DELETE. Use to delete case data records.
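
For example, an INSERT statement that adds a new case data record might look like the following sketch. The serial number and response value are illustrative, and the statement succeeds only if the CDSC supports adding records:

```
INSERT INTO vdata (Respondent.Serial, gender)
VALUES (601, {female})
```
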

E Follow the links above to find out more about these SQL statements and use DM Query and the
Museum sample IBM SPSS Data Collection Data File (.ddf) to try out the examples.
E Now create the following DMS file, which contains two update queries:

InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
SelectQuery = "SELECT * FROM VDATA _
WHERE Respondent.Serial < 101"
UpdateQuery = "DELETE FROM vdata _
WHERE DataCollection.Status = {Test}"
End InputDataSource

OutputDataSource(myOutputDataSource, "My output data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\UpdateQuery.ddf"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\UpdateQuery.mdd"
UpdateQuery = "UPDATE vdata _
SET DataCollection.FinishTime = Now()"
End OutputDataSource

The update query in the InputDataSource section uses an SQL DELETE statement to delete the test
data. A WHERE clause is used to filter the case data records on the DataCollection.Status system
variable. Note that you should use update queries in the InputDataSource section of your DMS
files with extreme care, because the update query will modify the input data source irreversibly.

The update query in the OutputDataSource section uses an SQL UPDATE statement and the Now()
function to set the value of the DataCollection.FinishTime system variable to the current date and time.
E Now try running this example.

5. Running Your First Cleaning Script

Data cleaning is the process by which you correct errors and anomalies in the case data. Typically
you clean the data using mrScriptBasic code in the OnNextCase Event section. mrScriptBasic
is based on Visual Basic Scripting Edition (VBScript), which is in turn based on Visual Basic,
and if you are familiar with either of these languages, you will find mrScriptBasic easy to pick
up. This topic does not go into detail about mrScriptBasic, but rather walks you through a simple
cleaning script and shows you how to examine the results. The script is designed as a taster
rather than as a “real life” example.

We will look at and run the MyFirstCleaningScript.dms file, which is similar to the Cleaning.dms
file described in 1. More Than One Response To a Single Response Question in the Data Cleaning
section. However, it has been modified to deliberately introduce a number of errors into a copy
of the Museum XML sample data, which are then “cleaned” in the OnNextCase Event section.
Without these modifications, the cleaning script would not change the data because the Museum
sample data is generally clean.

E Let’s start by opening the MyFirstCleaningScript.dms file in IBM® SPSS® Data Collection
Base Professional. By default, the file is installed into the
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS folder.

Notice that the InputDataSource section contains the following update query:

UpdateQuery = "UPDATE vdata SET interest = {Birds, Fossils}, _
expect = expect + {12, 13, 14}, _
when_decid = when_decid + {129, 130, 131, 132, 133}, _
age = age + {1, 2, 3} _
WHERE Respondent.Serial < 11"

This updates the first 10 case data records in the input data source with additional responses to
four questions (interest, expect, when_decid, and age). This will make the answers to these
questions incorrect because they are all single response questions and so should have only one
response each. More than one response to a single response question is typical of the type of
error that data cleaning attempts to correct.

Now let’s look at the OnNextCase Event section in MyFirstCleaningScript.dms to see how these
four questions are cleaned.

Base Professional has an option to show the line numbers on the left side. If this option isn’t
selected, you may want to switch it on. For more information, see the topic IBM SPSS Data
Collection Base Professional options on p. 101.

When an error is encountered while the code is being validated or run, Base Professional displays a
message that includes the line number. For example, the message for an error that occurs in the
OnNextCase Event section on line 41 would look something like this:
Event(OnNextCase,"Clean the data")mrScriptEngine parse error:
Parser Error(41): ...

Using the option to display the line numbers makes it easy to locate the line on which the error
occurred. For more information, see the topic Debugging on p. 46.

MyFirstCleaningScript.dms shows several different ways of cleaning single response data that
contains more than one response. However, the AnswerCount function is always used to test whether
more than one response has been selected for the question. The AnswerCount function is part of the
IBM SPSS Data Collection Function Library, all of whose functions are automatically available
to mrScriptBasic.

Lines 43-45 test whether the interest question has more than one response and, if so, replace the
responses with the Not answered response.

Lines 47-49 replace any multiple responses to the expect question with a predefined default.

Lines 51-53 test whether there are multiple responses to the when_decid question and, if so, use
the SelectRandom function to select one of the responses at random and remove the rest.

Lines 55-59 handle multiple responses to the age question by setting the DataCleaning.Status
system variable to Needs review and adding a message both to a text string and to the
DataCleaning.Note system variable. Note that line 41 has already set up the text string to contain
the respondent’s serial number (the Respondent.Serial system variable) and line 68 writes the text
to a report file.

Lines 63-65 illustrate another way of handling a single response question that has more than
one response: deleting the record from the output data source. This is done for the gender
question; to illustrate it, lines 61-62 of the OnNextCase Event section give the gender question
multiple responses for one case data record (the one for which Respondent.Serial has a value of 11).
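
Taken together, the cleaning steps described above follow a pattern like the one sketched below. This is a simplified reconstruction rather than the exact contents of MyFirstCleaningScript.dms, so the code and its line numbering will not match the installed file exactly; the category names follow the Museum sample metadata as described above.

```
Event(OnNextCase, "Clean the data")
    ' Sketch only: category names follow the Museum sample metadata
    Dim strDetails
    strDetails = CText(Respondent.Serial)

    ' interest: replace multiple responses with Not answered
    If AnswerCount(interest) > 1 Then
        interest = {Not_answered}
    End If

    ' expect: replace multiple responses with a predefined default
    If AnswerCount(expect) > 1 Then
        expect = {general_knowledge_and_education}
    End If

    ' when_decid: keep one response chosen at random
    If AnswerCount(when_decid) > 1 Then
        when_decid = SelectRandom(when_decid)
    End If

    ' age: flag the case for review instead of changing the data
    If AnswerCount(age) > 1 Then
        DataCleaning.Status = {needsreview}
        DataCleaning.Note = "Age needs checking"
        strDetails = strDetails + ": Age needs checking"
    End If

    ' gender: drop the case from the output data source altogether
    If AnswerCount(gender) > 1 Then
        dmgrJob.DropCurrentCase()
    End If
End Event
```

The pattern is the same in every branch: test with AnswerCount, then either correct the response, flag the case, or drop it.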

E Now let’s run the DMS file.

E Now we will use DM Query to examine the cleaned data. For instructions on setting up DM Query
to do this, see the How to Run the Example Queries in DM Query Using the Museum Sample
topic in the IBM® SPSS® Data Collection Developer Library. However, remember to select the
output data source files (MyFirstCleaningScript.mdd and MyFirstCleaningScript.xml) rather than
the installed Museum sample files.

E Enter the following query into the text box in DM Query:

SELECT Respondent.Serial, interest, expect, when_decid,
age, gender, DataCleaning.Note, DataCleaning.Status
FROM vdata

Here are the results for the first 15 case data records:

First let’s study the first ten rows. These represent the case data records for which the update query
inserted additional responses for the interest, expect, when_decid, and age questions. As expected,
the multiple responses in the interest and expect columns have been replaced with Not_answered
and general_knowledge_and_education respectively. The multiple responses in the when_decid
column have been replaced by one response selected randomly. The responses in the age column
are unchanged, but we can see that the text has been written to the DataCleaning.Note variable
and the DataCleaning.Status variable is set to needsreview. Also, notice that, as expected, there is
no Respondent.Serial with a value of 11, because this is the record for which the gender variable
was given two responses and so the case was deleted from the output.

If you open the MyFirstCleaningScript.txt report file, you will see that the Age needs checking
text has been written for the first ten respondents. You can open the text file in Base Professional
or in a text editor, such as Notepad.

E To find out more about data cleaning, read the Data Cleaning section, which includes a general
introduction to data cleaning, an overview of using a DMS file to clean data, and examples of how
to handle many common data cleaning requirements.

The next topic, 6. Mastering mrScriptBasic, offers some suggestions for learning mrScriptBasic.

6. Mastering mrScriptBasic

In the previous topic, 5. Running Your First Cleaning Script, we looked at a simple example of
using mrScriptBasic to clean the case data. You use mrScriptBasic to write procedural code in
the Event sections of your DMS file. There are a number of possible Event sections, each one
being processed at a different time during the execution of the DMS file. Therefore each one is
suitable for different types of tasks. For more information, see the topic Event Section on p. 273.
However, the Event sections are not compulsory and there are many data management tasks that
you can achieve without using an Event section.

mrScriptBasic is based on Visual Basic Scripting Edition (VBScript), which is in turn based on
Visual Basic. However, the syntax of mrScriptBasic has a number of differences from VBScript
and Visual Basic. If you plan to write only very simple mrScriptBasic code, you may be able to
meet your needs by simply modifying the sample DMS files that are provided with the IBM®
SPSS® Data Collection Developer Library. However, if you plan to write more complicated
procedures, you will probably find you make faster progress if you spend some time learning
mrScriptBasic outside of a DMS file. This topic provides some suggestions on how to do that.
E If you are new to scripting or to programming with objects, start by working through the
getting-started material in the mrScriptBasic section.
E Then read the other introductory topics in that section. These give you an introduction to
mrScriptBasic, and describe the main syntax differences between mrScriptBasic and VBScript
and Visual Basic.

E Now follow the steps in Creating Your First mrScriptBasic Script. This topic introduces you to the
features in IBM® SPSS® Data Collection Base Professional that help you write mrScriptBasic
scripts, and explains how to run an mrScriptBasic file in Base Professional.
E Now run the documented examples, which are installed with the Data Collection
Developer Library as sample mrScriptBasic files. Refer to the Sample mrScriptBasic Files topic in
the Data Collection Developer Library for more information.
E Take the time to read the topics on each of the examples because they draw attention to many
things that you need to look out for when you start writing mrScriptBasic code.
E Now read Debugging to learn about the features in Base Professional that you can use to help
you debug your mrScriptBasic files.
E Now try creating some mrScriptBasic files of your own and running them. For example, you could
try to recreate in mrScriptBasic some of the Visual Basic examples shown in the MDM Tutorial
topic in the Data Collection Developer Library.
E While you are doing this, refer to the relevant documentation and make full use of the debugging features.

E Before you start writing mrScriptBasic code in the Event sections of your DMS files, take the
time to study the following topics:
„ Using Objects in the Event Sections. Describes which objects are automatically registered
with the mrScriptBasic engine in the various Event sections.
„ Event Section. Describes the various Event sections and provides links to topics on each one.
These topics contain a number of examples.
„ Data Cleaning Examples. Provides a number of examples of using mrScriptBasic in the
OnNextCase Event section to clean data.

Note: The IBM® SPSS® Data Collection Data Model comes with the mrScript Command Line Runner,
which provides an alternative way of running mrScriptBasic files. It also provides a number of
debugging features.
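
As a small taster of the syntax differences discussed in this topic, mrScriptBasic adds a native categorical data type and makes the Function Library available directly. The following fragment is an illustrative sketch (it is not one of the installed samples) and would not be valid VBScript:

```
' Categorical literals are a native mrScriptBasic type
Dim MyResponse
MyResponse = {Birds, Fossils}

' Function Library functions such as AnswerCount can be called directly
If AnswerCount(MyResponse) > 1 Then
    ' More than one category is selected
End If
```

Differences like these are why code written for VBScript usually needs some adjustment before it will run as mrScriptBasic.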

7. Learning about Weighting

Weighting is another term for sample balancing. You use weighting when you want the figures
in your table to reflect your target population more accurately than the actual figures do. For
example, suppose your target population consists of 57% women and 43% men, but you
interviewed 50% women and 50% men for your survey. By applying weighting, you can make the
women’s figures count for more than the men’s figures, so that they more accurately reflect the
gender distribution in the target population.
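
The underlying arithmetic is simple: with target weighting (the method used later in this topic), each group’s weight is its target proportion divided by its achieved proportion in the sample.

```
' Weight = target proportion / achieved sample proportion
'   Women: 0.57 / 0.50 = 1.14
'   Men:   0.43 / 0.50 = 0.86
' A weighted count of women is then 1.14 times the unweighted count,
' so the weighted gender split matches the 57/43 target population.
```
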

IBM® SPSS® Data Collection Base Professional includes the IBM® SPSS® Data Collection
Weight component, which enables you to set up weighting in your data. In this topic we will look
at and run the Weighting.dms file. This sets up weighting based on equal numbers of male and
female respondents.
E Let’s start by opening the Weighting.dms file in Base Professional. By default, the file is installed
into the [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS
folder.

Notice that this DMS file contains a Metadata Section:

Metadata (enu, Analysis, Label, Input)
    Weight "Weighting based on gender balance"
        Double usagetype("Weight");
End Metadata

This creates in the output data source a numeric variable, called Weight, to hold the weighting
information that we will set up using the Weight component in the OnJobEnd Event section.

Here is the OnJobEnd Event section code that calls the Weight component and sets up weighting
in the Weight variable that we defined in the Metadata section:

Event(OnJobEnd, "Weight the data")
Dim WgtEng, Wgt, fso, ReptFile
Set WgtEng = dmgrJob.WeightEngine

' Create an html file to contain the weighting report...

Set fso = CreateObject("Scripting.FileSystemObject")
Set ReptFile = _
fso.CreateTextFile("C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\WeightingReport.htm", _
True)

' Create an instance of the Weight object with the following parameters:
' "Weight" is the variable of type Double that was created in the
' metadata section.
' "gender" is the variable that stores the characteristics on
' which we wish to base the weighting.
' wtMethod.wtTargets defines the weighting method as target weighting.

Set Wgt = WgtEng.CreateWeight("Weight", "gender", wtMethod.wtTargets)

' Define a two cell weighting matrix with a target of 301 for
' each value of the gender variable...

Wgt.CellRows.Targets = "301; 301"

' Call the WeightEngine.Prepare method and then write the
' weighting report to the html file...

WgtEng.Prepare(Wgt)
ReptFile.Write(Wgt.Report)

' Call the WeightEngine.Execute method, which will insert the
' calculated weight values in the Weight variable. Then
' write the weighting report to the html file...

WgtEng.Execute(Wgt)
ReptFile.Write(Wgt.Report)
ReptFile.Close()

' Setting WgtEng to Null ensures that the connection to the
' output data source is closed and that any pending data
' updates are flushed...

Set WgtEng = Null

End Event

E Now run the Weighting.dms file.

E If you have the Base Professional Tables Option, you can use the DMSWeightedTables.mrs table
scripting sample mrScriptBasic file to create two tables of Age by Gender, the first unweighted
and the second weighted using the Weight variable we have just set up. (If you don’t have the
Base Professional Tables Option, you can use DM Query instead as described below.) To run the
DMSWeightedTables.mrs sample:

1. Open the DMSWeightedTables.mrs file in Base Professional. (The table scripting sample files are
typically installed into the [INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Scripts\Tables
folder.)

2. Press Ctrl+F5, or choose Start Without Debugging from the Debug menu.

Here is the unweighted table:

Here is the weighted table:



E Notice that in the unweighted table there are 339 male and 263 female respondents in the Base
row, whereas in the table weighted using the Weight variable there are equal numbers of male and
female respondents in the Base row.

E If you do not have the Base Professional Tables Option, create equivalent tables in DM Query.
Refer to the How to Run the Example Queries in DM Query Using the Museum Sample topic in
the IBM® SPSS® Data Collection Developer Library for more information. However, remember
to select the output data source files (Weighting.mdd and Weighting.ddf) rather than the installed
Museum sample files.

E To create the unweighted table, enter the following into the text box:

SELECT groupby.col[0] AS Age,
SUM(gender = {male}) AS Male,
SUM(gender = {female}) AS Female
FROM vdata
GROUP BY age ON age.DefinedCategories()
WITH (BaseSummaryRow)

Here are the results:

E To create the weighted table in DM Query, enter the following into the text box:

SELECT groupby.col[0] AS Age,
SUM((gender = {male}) * Weight) AS Male,
SUM((gender = {female}) * Weight) AS Female
FROM vdata
GROUP BY age ON age.DefinedCategories()
WITH (BaseSummaryRow)

Here are the results:

The difference in decimal precision is because the Base Professional Tables Option post-processes
the results. Refer to the Advanced SQL Queries topic in the Data Collection Developer Library
for more information.

Note that you could include code in the OnAfterJobEnd Event section in your DMS file to set up
the tables. For more information, see the topic OnAfterJobEnd Event Section on p. 285.

The Weight component automatically creates a weighting report. The OnJobEnd Event section
contains the following line:
ReptFile.Write(Wgt.Report)

This writes the report to an HTML file called WeightingReport.htm.

E Let’s open the weighting report file now. To do this, in Windows Explorer go to the
location of the weighting report file (typically this is
[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output) and double-click
WeightingReport.htm.

This opens the file in your default browser:



For more information, see the topic Weighting Report on p. 412.

E To find out more about weighting, read the Working with the Weight Component section, which
includes a general introduction to weighting, details of the various weighting methods and their
formulae, and examples of using each of the weighting methods. Note that most of the examples
are in the form of mrScriptBasic (.mrs) files. The final topic in the Working with the Weight
Component section, Setting Up Weighting in a DMS File, explains the changes you need to make
to the examples to incorporate them into a DMS file.

The next topic, 8. Mastering mrScriptMetadata, offers some suggestions for learning
mrScriptMetadata.

8. Mastering mrScriptMetadata

In 7. Learning about Weighting, we used a line of mrScriptMetadata to create a numeric variable
in the output data source to hold the weighting information. The mrScriptMetadata was in the
Metadata section, which is an optional section in a DMS file for creating new questions and
variables. Typically, you would use this to create filters and other derived variables for use during
analysis. However, the DMS file does not limit you in the type of metadata you can create. You
define the metadata using mrScriptMetadata, which has been designed as a fast and easy way
of creating metadata using a script.
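
For example, a minimal Metadata section that defines a new single response question might look like the following sketch. The question and category names here are hypothetical and are not part of the installed samples:

```
Metadata (enu, Analysis, Label)
    ' Hypothetical single response question for use during analysis
    VisitorType "Type of visitor"
        categorical [1..1]
        {
            FirstTime "First-time visitor",
            Returning "Returning visitor"
        };
End Metadata
```

Each definition follows the same shape as the Weight variable in the previous topic: a name, a label, a data type, and any type-specific details such as a category list.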

If you are planning to use the Metadata section to create anything other than the simplest variables,
you may find it helpful to spend some time familiarizing yourself with mrScriptMetadata outside
of a DMS file. This topic provides some suggestions for how to do that.

E Start by reading the introductory topics in the mrScriptMetadata section. These give you an
introduction to mrScriptMetadata and describe how you can open an mrScriptMetadata file in
IBM® SPSS® Data Collection Metadata Model Explorer.

E Refer to the mrScript MDSC topic in the IBM® SPSS® Data Collection Developer Library.
mrScript MDSC can convert an mrScriptMetadata file into an MDM Document and an MDM
Document into an mrScriptMetadata file.

E The Data Collection Developer Library comes with some sample mrScriptMetadata files. Follow
the instructions in the second topic in the User’s Guide to open these files in MDM Explorer.

E Now try creating some mrScriptMetadata files of your own and opening them in MDM Explorer.
For example, you could try to recreate some of the questions in some of the sample .mdd files.
Refer to the Sample Data topic in the Data Collection Developer Library for more information.

E You can also open the sample .mdd files in MDM Explorer to see an mrScriptMetadata
representation of the various metadata objects.

E While you are doing this, refer to the mrScriptMetadata documentation.

E Before you start writing mrScriptMetadata code in your DMS files, take the time to study the
topics in the Creating New Variables section.

9. Getting To Know the Samples

The IBM® SPSS® Data Collection Developer Library comes with a number of sample DMS files
that have been set up to demonstrate many common data management tasks. Most of these sample
files have been designed to run straight “out of the box” on the sample data that also comes with
the Data Collection Developer Library. However, if you did a custom installation, you may need
to modify the file locations specified in the samples before you run them. Some of the samples
require other software to be installed as well as IBM® SPSS® Data Collection Base Professional.
For example, some of the samples require one or more of the Microsoft Office applications, such
as Word, Access, or Excel. Below you will find links to topics that list the samples. These topics
provide a brief explanation of each sample and explain any special requirements.

The sample DMS files are divided into five groups:


„ Sample DMS Files. These demonstrate many basic features like transferring data to and
from various data formats, cleaning data, setting up weighting, creating new variables, using
the IBM® SPSS® Data Collection Metadata Model to Quantum component to set up card,
column, and punch definitions, etc.
„ Sample DMS Files For Exporting IBM SPSS Data Collection Interviewer Server Data.
These samples have been designed to demonstrate exporting IBM® SPSS® Data Collection
Interviewer Server data that has been collected using multiple questionnaire versions. All of
these samples use the Short Drinks sample database, but they can be adapted easily to run with
any multiversion project. You need access to an SQL Server installation and appropriate user
access rights to run these samples. You also need to restore the Short Drinks sample. Refer to
The Short Drinks Sample topic in the Data Collection Developer Library for more information.
„ Sample DMS Files That Integrate with Microsoft Office. These demonstrate some advanced
features, such as transferring data from an Access database, setting up tables and charts in
Excel and topline tables in Word.
„ Table Scripting Sample Data Management Scripts. These provide some examples of scripting
tables in a DMS file. To run these samples, you need to have Base Professional Tables Option
installed. Some of the samples have additional requirements.
„ Sample DMS Include Files. These are Include files that can be reused in other DMS files.

You can use the sample DMS files as a starting point when you develop your own DMS files.
However, it is recommended that you work on a copy of the samples rather than the samples
themselves. An easy way of doing this is to copy the entire folder to another location on your
computer and then work on the copies in the new location. This means you will avoid losing your
work when you uninstall or upgrade to a new version of the Data Collection Developer Library
and it will be easy to refer back to the original samples when necessary.

This topic includes some suggestions for how to make the most of the sample files. However,
some of the sample files use the Include file and text substitution features. So before you begin to
look at the samples, read the topics that explain these features:
„ Using Include Files in the DMS file
„ Using Text Substitution in the DMS file
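
In outline, the two features look like this in a DMS file. The macro and file names below are hypothetical, and the exact syntax and options are described in the topics just mentioned:

```
' Text substitution: later occurrences of the name OUTPUT_NAME
' are replaced by the text it was defined with
#define OUTPUT_NAME "MyTransfer"

' Include file: the contents of the named file are inserted here
#include ".\Include\StandardLogging.dms"
```

Because both features are resolved before the script runs, the file that Base Professional and DMS Runner actually execute is the expanded result, which is why the expanded-file technique described below is so useful.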

Now, start by studying the lists of sample files and make your own list of which ones seem most
relevant to your work. For each of the samples on your list, do the following:

E Open the sample in Base Professional. Look at each section and try to work out what it means.
Refer to the relevant parts of the documentation. (Note that the topics that list the samples include
handy links to relevant documentation.)

E If the sample uses one or more Include files and/or text substitutions, use the /a: and /norun options
in DMS Runner to save the expanded file. For example, if you change the path in the command
prompt to the folder where the sample DMS files are installed, you could use the following to
save the expanded MDM2QuantumExtra.dms sample to a file called MyExpanded.dms, without
running it:
DMSRun MDM2QuantumExtra.dms /a:MyExpanded.dms /norun

E Now open the expanded file in Base Professional and study it. The expanded file shows the file as
Base Professional and DMS Runner “see” it after all of the text substitutions and Include files
have been implemented. The line numbers shown in error messages always refer to the line
numbers in the expanded file. Note that any comments that appeared between the sections in the
original will not appear in the expanded file.

E Now use Base Professional or DMS Runner to run the sample on the sample data.

E Use the tools at your disposal to study the output of the transformation. How you do this depends
on the type of output. For example, examine report, log, and error files in a text editor. For output
data types that can be read by the IBM® SPSS® Data Collection Data Model, you could open the
relevant output files in IBM® SPSS® Data Collection Paper or IBM® SPSS® Data Collection
Survey Tabulation if you have them. Alternatively you could use DM Query to look at the case
data and MDM Explorer to look at the metadata.

E Now try modifying a copy of the sample to run on your own data. However, make sure that you
do not use any of the features (such as update queries in the InputDataSource section or the
UseInputAsOutput feature) that will actually change the input data until you are sure that is
what you want to do.

The Data Collection Developer Library also comes with the executable file and Visual Basic .NET
source code for WinDMSRun, which is a simple Windows tool that you can use to set up and run
simple DMS files. If you are a programmer who wants to develop applications that use the Data
Management Object Model (DMOM), you may want to use the source code as a reference. For
more information, see the topic WinDMSRun as Programmer’s Reference on p. 309.

10. Where Do I Go From Here?

If you have followed the steps in this Getting Started guide, you should now have an idea of what
you can achieve using a DMS file. You may also be aware of how much there is still to learn.

You may want to sit down now and read through the Data Management Scripting section of the
IBM® SPSS® Data Collection Developer Library. However, this approach may not suit everyone,
and you may prefer to wait until you have a specific problem you want to solve. However, if you
do decide to wait before reading much more, it is still a good idea to browse through the contents of
the Data Management Scripting section, so that you have an idea of the material that it covers.

Notice that the Data Management Reference section provides detailed documentation of all of
the properties and methods of all of the Data Management Object Model (DMOM) and Weight
Component objects. However, you are not restricted to using these objects in your Event section
code. You can use any objects that are registered on your computer, including the objects that are
part of the various IBM® SPSS® Data Collection Data Model components.

You may therefore want to spend some time familiarizing yourself with the structure of the Data
Collection Developer Library, particularly:
„ The scripting reference section. This provides detailed documentation of mrScriptBasic,
mrScriptMetadata, and the IBM SPSS Data Collection Function Library.
„ Data Model Reference topic in the Data Collection Developer Library. This provides detailed
documentation of the main object models that are part of the Data Model, including the MDM
and IBM® SPSS® Data Collection Metadata Model to Quantum Component object model
documentation, which you may want to use in your Event section code.

It will almost certainly be in your interest to spend some time learning how to make the most of
the Data Collection Developer Library. Read the Getting the Most Out of the DDL section and
practice using the Index and Search features. When you find a topic that is useful, click the
Contents tab to see which section it is in. This will help you understand how it fits into the bigger
picture and to find related topics.

For example, suppose you want to write advanced Event section code using the Metadata Model
to Quantum component and you want to know whether the component has a method for writing
the data map to a .csv file. If you just type “MDM2Quantum” into the search field, the search will
return many topics. You could narrow the search by typing “MDM2Quantum AND csv”.

The search will now return fewer topics because it will return only those that contain both
“MDM2Quantum” and “csv”. If you look through the topics that are returned, you will see one on
the MDM2Quantum.WriteToCommaSeparatedFile method. This topic describes a method of the
Metadata Model to Quantum component that writes the data map to a .csv file.

If you click the topic in the list and then click the Contents tab, you will then see that the topic is
located in the Methods section of the Metadata Model to Quantum object documentation, which is
in the Data Model Reference section. You will notice that there are topics on each of the Metadata
Model to Quantum object’s other methods and properties in the same section.

You could also have reached the Metadata Model to Quantum Component documentation using
the index. For example, by entering “Metadata Model to Quantum component” into the index
keyword text box and then selecting the Metadata Model to Quantum Component, Overview
subentry. Using the index is sometimes quicker than using the search. However, the search is
useful when the index does not lead you to what you are looking for.

If you prefer to use printed documentation, you can easily print individual topics or all of the
topics in a section.

Understanding the Process Flow


This section provides a number of diagrams that illustrate various aspects of DMS file processing:
„ DMS File Flow. Provides a simplified representation of the processing that is performed
when you run a standard DMS file.
„ DMS File Flow When You Use the UseInputAsOutput Option. Provides a simplified
representation of the processing that is performed when you run a DMS file that uses the
UseInputAsOutput option.
„ DMS File Flow In a Case Data Only Transformation. Provides a simplified representation of
the processing that is performed when you run a DMS file that operates on case data only,
either because you have not specified an input metadata source or because you are using a
provider that is not part of the IBM® SPSS® Data Collection Data Model to read the data.
„ DMS File Flow When Operating on Metadata Only. Provides a simplified representation of
the processing that is performed when you run a DMS file that operates on metadata only.
„ Example Timeline. A timeline that represents the processing of a hypothetical DMS file.

DMS File Flow

The following diagram provides a simplified representation of the processing that is performed
when you run a standard DMS file. The diagram is intended to show the sequence in which the
various parts of the file are executed. However, some parts of the file are optional. For example,
the input and output data source update queries are optional as are the GlobalSQLVariables
section and the OnBeforeJobStart, OnAfterMetaDataTransformation, OnJobStart, OnNextCase,
OnBadCase, OnJobEnd, and OnAfterJobEnd Event sections.

The numbers from 1 (at top of diagram) to 13 (at bottom) indicate the sequence in which the
processing takes place. Scroll down below the diagram for notes on each numbered step.

For a diagram that illustrates the processing of a hypothetical job in the form of a timeline, see
Example Timeline.

1. The OnBeforeJobStart Event Section is processed first. This is typically used to set up card
and column allocations in the input data source’s metadata in a job that exports case data to an
IBM® SPSS® Quantum™ .dat file.

2. The GlobalSQLVariables Section can optionally be used to exchange information between the
output and input data source. This is useful when you are transferring case data in batches and
want to transfer only records that have been collected since the last batch was transferred.

3. The Update Query defined in the InputDataSource Section can be used to add, update, or delete
case data in the input data source and is typically used to remove unwanted test data. However,
note that this should be used with caution because the input data source is updated irreversibly.

4. The metadata specified in the connection string in the InputDataSource section is merged with
the metadata defined in the Metadata section, if there is one. The merged metadata is then made
available to the input data source so that any new variables (from the Metadata section) that have
been included in the Select Query statement will be returned by the query.

5. The merged metadata is then filtered according to the filter defined in the Select Query in the
InputDataSource section and written to the output metadata file defined in the OutputDataSource
section.

6. The OnAfterMetaDataTransformation Event Section is run after the metadata merge is complete
and is typically used to set up card, column, and punch definitions in the new variables created in
the Metadata section.

7. The output case data source is created (if it does not already exist) and synchronized with
the output metadata.

8. The OnJobStart Event Section is run before the processing of the individual cases and is typically
used to set up global variables that are required in the OnNextCase and OnBadCase sections.

9. The OnNextCase Event section is processed for each case included in the transfer and is
typically used to clean the case data.

10. The OnBadCase Event section is processed for each case that has failed validation and will not
be transferred to the output data source, and is typically used to create a report of bad cases.

11. The OnJobEnd Event section is run after the processing of the last case has been completed
and is typically used to close report files and set up weighting using the Weight component.

12. The Update Query defined in the OutputDataSource section can be used to add, update, or
delete case data in the output data source.

13. The OnAfterJobEnd Event section is processed after all other processing has finished. This is
typically used to set up tables or to e-mail a report or notification.
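The sequence above maps onto the physical layout of a DMS file. The following outline is a sketch only (the connection strings, queries, and section contents are placeholders, not working values, and the Metadata section header parameters shown follow the usual language, context, label type pattern); it shows one plausible arrangement of the sections involved in a standard transformation:

Event(OnBeforeJobStart, "Prepare the input data source")
' ...
End Event

InputDataSource(myInputDataSource)
ConnectionString = "..."
SelectQuery = "SELECT * FROM vdata"
End InputDataSource

Metadata(ENU, Analysis, Label)
' New variables defined here are merged with the input metadata
End Metadata

Event(OnNextCase, "Clean each case")
' ...
End Event

OutputDataSource(myOutputDataSource)
ConnectionString = "..."
MetaDataOutputName = "..."
End OutputDataSource

Event(OnAfterJobEnd, "Report on the finished job")
' ...
End Event

Note that the sections do not have to appear in the file in the order in which they are executed; the processing sequence is determined by the section types, as described in the numbered steps above.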
234

Chapter 1

DMS File Flow When You Use the UseInputAsOutput Option

The following diagram provides a simplified representation of the processing that is performed
when you run a DMS file using the UseInputAsOutput option. Note that this option should be
used with caution because the input data source is updated irreversibly.
„ You can specify the UseInputAsOutput option in one of the InputDataSource sections, in
which case you do not need an OutputDataSource section in your DMS script. If you are
using the IBM® SPSS® Data Collection Data Model to read the case data, you can specify
the UseInputAsOutput option only if the CDSC for that data source supports the updating
of existing records. You must also set the MR Init MDM Access connection property to 1 in
the InputDataSource section to open the data source for read/write access or when operating
in validation mode.
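As an illustration, an InputDataSource section that uses this option might look like the following sketch. The file locations are placeholders, and the example assumes a CDSC (here, the IBM SPSS Data Collection Data File CDSC) that supports updating existing records:

InputDataSource(myInputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\MyData\mydata.ddf; _
Initial Catalog=C:\MyData\mydata.mdd; _
MR Init MDM Access=1"
UseInputAsOutput = True
End InputDataSource

Because there is no separate output data source, any changes made in the Event sections are written back to mydata.ddf itself.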

The diagram is intended to show the sequence in which the various parts of the file are executed.
However, some parts of the file are optional. For example, the update query is optional as are
the OnBeforeJobStart, OnAfterMetaDataTransformation, OnJobStart, OnNextCase, OnBadCase,
OnJobEnd, and OnAfterJobEnd Event sections.

The numbers from 1 (at top of diagram) to 9 (at bottom) indicate the sequence in which the
processing takes place. Scroll down below the diagram for notes on each numbered step.

1. The OnBeforeJobStart Event Section is processed first.

2. The Update Query defined in the InputDataSource Section can be used to add, update, or delete
case data in the input data source and is typically used to remove unwanted test data.

3. The metadata specified in the Metadata section is merged with the metadata specified in the
connection string in the InputDataSource section. The merged metadata is then made available
to the input data source so that any new variables (from the Metadata section) that have been
included in the Select Query statement will be returned by the query.

4. The OnAfterMetaDataTransformation Event Section is run after the metadata merge is complete.

5. The OnJobStart Event Section is run before the processing of the individual cases and is typically
used to set up global variables that are required in the OnNextCase and OnBadCase sections.

6. The OnNextCase Event section is processed for each case included in the transfer and is
typically used to clean the case data.

7. The OnBadCase Event section is processed for each case that has failed validation and will not
be transferred to the output data source, and is typically used to create a report of bad cases.

8. The OnJobEnd Event section is run after the processing of the last case has been completed and
is typically used to close report files and set up weighting using the Weight component.

9. The OnAfterJobEnd Event section is processed after all other processing has finished. This is
typically used to set up tables.

DMS File Flow In a Case Data Only Transformation

The following diagram provides a simplified representation of the processing that is performed
when you run a case data-only transformation, either because you have not specified an input
metadata source or because you are using an OLE DB provider that is not part of the IBM®
SPSS® Data Collection Data Model to read the data.

The diagram is intended to show the sequence in which the various parts of the file are executed.
However, some parts of the file are optional. For example, the update queries are optional as are
the OnBeforeJobStart and OnAfterJobEnd Event sections. Note that you cannot use an
OnAfterMetaDataTransformation, OnJobStart, OnNextCase, OnBadCase, or OnJobEnd Event
section in a case data-only transformation.

The numbers from 1 (at top of diagram) to 8 (at bottom) indicate the sequence in which the
processing takes place. Scroll down below the diagram for notes on each numbered step.

1. The OnBeforeJobStart Event Section is processed first.

2. The GlobalSQLVariables Section can optionally be used to exchange information between the
output and input data source. This is useful when you are transferring case data in batches and
want to transfer only records that have been collected since the last batch was transferred.

3. The Update Query defined in the InputDataSource Section can be used to add, update, or delete
case data in the input data source.

4. The case data is filtered according to the select query specified in the InputDataSource section.

5. The output data source is created based upon the variables specified in the select query in the
InputDataSource section.

6. The case data specified in the select query is transferred to the output data source.

7. The update query specified in the OutputDataSource Section can be used to add, update, or
delete case data in the output data source.

8. The OnAfterJobEnd Event section is processed after all other processing has finished. This
can be used to set up tables.
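To illustrate step 2, the following sketch (the section, variable, and file names are illustrative) uses a GlobalSQLVariables section to read the highest serial number already transferred to an accumulated output file, and then refers to that value in the input select query using the @ prefix, so that only newer records are transferred:

GlobalSQLVariables(myGlobalSQLVariables)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\MyData\Accumulated.ddf"
SelectQuery = "SELECT MAX(Respondent.Serial) AS MaxSerial FROM vdata"
End GlobalSQLVariables

InputDataSource(myInputDataSource)
ConnectionString = "..."
SelectQuery = "SELECT * FROM vdata WHERE Respondent.Serial > @MaxSerial"
End InputDataSource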

DMS File Flow When Operating on Metadata Only

The following diagram provides a simplified representation of the processing that is performed
when you run a DMS file that operates on metadata only. The diagram is intended to show the
sequence in which the various parts of the file are executed. However, some parts of the file
are optional.

The numbers from 1 (at top of diagram) to 5 (at bottom) indicate the sequence in which the
processing takes place. Scroll down below the diagram for notes on each numbered step.

1. The OnBeforeJobStart Event Section is processed first.

2. The metadata is filtered according to the select query specified in the InputDataSource Section.

3. The metadata specified in the Metadata section (if any) is merged with the filtered metadata to
give the output metadata.

4. The OnAfterMetaDataTransformation Event Section is processed after the merge is complete.

5. The OnAfterJobEnd Event section is processed after all other processing has finished. This can
be used to export an IBM® SPSS® Quancept™ script.

Example Timeline

The following diagram is a timeline that represents the processing of a hypothetical DMS file,
which is described below.

InputDataSource section. This includes:


„ ConnectionString. Defines a connection to a relational MR database that stores dirty data
collected using IBM® SPSS® Data Collection Interviewer Server and IBM® SPSS® Data
Collection Paper - Scan Add-on.
„ UpdateQuery. Deletes all test data from the database.
„ SelectQuery. Selects a subset of variables that are suitable for analysis and all completed
non-test cases.

Metadata section. This defines a numeric variable to be used to hold weighting defined using the
Weight component in the OnJobEnd Event section, and a number of filter and banner variables
that will be required during analysis.

OutputDataSource section. This includes:


„ ConnectionString. Defines a connection to the data source that will receive the clean case data.
This could be a new or existing file or database.
„ MetaDataOutputName. Defines the name and path of the new .mdd file that will define the
structure of the output data source.

OnBeforeJobStart Event section. This defines card and column specifications in the input data
source for use when transferring case data to an IBM® SPSS® Quantum™ .dat file.

OnJobStart Event section. Sets up various global variables that are required in the OnNextCase and
OnBadCase Event sections, including a text file for reporting purposes.

OnNextCase Event section. This section contains cleaning code that will be applied to each case
that is included in the transfer.

OnBadCase Event section. This section contains reporting code that will be executed for each case
that has failed validation and will not be transferred to the output data source.

OnJobEnd Event section. Closes the report file and uses the Weight component to set up weighting
in the numeric variable defined in the Metadata Section.

Data Management Script (DMS) File

The Data Management Script (DMS) file is a text file with a .dms filename extension, which defines
a data transformation job. The DMS file provides a scalable solution to your data management
tasks. For example, at its simplest, a DMS file can define a simple transfer from one data format or
location to another. A more complex example would be a DMS file that includes complex cleaning
algorithms, that sets up several different types of weighting and a number of filter and banner
variables for use during analysis, and transfers a subset of the case data to three different formats.

You can create a DMS file using IBM® SPSS® Data Collection Base Professional or a standard
text editor, such as Notepad. When using a text editor, if the file contains non-Western European
characters, make sure that you save it using the Unicode or UTF-8 text format option and not
the ANSI option.

This section defines the syntax of the DMS file and provides some examples. The conventions
used for defining the syntax are similar to those used in the IBM® SPSS® Data Collection
Scripting section.

This section includes:


„ Simple Example of a DMS File
„ Filtering Data in a DMS File
„ Breaking Up Long Lines in the DMS file
„ Comments in the DMS file
„ Using Include Files in the DMS file
„ Using Text Substitution in the DMS file
„ Sections in the DMS file

Tips
„ If you are new to DMS files, try working through the Getting Started Guide, if you haven’t
already done so. For more information, see the topic Getting Started on p. 208.
„ The IBM® SPSS® Data Collection Developer Library comes with WinDMSRun, a sample
Windows application for generating, validating, and running a simple DMS file. For more
information, see the topic WinDMSRun on p. 300.
„ The Data Collection Developer Library comes with numerous sample DMS files, which you
can use as templates. For more information, see the topic Using the Sample DMS Files
on p. 466.

„ The Data Management Troubleshooting and FAQs section provides tips and answers to some
common problems and queries.
„ The Understanding the Process Flow section provides some diagrammatic representations of
the order of processing the various sections in the DMS file in different situations (such
as a typical standard transformation, using the UseInputAsOutput option, a case data-only
transformation, and when operating on metadata only).

Simple Example of a DMS File

This topic provides an example of a very simple DMS file, which transfers all of the case data
stored in an IBM SPSS Data Collection Data File to an IBM® SPSS® Statistics (.sav) file. This
example demonstrates the minimum contents of a DMS file, which must contain the following
sections:
„ InputDataSource Section. Defines the location and format of the data that you want to transfer.
(This example contains only one input data source. However, you can define more than one
InputDataSource section in a DMS file when you want to merge data.)
„ OutputDataSource Section. Defines the location and format to which you want to transfer the
data. (This example contains only one output data source. However, you can define more than
one OutputDataSource section in a DMS file. When you do this, the data will be transferred to
each of the specified output data sources.)

InputDataSource(myInputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
End InputDataSource

OutputDataSource(myOutputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrSavDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Simple.sav"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Simple.mdd"
End OutputDataSource

It is recommended that you also include a Logging Section in your DMS file. This means that
details of any problems will be written to a log file, which is useful when trying to track down the
cause of any problems that occur.

Logging(myLog)
Path = "c:\temp"
Group = "DMGR"
Alias = "Tester"
FileSize = 500
End Logging

Note: This example is provided as a sample DMS file (called Simple.dms) that is installed with the
IBM® SPSS® Data Collection Developer Library. For more information, see the topic Sample
DMS Files on p. 467. For step-by-step instructions on running a DMS file, see 1. Running
Your First Transfer.

Filtering Data in a DMS File

Filters enable you to perform a data transformation on a subset of the data in a data source. For
example, when you transfer data you may often want to transfer some, but not all, of the data.
This topic describes some typical scenarios.

Metadata Filter

When you export data to a particular format for analysis (such as IBM® SPSS® Statistics .sav or
IBM® SPSS® Quantum™ .dat), you may want to exclude from the export all of the variables that
will not be useful to the analysis that you are planning. You may also want to aggregate existing
variables to a new variable (helpful when performing analysis). You can achieve this by defining a
metadata filter that defines the variables to include, or aggregate, in the export.

In the DMS file, you define a metadata filter by specifying in the select query defined for the input
data source the variables that you want to include or aggregate.

Note: Since version 5.6, the Data Management Object Model (DMOM) will check for duplicate
select variables in query strings. As a result, the following example would result in the error
Duplicate select variable: Age when the transformation starts:
SelectQuery = "SELECT Person.(Age, Age, Name) FROM HDATA"

1. Filtering metadata via a VDATA query

The following InputDataSource section defines a metadata filter that specifies seven variables that
will be included in the data transformation:
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = "SELECT age, gender, education, _
interest, rating, remember, signs FROM VDATA"
End InputDataSource

2. Filtering metadata from one level via an HDATA query

The following InputDataSource section defines a metadata filter that specifies three variables,
from the Person level, that will be included in the data transformation:
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = " SELECT Name, Age, Trip FROM HDATA.Person"
End InputDataSource

3. Filtering metadata from different levels via an HDATA query

The following InputDataSource section defines a metadata filter that specifies one variable from
the parent level, one variable from current level, and one variable from the children level, that
will be included in the data transformation:
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = "SELECT ^.address, Name, _
Trip.(Country) FROM HDATA.Person"
End InputDataSource

4. Creating new variables using an expression or aggregation

The following InputDataSource section defines a metadata filter that specifies one aggregation and
one expression that will create new variables in the data transformation:
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = "SELECT sum(visits+visits12) as 'total visits', _
name+address as person FROM HDATA"
End InputDataSource

5. Selecting a grid slice via a VDATA or HDATA query

The following InputDataSource section defines a metadata filter that specifies two grid slices
in the data transformation:
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = "SELECT Person[3].tvdays[channel_1].column as NewGridSlice1, _
Person[1].Age, Person[3].tvdays[channel_1].column FROM HDATA"
End InputDataSource

The first grid slice will be changed to a normal field after transformation (it has been renamed).
The second grid slice will remain at its HDATA level structure after transformation (it has not
been renamed).

6. Renaming with a metadata filter

Using metadata filtering, you can rename a variable, expression, aggregation, or grid slice.
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = "SELECT ^.Pets, _
^.address as HomeAddress, _
Name as 'Person Name', _
Age+Weight as 'PersonWeight', _
SUM(Age+Gender) FROM HDATA.Person"
End InputDataSource

The resulting output metadata is:


Household_Pets (Auto renamed)
HomeAddress
Person_Name (Auto renamed)
PersonWeight
SUM_Age_Gender_ (Auto renamed)

The system will automatically rename if any of the following conditions are met:
„ The user specified a new name, but it does not meet variable naming conventions (contains
blank spaces, symbols, and so on). For example, Name as 'Person Name' would be
automatically renamed because of the space between Person and Name.
„ The Select query contains a down-leveled variable, but the user did not explicitly rename
the variable. For example, ^.Pets would be automatically renamed to <filename>_Pets
(Household_Pets) because it was down-leveled one layer from the top level.
As another example, ^.^.Age would be automatically renamed to <filename>_<parent level
name>_age (household_person_age) because it was down-leveled two layers from the top
level and the person level.
„ The Select query contains an expression or aggregation, but does not explicitly rename the
expression or aggregation. For example, SUM(Age+Gender) would be automatically renamed to
SUM_Age_Gender_ based on metadata creation naming conventions.

Tips:

1. You can use the IBM® SPSS® Data Collection Base Professional Metadata Viewer to view the
names of the variables. For more information, see the topic Using the Metadata Viewer on p. 28.

2. When you have a large number of variables to specify or they have long names, you may find
it easier to set up the metadata filter in WinDMSRun. For more information, see the topic
WinDMSRun Window: Variable Selection on p. 305.

Case Data Filter

In a large study, you may want to export the case data for each region separately, so that it can be
analyzed separately. You can achieve this by defining a case data filter that selects the cases for
which the region variable has a specified value.

In the DMS file, you define a case data filter of this type using a WHERE clause in the select
query defined for the input data source. For example, this InputDataSource section defines a
case data filter (the WHERE clause) that selects cases for which the South category is selected for
the region variable:
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = "SELECT * FROM vdata WHERE region = {South}"
End InputDataSource

For more information on defining case data filters using the WHERE clause, see the Basic SQL
Queries topic in the IBM® SPSS® Data Collection Developer Library.

Tips:

1. You can use the Base Professional Metadata Viewer to view the variable and category names. For
more information, see the topic Using the Metadata Viewer on p. 28.
2. If you are working with IBM® SPSS® Data Collection Interviewer Server data and want to filter
out test, active, and timed out records, or select records based on the date they were collected, you
may find it easier to set up the case data filter in WinDMSRun. For more information, see the
topic WinDMSRun Window: Case Selection on p. 305.
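If you prefer to define such a filter directly in the DMS file rather than in WinDMSRun, a WHERE clause on the DataCollection.Status system variable can achieve a similar result. For example, the following sketch (assuming the input data source contains this Interviewer Server system variable) selects only records whose status is exactly Completed, which excludes test, active, and timed out records:

InputDataSource(myInputDataSource, "My input data source")
ConnectionString = ...
SelectQuery = "SELECT * FROM vdata _
WHERE DataCollection.Status = {Completed}"
End InputDataSource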

When cleaning data in an ongoing study, you may want to set up a case data filter to include only
cases that have not been cleaned before. One way of doing this is to use a global SQL variable.
For more information, see the topic GlobalSQLVariables Section on p. 266. In your cleaning
script, you may also want to specify that some cases are to be excluded; for example, because they
contain questionable responses. In the DMS file, you can define this type of case data filter using
the dmgrJob.DropCurrentCase method in the OnNextCase Event section. Here is an example:
Event(OnNextCase)
If age < 0 Then dmgrJob.DropCurrentCase()
.
.
.
End Event

You can also use an update query in the OutputDataSource section to remove case data records
from the output data source. For example, the following snippet shows how you could use
update queries in two OutputDataSource sections to split the case data between two output data
sources—one of which stores the clean data and the other the dirty data.
OutputDataSource(Clean, "My clean data")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\CleanData.ddf"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\CleanData.mdd"
UpdateQuery="DELETE FROM vdata _
WHERE DataCleaning.Status = {NeedsReview}"
End OutputDataSource

OutputDataSource(Dirty, "My dirty data")


ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\DirtyData.ddf"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\DirtyData.mdd"
UpdateQuery="DELETE FROM vdata _
WHERE NOT DataCleaning.Status.ContainsAny({NeedsReview})"
End OutputDataSource

Note: This example is included in the SplitIntoDirtyAndClean.dms sample. For more information,
see the topic Sample DMS Files on p. 467.

Breaking Up Long Lines in the DMS file

A long statement can be broken into multiple lines using the line-continuation characters (a space
followed by an underscore). Doing this can make your code easier to read, both online and when
printed. Here is an example of a connection string in the InputDataSource section that has been
broken into several lines using the line-continuation characters ( _):
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
End InputDataSource

There are some restrictions on where you can break lines. For example, you cannot break a line
in the middle of a connection property. However, you can break a connection string, provided
the individual properties are intact. In addition you cannot use a line continuation character
in #include or #define statements.

Comments in the DMS file

You can include comments in your DMS file to make it easier to read. You can place comments
anywhere between sections. You can also include comments within sections provided that they
conform to the syntax for that section. For example, comments in the Event sections must
conform to the rules for mrScriptBasic comments and comments in the Metadata Section must
conform to the rules for mrScriptMetadata comments. Comments in all other sections must
start on the first character of a line.

Single-line comments start with the single quotation mark character ('). For example:
' Standard Logging section to activate logging.

You can also define block comments that span more than one line using the block comment start
characters ('!) and end characters (!'). For example:
'! Standard Logging section to activate logging
and define location of log file.!'

Warning. When you use a DMS file in WinDMSRun, any comments that appear between the
sections will be removed when you change between the Normal and Script tabs or when you save
and reopen a file. For more information, see the topic WinDMSRun on p. 300.

Using Include Files in the DMS file

You can include in your DMS file a block of code that is defined in another file. This is useful, for
example, when you want to use the same cleaning or weighting routines or set up standard filter
and banner variables in several projects. Instead of repeating the code in each DMS file, you can
simply define the code you want to reuse in one or more separate files and then use the Include
statement to include them in your main DMS files.

The syntax is:


#include "Filename"

Part Description
Filename The name and location of the file to include.
Typically this is a text file with a .dms filename
extension.

Remarks
„ There is no restriction on where you can put an Include statement in your DMS file. However,
the Include statement is replaced by the code in the Include file when you run the DMS file,
so you must make sure that you place the Include statement in an appropriate position. For
example, if the Include file contains an entire section, place the Include statement before,
after, or between the other sections in the file. Similarly, if the Include file contains only part
of a section, make sure that you place the Include statement in an appropriate place in the
section to which the code applies.
„ You can replace text in an Include file with your own text by inserting a #define statement in
your DMS file before the #include statement. In this way, you can reuse the same Include files
in projects that have different variable names. The main example below shows you how to do
this. For more information on using the #define statement, see Using Text Substitution in
the DMS file.
„ You can also have Include statements in the Include files themselves. However, be careful not
to create a circular construction, because there is no warning when this happens.
„ You must not use line-continuation characters in an #include statement.
„ mrScriptBasic and mrScriptMetadata error messages give the approximate line number where
the error occurred. Using an Include file may result in misleading line numbers in these error
messages because the line numbers are calculated using the expanded file. However, you can
use the DMS Runner /a: option to save the expanded DMS file.
„ You are not restricted to using files with a .dms filename extension as Include files. For
example, you could use an mrScriptBasic (.mrs) file as an Include file. (To ensure that your
DMS file can be debugged, only include .dms or .mrs files.) The MSOutlookSendReport.mrs
file is included as an example Include file to demonstrate this. For more information, see the
topic Sample DMS Files That Integrate with Microsoft Office on p. 475.
„ When you specify the path to an include file, it must always be specified relative to the folder
in which the file that contains the #include statement is located.
„ Your DMS file and all include files must be saved using the same text encoding, either all
ANSI or all Unicode.

Example

Here is a DMS file that contains two Include statements. Two #define statements have been
inserted before the second #include statement to replace text in that Include file:
InputDataSource(myInputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
#Include ".\Include\Include1.dms"
End InputDataSource

OutputDataSource(myOutputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\IncludeExample.ddf"
MetaDataOutputName = "[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\IncludeExample.mdd"
End OutputDataSource
#define srvar museums
#define textvar address
#Include ".\Include\Include2.dms"

Here are the contents of include1.dms:

'==========================================================
'Licensed Materials - Property of IBM
'
'IBM SPSS Products: Data Collection
'
'(C) Copyright IBM Corp. 2001, 2011
'
'US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP
'Schedule Contract with IBM Corp.
'==========================================================

SelectQuery = "SELECT Respondent.Serial, _
age, address, gender, museums, dinosaurs, _
rating[{dinosaurs}].column, _
rating_ent[{dinosaurs}].column, _
DataCleaning.Note, DataCleaning.Status _
FROM VDATA WHERE Respondent.Serial < 101"

Here are the contents of include2.dms:

Event(OnNextCase, Clean the data)


If srvar.Response.AnswerCount() > 2 Then
DataCleaning.Note = DataCleaning.Note + srvar.QuestionName + " needs checking."
DataCleaning.Status = {NeedsReview}
End If
Dim TextLength
TextLength = Len(textvar.Trim())
If TextLength > 90 Then
textvar = Left(textvar, 90)
End If
End Event

Here are the contents of the expanded file created using the /a: option of DMS Runner. Notice how
the “srvar” and “textvar” text in include2.dms has been replaced by “museums” and “address”
respectively:

InputDatasource(myInputDataSource)
ConnectionString = _
"Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
SelectQuery = "SELECT Respondent.Serial, age, address, gender, museums, dinosaurs,
rating[{dinosaurs}].column, rating_ent[{dinosaurs}].column, DataCleaning.Note,
DataCleaning.Status FROM VDATA WHERE Respondent.Serial < 101"
End InputDatasource

OutputDatasource(myOutputDataSource)
ConnectionString = _
"Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\IncludeExample.ddf"
MetaDataOutputName = "[INSTALL_FOLDER]\IBM\SPSS\DataCollection\6\DDL\Output\IncludeExample.mdd"
End OutputDatasource

Event(OnNextCase, "Clean the data")


If museums.Response.AnswerCount() > 2 Then
DataCleaning.Note = DataCleaning.Note + museums.QuestionName + " needs checking."
DataCleaning.Status = {NeedsReview}
End If
Dim TextLength
TextLength = Len(address.Trim())
If TextLength > 90 Then
address = Left(address, 90)
End If
End Event

Note: The three files in this example are provided as sample DMS files (called
IncludeExample.dms, Include1.dms, and Include2.dms) that are installed with the IBM® SPSS®
Data Collection Developer Library. For more information, see the topic Sample DMS Files
on p. 467.

Using Text Substitution in the DMS file

You can use text substitution to replace text in a DMS file with your own text when you run
the file. This is useful, for example, when you want to reuse DMS files in projects that have
different data sources. Instead of manually changing the data source section, you can define a text
substitution statement that will automatically insert the correct values when the DMS file is run.

The syntax is:

#define Search Replace

Part Description
Search Text that is to be replaced by Replace.
Replace The text to replace Search.

Remarks
„ You can include more than one #define statement in a DMS file. You must insert the
statement before the appearance of the text that you want to replace and you must not use a
line-continuation character in the #define statement. Note that the search is case sensitive.
„ You can replace text in an Include file by inserting a #define statement in your DMS file
before the #include statement. For an example of replacing text in an Include file in this
way, see Using Include Files in the DMS file.
„ To restrict the range of a text substitution, insert an #undef statement in your DMS file at the
point where you want the text substitution to stop. The following example shows how to
restrict the range of a text substitution to the contents of an Include file:

#define textvar address


#include ".\Include\Include2.dms"
#undef textvar

„ When debugging DMS files that use text substitution, you may find it useful to use the DMS
Runner /a: option to save the DMS file after the text substitution has been implemented.

Example 1

This example is based on the simple example described in Simple Example of a DMS File.
However, a #define statement has been inserted at the beginning to change the names of the
output files.

#define simple "TextSubstitution"

InputDataSource(myInputDataSource, "My input data source")


ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
End InputDataSource

OutputDataSource(myOutputDataSource, "My output data source")


ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrSavDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\" + simple + ".sav"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\" + simple + ".mdd"
End OutputDataSource

Example 2

This example uses the MSExcelToDDF.dms sample file included with the DDL.

#define ADOInfoFile "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\MSExcelToDDF.adoinfo"

' The output Data Collection Data and MDD files...

#define OutputDDFFile "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\MSExcelToDDF.ddf"


#define OutputMDDFile "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\MSExcelToDDF.mdd"

InputDataSource(Input)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrADODsc; _
MR Init MDSC=mrADODsc; _
Initial Catalog=" + ADOInfoFile
SelectQuery = "SELECT * FROM vdata"
End InputDatasource

OutputDataSource(Output)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=" + OutputDDFFile
MetaDataOutputName = OutputMDDFile
End OutputDataSource

Sections in the DMS file

The DMS file consists of a number of sections, which can appear in any order and are defined
as follows:

section_name (parameter, parameter, ... )


...
END section_name

Part Description
section_name This must be a recognized section name as shown
below.
parameter Parameter of type Text. The parameters vary
depending on the section type. For most section
types there are two parameters—the first defines
a name and the second is an optional parameter
that enables you to define a description. However,
the Metadata section takes four parameters that
define the default language, user context, label
type, and the data source to which the section
applies, respectively. Note that for the Event
section the first parameter defines when the section
is to be executed. For more information, see the
documentation on each section.

The recognized section names are:


„ Job. Optional section that defines the job name and job description. The Job section holds
the job’s global parameters; currently, TempDirectory is the only recognized parameter.
„ InputDataSource. Required section that defines an input data source for the data
transformation. There must be at least one InputDataSource section in your DMS file.
„ OutputDataSource. Required section that defines an output data source for the data
transformation. There must be at least one OutputDataSource section in your DMS file.
„ GlobalSQLVariables. This is an optional section that defines global SQL variables, which
enable you to exchange information between data sources.
„ Metadata. Optional section that is used to define new questions and variables in the metadata
using mrScriptMetadata.
„ Logging. Optional section that defines the parameters for the Logging component.
„ Event. Optional sections that define procedural code, written in mrScriptBasic, for cleaning
the case data, setting up weighting, etc.
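For example, a minimal DMS file that combines most of these section types might look like the
following sketch. The file names, question name, and labels here are illustrative only and are not
taken from a sample; the GlobalSQLVariables and Logging sections are omitted for brevity.

Job(ExampleJob, "Sketch of a DMS file structure")
End Job

InputDataSource(Input)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=MyData.ddf; _
Initial Catalog=MyData.mdd"
End InputDataSource

Metadata(ENU, Question, Label, Input)
MyDerivedVar "Example derived variable" long;
End Metadata

Event(OnNextCase, "Process each case")
MyDerivedVar = 1
End Event

OutputDataSource(Output)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=MyOutput.ddf"
MetaDataOutputName = "MyOutput.mdd"
End OutputDataSource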

Job Section

The Job section defines the job name and description and holds the job’s global parameters.
Currently, TempDirectory is the only recognized parameter. This section is optional. The
syntax is:
Job(name [, "description"])
[TempDirectory = "<Temp Directory>"]
End Job

name and description should be of type Text.

TempDirectory is used to set a temporary directory in which to store temporary files generated
during Job transformation. Ensure this directory is assigned the appropriate read and write
access permissions.

Examples

The following example defines a job with a name of MUS03 and a description of Copy the
Museum database to IBM® SPSS® Statistics SAV:
Job(MUS03, "Copy the Museum database to IBM SPSS Statistics SAV")
TempDirectory = "C:\Temp"
End Job

InputDataSource Section

The InputDataSource section defines the input data source for the data transformation. The
InputDataSource section is required, which means that there must always be at least one
InputDataSource section in a DMS file. If you add more than one InputDataSource section to a
DMS file, the case data for the input data sources will be combined into a single data source. For
more information, see the topic Using a DMS Script to Merge Data on p. 378.

The syntax for the InputDataSource section is:

InputDataSource(name [, "description"])


ConnectionString = "<connection_string>"
[SelectQuery = "<select_query>"]
[UpdateQuery = "<update_query>"]
[UseInputAsOutput = True | False]
[JoinKey = "<variable_name>"]
[JoinType = "Full" | "Inner" | "Left"]
[JoinKeySorted = True | False]
End InputDataSource

name and description define a name and description for the InputDataSource and should be of
type Text. You use the name you define here in the Metadata section to identify the data source to
which the Metadata section applies.

ConnectionString

This specifies the OLE DB connection properties, which define the OLE DB provider to be used
and all of the details about the physical data source, such as its name and location. Note that each
OLE DB provider has different requirements for the connection string. For specific information
about non-Data Model OLE DB providers, see the documentation that comes with the OLE DB
provider. For general information, see Reading Data Using Other OLE DB Providers.

If you want to use the IBM® SPSS® Data Collection Data Model, specify the IBM SPSS Data
Collection OLE DB Provider by setting the Provider connection property to mrOleDB.Provider.n
(where n is the version number). The IBM SPSS Data Collection OLE DB Provider has a number
of custom connection properties that define the CDSC that is to be used to read the case data (this
must be a read-enabled CDSC), the Metadata Document (.mdd) file or other metadata source and
MDSC to be used, etc. Refer to the Connection Properties topic in the IBM® SPSS® Data
Collection Developer Library for more information.

You can specify file locations using a relative path (relative to the folder in which the DMS file
is located when you run it). Generally, you do not need to specify a connection property for
which the default value is being used.

When you are using a non-Data Model OLE DB Provider to write the case data, provided you
have specified an input metadata source, you can set the MR Init Category Names connection
property to 1 so that the category names are exported instead of the numeric values. However, you
will get an error if you use this setting when writing data using the IBM SPSS Data Collection
OLE DB Provider. This means that you cannot use this option when you have more than one
OutputDataSource section in your DMS file and one or more of them uses the IBM SPSS Data
Collection OLE DB Provider to write the data.

Each DSC that you can use to read data behaves differently and has different requirements. For
specific information about reading data using the read-enabled DSCs that come with the Data
Model, see:
„ Transferring Data From a Relational MR Database (RDB)
„ Transferring Data From a IBM SPSS Data Collection Data File
„ Transferring Data From IBM SPSS Statistics
„ Transferring Data From IBM SPSS Quantum
„ Transferring Data From IBM SPSS Quanvert
„ Transferring Data From QDI/DRS
„ Transferring Data From Log Files
„ Transferring Data From Triple-S
„ Transferring Data From Microsoft Office
„ Transferring Data From XML

When using the Data Model, if the metadata associated with the input data source is in a Metadata
Document (.mdd) file, specify the name and location of the .mdd file in the Initial Catalog
connection property. Note that the .mdd file itself must be writable. For example:

InputDataSource(myInputDataSource, "My input data source")


ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
End InputDataSource

If the .mdd file has more than one version, the most recent version will be used by default.
However, you can select an earlier version by using the MR Init MDM Version connection
property. For more information, see the topic Selecting a Specific Version on p. 361. You can
also select multiple versions. This is useful when you want to export data collected using more
than one version of the questionnaire. For more information, see the topic Selecting Multiple
Versions on p. 362.
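For example, the connection string below sketches how the MR Init MDM Version connection
property might be added to select a specific version. The file names and the version name (2) are
illustrative only; see the topics referenced above for the full version-selection syntax.

InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=MyData.ddf; _
Initial Catalog=MyData.mdd; _
MR Init MDM Version=2"
End InputDataSource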

If the metadata is not in the form of a Metadata Document (.mdd) file, you specify the metadata
source and the MDSC to be used in the Initial Catalog and MR Init MDSC connection properties.
Note that the MDSC must be read-enabled. This example specifies the IBM® SPSS® Quanvert™
Museum sample database and the Quanvert DSC:

InputDataSource("My input data source")


ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrQvDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Quanvert\Museum\qvinfo; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Quanvert\Museum\qvinfo; _
MR Init MDSC=mrQvDsc"
End InputDataSource

For some types of data you do not need to specify a metadata source, although generally
you will want to do so, because without an input metadata source, you cannot use the
OnAfterMetaDataTransformation, OnJobStart, OnNextCase, OnBadCase, and OnJobEnd event
sections. Moreover, if you are transferring data to a target data source that already exists, the
transfer will succeed only if the structure of the output data exactly matches the existing target
data. It is possible to transfer data without specifying a metadata source only when using Relational
MR Database CDSC, IBM® SPSS® Data Collection Data File CDSC, SPSS Statistics SAV DSC,
or XML CDSC to write the data (although transferring to a .sav file without a metadata source
has a number of limitations; for more information, see the topic Transferring Data to IBM SPSS
Statistics on p. 342). See below for an example.

If you are using the Data Model and want to operate on the metadata only, specify the case data in
the normal way in the InputDataSource section. Then in the OutputDataSource section, set the
Data Source connection property to CDSC. This specifies the Null DSC, which means that no case
data will be written. Note that you should not specify the Null DSC in the InputDataSource section.

Tip: An easy way to create the connection string is to use the IBM® SPSS® Data Collection Base
Professional Connection String Builder. For more information, see the topic 3. Transferring
Different Types of Data on p. 212.

SelectQuery

An SQL query that defines a case data filter, a metadata filter, or both a case data and a metadata
filter. For more information, see the topic Filtering Data in a DMS File on p. 243.

If you do not specify a query, it will default to the following query:

SELECT * FROM vdata

If you are using the IBM SPSS Data Collection OLE DB Provider (which is part of the Data
Model), this query means that all of the data that can be flattened will be included. However, this
query will give you an error if you are using another OLE DB provider. So you must always
specify a query when you are using a non-Data Model OLE DB provider.

Case data will be transferred for the specified variables only. However, if you select a helper
variable or a variable that is part of a metadata block (Class object), the parent object will be
included in the output metadata. If you specify in the select query one or more variable instances
that relate to a variable inside a grid or a loop, all of the related variable instances will be included
in the output, but case data will be transferred only for the variable instances specified in the
select query.

When you are using the IBM SPSS Data Collection OLE DB Provider and do not specify a query
or you use a SELECT * FROM vdata query, any variables defined in the Metadata section are
included in the transformation automatically. However, if you specify the variables individually
in the query, you also need to specify in the select query any variables that are defined in the
Metadata section. Otherwise the variables defined in the Metadata section will be excluded from
the output metadata and the transformation.

The query can be any SQL query that is supported by the OLE DB provider you are using. For
information about the SQL queries that are supported by the IBM SPSS Data Collection OLE DB
Provider, see the Basic SQL Queries topic in the Data Collection Developer Library.

UpdateQuery

An SQL statement to be executed on the data source before any processing takes place. For more
information, see the topic 4. Using an Update Query on p. 216.

Any Data Manipulation SQL syntax that is supported by the OLE DB provider can be used. When
you are using the IBM SPSS Data Collection OLE DB Provider you can use an INSERT, UPDATE,
or DELETE statement, provided the syntax is also supported by the CDSC that is being used to
read the case data. See the SQL Syntax and Supported Features of the CDSCs topics in the Data
Collection Developer Library for more information.

Warning: Use this feature with care because the update query will permanently alter the input
data source. It is recommended that you take a backup of the input data source before running a
DMS file that includes this feature.

UseInputAsOutput, JoinKey, JoinType, and JoinKeySorted

These parameters are used only when running a case data merge. For more information, see the
topic Using a DMS Script to Merge Data on p. 378.
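As a sketch only, an InputDataSource section taking part in a case data merge might set these
parameters as shown below. The file names are illustrative; Respondent.Serial is the typical merge
key for Data Collection case data, but see the merge topic referenced above for the full details.

InputDataSource(Recontact, "Recontact wave input")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=Recontact.ddf; _
Initial Catalog=Recontact.mdd"
JoinKey = "Respondent.Serial"
JoinType = "Full"
JoinKeySorted = True
End InputDataSource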

Examples

1. Using a VDATA select query to filter case data and metadata

The following example specifies the Museum sample Data Collection Data File, which consists of
case data in the museum.ddf file and a Metadata Document (.mdd) file called museum.mdd.

The query specifies a metadata filter consisting of three named variables (age, gender, museums)
and a case data filter that selects female respondents only.
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
SelectQuery = "SELECT age, gender, museums FROM vdata _
WHERE gender = {female}"
End InputDataSource

2. Using an HDATA select query to filter metadata in the same level

The following query specifies a metadata filter consisting of three named variables (age, gender,
name) at the person level.
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\Household.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\Household.mdd"
SelectQuery = "SELECT age, gender, name FROM hdata.person"
End InputDataSource

3. Using an HDATA select query to filter metadata in different levels

The following query specifies a metadata filter consisting of three named variables (address, age,
country) at different levels via the Up-lev and Down-lev operators (based on the person level).
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\Household.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\Household.mdd"
SelectQuery = "SELECT ^.address, age, trip.(country) FROM hdata.person"
End InputDataSource

Note: ^.address is automatically renamed to <FileName>_address (in this example, it is renamed
to Household_address).

4. Using a select query to rename metadata

The following query specifies a metadata filter consisting of one renamed variable (age) at the
person level.
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\Household.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\Household.mdd"
SelectQuery = "SELECT age AS 'person_age' FROM hdata.person"
End InputDataSource

5. Using an update query to delete test data

The following example includes an update query that is used to delete test data from the input
data source. Note that you should always be extremely careful when using an update query in the
InputDataSource section because it alters the input data source permanently.
InputDataSource(myInputDataSource, "My input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
UpdateQuery = "DELETE FROM vdata _
WHERE DataCollection.Status.ContainsAny({Test})"
End InputDataSource

6. Operating on metadata only



The following example shows how to operate on the metadata only. In the InputDataSource
section you specify the case data in the normal way and in the OutputDataSource section, the
Data Source connection property is set to CDSC, which is the Null DSC and which enables you
to operate on the metadata without case data.

'==========================================================
'Licensed Materials - Property of IBM
'
'IBM SPSS Products: Data Collection
'
'(C) Copyright IBM Corp. 2001, 2011
'
'US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP
'Schedule Contract with IBM Corp.
'==========================================================

InputDataSource(Input)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
End InputDataSource

OutputDataSource(Output)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=CDSC"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\MetadataOnly.mdd"
End OutputDataSource

Metadata (ENU, Question, Label, Input)


Entering "Respondents interviewed entering the museum" boolean
expression ("interview = {entering}");
Leaving "Respondents interviewed leaving the museum" boolean
expression ("interview = {leaving}");
End Metadata

Note: This example is provided as a sample DMS file (called MetadataOnly.dms) that is installed
with the Data Collection Developer Library. For more information, see the topic Sample DMS
Files on p. 467.

7. Operating on case data only

The following example shows how to operate on the case data without specifying a metadata
source. This is possible only when using Relational MR Database CDSC and SPSS Statistics
SAV DSC to write the case data. However, note that transferring to a .sav file without using a
metadata source has a number of limitations. For more information, see the topic Transferring
Data to IBM SPSS Statistics on p. 342.

This example transfers case data from a .sav file to another .sav file without using a metadata
source:

InputDatasource(myInputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrSavDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Sav\Employee data.sav"
SelectQuery = "SELECT id, bdate, educ, salary, salbegin, jobtime, prevexp FROM VDATA"
End InputDatasource

OutputDataSource(Output)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrSavDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\CaseDataOnly.sav"
End OutputDataSource

Note: This example is provided as a sample DMS file (called CaseDataOnly.dms) that is installed
with the Data Collection Developer Library. For more information, see the topic Sample DMS
Files on p. 467.

8. Using Other OLE DB Providers

The following example uses the Microsoft OLE DB Provider for ODBC Drivers to read data in
an Access database:

InputDataSource(Input)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrADODsc; _
Location=""Provider=MSDASQL.1; _
Persist Security Info=False; _
Extended Properties='DSN=MS Access Database; _
DBQ=C:\Inetpub\iissamples\sdk\asp\database\Authors.mdb; _
DriverId=281; _
FIL=MS Access; _
MaxBufferSize=2048; _
PageTimeout=5; _
UID=admin'""; _
MR Init Project=Authors"
End InputDatasource

The example uses the Data Collection ADO DSC in the Data Model to read the Access database
as an ADO data source. Although not demonstrated here, it is also possible to specify an input
metadata source when using the ADO DSC, which allows you to create an output metadata
document (.mdd) file. For more information, see the topic Transferring Data From Microsoft
Office on p. 328.

Note: To run this sample, you must have the Microsoft OLE DB Provider for ODBC Drivers,
the ODBC data source called MS Access Database, and the Authors sample Access database,
which all come with Microsoft Office. In the Data Collection Developer Library, there are similar
example scripts called MSAccessToQuantum.dms and MSAccessToSav.dms.

OutputDataSource Section

The OutputDataSource section defines an output data source for the data transformation. There
must be at least one OutputDataSource section in a DMS file and you can optionally specify more
than one. If you specify more than one OutputDataSource section in a DMS file, the case data will
be written to each output data source specified. This is useful, if, for example, you want to export
the case data to two or more different formats or locations at the same time. The syntax is:
OutputDataSource(name [, "description"])
[ConnectionString = "<connection_string>" | UseInputAsOutput = True|False]
[MetaDataOutputName = "<metadata_location>"]
[UpdateQuery = "<update_query>"]
[TableOutputName = "<table_output_name>"]
[VariableOrder = "SELECTORDER"]
End OutputDataSource

name and description should be of type Text.
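For example, to write the same case data to both an SPSS Statistics .sav file and an IBM SPSS
Data Collection Data File in a single run, you could include two OutputDataSource sections, as
sketched below. The output file names and locations are illustrative only.

OutputDataSource(SavOutput, "SPSS Statistics output")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrSavDsc; _
Location=C:\Output\MyExport.sav"
MetaDataOutputName = "C:\Output\MyExportSav.mdd"
End OutputDataSource

OutputDataSource(DdfOutput, "Data Collection Data File output")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Output\MyExport.ddf"
MetaDataOutputName = "C:\Output\MyExportDdf.mdd"
End OutputDataSource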

ConnectionString

This specifies the OLE DB connection properties, which define the OLE DB provider to be used
and all of the details about the physical data source, such as its name and location. Note that each
OLE DB provider has different requirements for the connection string. For specific information
about non-Data Model OLE DB providers, see the documentation that comes with the provider.
For general information, see Writing Data Using Other OLE DB Providers.

If you want to use the IBM® SPSS® Data Collection Data Model, specify the IBM SPSS Data
Collection OLE DB Provider by setting the Provider connection property to mrOleDB.Provider.n
(where n is the version number). The IBM SPSS Data Collection OLE DB Provider has a number
of custom connection properties that define the CDSC that is to be used to write the case data (this
must be a write-enabled CDSC), the data validation settings, etc. Note that the Initial Catalog
connection property should not be given a value—if specified, it should be set to an empty string
(“”). Refer to the Connection Properties topic in the IBM® SPSS® Data Collection Developer
Library for more information.

You can specify file locations using a relative path (relative to the folder in which the DMS file
is located when you run it). Generally, you do not need to specify a connection property for
which the default value is being used.

If you specify a physical data source that does not already exist, generally it will be created when
the DMS file is executed, provided the specified folder exists. However, when exporting to a
relational MR database, you must create the actual database before you run the DMS file. For
more information, see the topic Transferring Data to a Relational MR Database (RDB) on p. 337.

What happens when you specify a physical data source that does exist depends on a number
of factors.
„ If you are doing a case-data only transformation or are not using the Data Model to read
the data, the transfer will succeed only if the structure of the output data exactly matches
the existing data.

„ If you are using the Data Model to read the data and have specified an input metadata source,
what happens depends on whether the output of the data transformation matches the structure
of the data in the output data source, whether the cases already exist in the target data source,
and on the format of the data (and hence the CDSC being used).
„ For some data types (such as relational MR database) the transfer will fail if the output data
contains any cases that already exist.
„ For information about what happens when you are using a non-Data Model OLE DB provider
to write the data, see TableOutputName below.

If the metadata to be written will not be in the form of a Metadata Document (.mdd) file, you can
specify the metadata file to be written and the MDSC to be used in the Initial Catalog and MR Init
MDSC connection properties. Note that the MDSC must be write-enabled. At present, the only
DSC that can be used in this way to write a metadata file other than a .mdd file is the Triple-S DSC.

Each DSC that you can use to write data behaves differently and has different requirements. For
specific information about writing data using some of the write-enabled DSCs that come with
the Data Model, see:
„ Transferring Data to a Relational MR Database (RDB)
„ Transferring Data to a IBM SPSS Data Collection Data File
„ Transferring Data to IBM SPSS Statistics
„ Transferring Data to IBM SPSS Quantum
„ Transferring Data to Triple-S
„ Transferring Data to SAS
„ Transferring Data to a Delimited Text File
„ Transferring Data to XML

If you are using the Data Model and want to operate on the metadata only, specify the Data Source
connection property as CDSC. This is a special CDSC, called the Null DSC, which enables you to
connect to the IBM SPSS Data Collection OLE DB Provider without any case data.

Tip: An easy way to create the connection string is to use the IBM® SPSS® Data Collection Base
Professional Connection String Builder. For more information, see the topic 3. Transferring
Different Types of Data on p. 212.

MetaDataOutputName

This defines the name and location of the Metadata Document (.mdd) file to which the exported
metadata is to be written. If you do not specify this parameter, the output metadata is not saved.
The output metadata determines the structure of the output case data. For example, if the output
case data source does not exist, it will be created based on the output metadata. If the output case
data source does exist and the data format is one that can be updated, the output case data is
synchronized with the output metadata.

The output metadata is created in the following way:


„ If the DMS file contains a Metadata section that relates to the input data source, the metadata
defined in the Metadata section is merged with the input data source metadata. The input data
source metadata is used as the master version for the merge.
„ The merged metadata is then filtered according to the metadata filter defined in the select query
in the InputDataSource section. If the select query includes a helper variable or a variable that
is part of a metadata block (Class object), the parent object will also be included in the output
metadata. If the select query includes one or more variable instances that relate to a variable
inside a grid or a loop, all of the related variable instances will be included in the output.
„ The output metadata will not contain any versions, even if the input metadata source contains
versions. By default, the output metadata will be based on the most recent version of the
input metadata source. However, if the input metadata source has multiple versions, and you
specify one or more specific versions to use for the transfer (using the MR Init MDM Version
connection property in the InputDataSource section), the output metadata will be based on the
specified version or combination of versions.
„ The resulting metadata is written or merged to the specified location.

When you are transferring data to a .sav file, it is recommended that you save the output metadata
file, whenever possible. If you subsequently want to read the .sav file using the Data Model, it is
usually preferable to do so using the .mdd file, because this gives you access to the original Data
Model variable names. In addition, if you subsequently want to export additional records to the
.sav file, it will be possible only if you run the export using this file as the input metadata source.
For more information, see the topic Transferring Data to IBM SPSS Statistics on p. 342.

Note: Starting with version 5.6, the input metadata will be merged to any existing output metadata
(with the existing output acting as the master). A merge is used, instead of simply overwriting the
existing metadata, to ensure that the category map for the existing output is used. For example, if
data is collected using two separate clusters, the category maps may be different on each cluster.
Similar to a vertical merge, when appending data to a combined data set, the category map for
the output dataset needs to be used. Prior to version 5.6, the output metadata was overwritten,
invalidating any existing data.

Case data-only transformation. No output metadata is created if you do not specify a metadata
source in the InputDataSource section (for example, because you are using a non-Data Model
OLE DB provider to read the data or you are doing a case data-only transformation), and the
output case data structure is based on the attributes and names in the input case data. If you specify
the MetaDataOutputName parameter in this situation, it will be silently ignored.

UpdateQuery

An SQL statement to be executed on the data source after the processing of the procedural code
defined in all of the Event sections with the exception of the OnAfterJobEnd Event section. For
more information, see the topic 4. Using an Update Query on p. 216.

Any Data Manipulation SQL syntax that is supported by the OLE DB provider can be used. When
you are using the IBM SPSS Data Collection OLE DB Provider, you can use an INSERT, UPDATE,
or DELETE statement, provided the syntax is also supported by the CDSC that is being used to
write the case data. Refer to the Supported SQL Syntax and Supported Features of the CDSCs
topics in the Data Collection Developer Library for more information.

UseInputAsOutput

Set to True if your DMS file has a single input data source and you want the data to be written
to that data source. This means that the input data source will be overwritten with the results of
the data transformation. The default is False. If you are using the Data Model to read the input
data, the CDSC must be write-enabled and support changing the data in existing records (Can
Update is True for the CDSC).

If you want the data to be written to the input data source when you are running a case
data merge, and therefore your DMS file contains multiple input data sources, you must
specify the UseInputAsOutput option in the relevant InputDataSource Section and not in the
OutputDataSource section.

When using the UseInputAsOutput option, you must set the MR Init MDM Access connection
property to 1 in the InputDataSource section.

When you use the UseInputAsOutput option, the version history in the input metadata is
preserved only if you use the default of the most recent metadata version. A new unlocked version
will be created if the most recent version is locked. However, if you specify one or more versions
using the MR Init MDM Version connection property in the InputDataSource section, the input
metadata will be overwritten with the specified version or combination of versions, and the version
history will be deleted.

Warning: The UseInputAsOutput option should be used with caution because it changes the input
metadata and case data irreversibly. It is not suitable for use with data that is in the process of
being collected by a live IBM® SPSS® Data Collection Interviewer Server project. It is
recommended that you take a backup of your data before using this option.

TableOutputName

Used only when writing the case data using a non-Data Model OLE DB provider, this specifies the
name of the table to which the data is to be written.

The table will be created if it does not exist already. You can append data to an existing table
only if the provider you are using supports this operation and the structure of the data you are
transferring is identical to (or a subset of) the data in the existing table. For example, if the
existing table contains the variables age, gender, and income, exports that contain the variables
age and gender, or age, gender, and income should succeed, provided the variables are of the
same type and the provider you are using supports this type of operation. However, an export of
variables age, gender, income, and occupation would always fail.

VariableOrder

Variables are output in the order in which they appear in the select query statement in the
InputDataSource section. In Dimensions versions 3.0 through 5.5, variables were output
in the order in which they appeared in the input metadata source, unless the output data set
specified in OutputDataSource already existed. To ensure that variables are output in the same
order as the select query statement in the InputDataSource section, you can delete the output data
set in the OnBeforeJobStart Event section, or use the MR Init Overwrite connection property.

Starting with version 5.6, only the select query order (SELECTORDER) is supported.
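As a sketch (the file locations are hypothetical), setting the MR Init Overwrite connection property in the OutputDataSource section replaces any existing output data set, so the variables are written in the order of the select query:

```
' Hypothetical sketch: MR Init Overwrite=1 replaces any existing output
' data set, so variables are output in the select query order rather than
' the order of a pre-existing output data set.
OutputDataSource(myOutput)
    ConnectionString = "Provider=mrOleDB.Provider.2; _
        Data Source=mrDataFileDsc; _
        MR Init Overwrite=1; _
        Location=C:\Output\Ordered.ddf"
    MetaDataOutputName = "C:\Output\Ordered.mdd"
End OutputDataSource
```

Deleting the output files in the OnBeforeJobStart Event section achieves the same effect.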

Examples

1. Transferring data to IBM® SPSS® Statistics

The following example specifies that the data is to be written to a .sav file.

OutputDataSource(myOutputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrSavDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Simple.sav"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Simple.mdd"
End OutputDataSource

2. Using an update query to set the DataCollection.FinishTime

The following example specifies that the case data is to be written to an IBM SPSS Data
Collection Data File (.ddf) and shows an update query, which uses the Now() function to set the
DataCollection.FinishTime variable to the current date and time.
OutputDataSource(myOutputDataSource, "My output data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\UpdateQuery.ddf"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\UpdateQuery.mdd"
UpdateQuery = "UPDATE vdata _
SET DataCollection.FinishTime = Now()"
End OutputDataSource

3. Using the UseInputAsOutput option

The following example shows using the UseInputAsOutput option. Note that the MR Init MDM
Access connection property has been set to 1 in the InputDataSource section.

InputDataSource(Input, "The input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\museum-copy.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\museum-copy.mdd; _
MR Init MDM Access=1"
SelectQuery = "SELECT DataCollection.FinishTime FROM vdata"
End InputDataSource

OutputDataSource(Output, "The output data source")
UseInputAsOutput = "True"
End OutputDataSource

4. Using the Null DSC and operating on metadata and case data only

For an example of using the Null DSC and operating on metadata only, see the third example
in the InputDataSource section. For an example of operating on case data only, see the fourth
example in the same topic.

5. Writing the data to Excel

This example shows exporting a subset of the Museum sample data set to Excel using a non-Data
Model OLE DB provider. Notice that the MR Init Category Names connection property has been
set to 1 in the InputDataSource section so that the category names are transferred to Excel rather
than the category values. This generally makes the data easier to interpret. For more information,
see the topic Writing Data Using Other OLE DB Providers on p. 352.

InputDataSource(Input, "The input data source")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd; _
MR Init Category Names=1"
SelectQuery = "SELECT interest, age, gender, expect, when_decid _
FROM VDATA WHERE gender = {female}"
End InputDataSource

OutputDataSource(MSExcel)
ConnectionString = "Provider=MSDASQL.1; _
Persist Security Info=False; _
Data Source=Excel Files; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\MSExcelTransferToFromDDF.xls"
TableOutputName = "Sheet1"
End OutputDataSource

Note: This example is provided as a sample DMS file, called MSExcelTransferToFromDDF.dms,
that is installed with the Data Collection Developer Library. For more information, see the topic
Sample DMS Files That Integrate with Microsoft Office on p. 475.

GlobalSQLVariables Section

The GlobalSQLVariables section is an optional section that you can use to define global SQL
variables, which provide a means of exchanging data between different data sources. The syntax is:

GlobalSQLVariables(name [, "description"])
ConnectionString = "<connection_string>"
SelectQuery = "<select_query>"
End GlobalSQLVariables

name and description define a name and description for the section and should be of type Text.

ConnectionString

A connection string that defines the OLE DB connection properties, which define the OLE DB
provider to be used to access the case data and all of the details about the physical data source,
such as its name and location.

If you are using the IBM® SPSS® Data Collection Data Model, specify the IBM SPSS Data
Collection OLE DB Provider by setting the Provider connection property to mrOleDB.Provider.n
(where n is the version number). The IBM SPSS Data Collection OLE DB Provider has a number
of custom connection properties that define the CDSC that is to be used to access the case data, the
Metadata Document (.mdd) file or other metadata source and MDSC to be used, etc. Refer to the
Connection Properties topic in the IBM® SPSS® Data Collection Developer Library for more
information. Note that the CDSC you are using must be read-enabled. This means that you cannot
use a GlobalSQLVariables section with an IBM® SPSS® Quantum™ data source.

You can specify file locations using a relative path (relative to the folder in which the DMS file
is located when you run it). Generally, you do not need to specify a connection property for
which the default value is being used.

Tip: An easy way to create the connection string is to use the IBM® SPSS® Data Collection Base
Professional Connection String Builder. For more information, see the topic 3. Transferring
Different Types of Data on p. 212.

SelectQuery

An SQL query that defines the global variable(s). You define a global variable in the query using
the AS clause and by prefixing the column name with the at sign (@). The query can specify a
simple column or an expression based on one or more columns, using the SQL syntax supported
by the OLE DB provider you are using. See the Basic SQL Queries topic in the Data Collection
Developer Library for information on the SQL queries supported by the Data Model.

Example

The following example shows using a global SQL variable in a DMS file that is being used
to clean batches of case data and write it to a different data source, which is being used to
store the clean data. The example shows only the GlobalSQLVariables, InputDataSource, and
OutputDataSource sections of the DMS file:
- GlobalSQLVariables section. This specifies a connection string for the output (clean) database
and a query that defines a global SQL variable called @LastTransferred, which stores the
maximum value of the DataCollection.FinishTime variable. This is a system variable that
stores the date and time that an interview is stopped or completed. This means that the global
variable stores the date and time of the most recent (“newest”) respondent record in the output
data source. Refer to the System Variables topic in the Data Collection Developer Library
for more information.
- InputDataSource section. This specifies a connection string for the input (live) data source and
a query that selects respondent records that have the Completed status and whose finish time
is after that of the newest record in the output database.
- OutputDataSource section. This specifies a connection string for the output (clean) database.

'==========================================================
'Licensed Materials - Property of IBM
'
'IBM SPSS Products: Data Collection
'
'(C) Copyright IBM Corp. 2001, 2011
'
'US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP
'Schedule Contract with IBM Corp.
'==========================================================

GlobalSQLVariables(myGlobals, "My globals section")
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\GlobalSQLVariable.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\GlobalSQLVariable.mdd"
SelectQuery = "SELECT MAX(DataCollection.FinishTime) As @LastTransferred FROM VDATA"
End GlobalSQLVariables

InputDataSource(myInputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
SelectQuery = "SELECT * FROM VDATA WHERE (DataCollection.FinishTime > '@LastTransferred') _
AND (DataCollection.Status = {Completed})"
End InputDataSource

OutputDataSource(myOutputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\GlobalSQLVariable.ddf"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\GlobalSQLVariable.mdd"
End OutputDataSource

Note: This example is provided as a sample DMS file (called GlobalSQLVariable.dms)
that is installed with the Data Collection Developer Library. You need to run the
GlobalSQLVariableSetUp.dms sample first to set up the output data source and transfer the first
records. For more information, see the topic Sample DMS Files on p. 467. Note that you can use
the RunGlobalSQLVariableExample.bat sample batch file to run the two files in sequence.

Metadata Section

The Metadata section is an optional section that can be used to define new variables in the
metadata that is used for the transformation. The code in the Metadata section must be valid
mrScriptMetadata.

When the DMS file is executed, the metadata defined in the Metadata section is merged with
the metadata in the specified data source. You can define only one Metadata section for a data
source and the metadata defined in the data source is always used as the master version in the
merge operation.

If you have not specified a metadata source in the InputDataSource section (for example, because
you are not using the IBM® SPSS® Data Collection Data Model OLE DB Provider to read the
data or are doing a case data-only transformation) and you define a Metadata section for that data
source, the Metadata section will be silently ignored.

The syntax is:


Metadata [(language [, context [, labeltype] [, datasource]])]
field_name "field description" <type_info>;
...
END Metadata

Parameter Description
language Defines the current language for the metadata. Must
be a recognized language code.
context Defines the current user context for the metadata.
User contexts define different usages for the
metadata, so that different texts and custom
properties can be used depending on how the
metadata is being used. For example, the Question
user context is typically used to define the default
texts to be used when interviewing and the Analysis
user context is typically used to define shorter texts
for use when analyzing the response data.
labeltype Defines the current label type. Label types enable
different types of labels to be created for different
types of information. For example, the default label
type of Label is used for question and category texts
and variable descriptions, and the Instruction label
type is used for interviewer instructions.
datasource Identifies the input data source to which the
Metadata section relates. You specify the
data source using the name defined for the
InputDataSource section.

For details of recognized language codes, user contexts, and label types, see the IBM SPSS Data Collection Developer Library.

You define metadata in mrScriptMetadata by adding field entries to the Metadata section. Each
field entry corresponds to a question, loop, block, information item, or derived variable. See the
mrScriptMetadata documentation for information about defining fields.

A quick way of creating new variables based on other variables is to use one or more expressions.
Variables that are defined in the metadata using expressions are known as “dynamically derived
variables” because the Data Model calculates the case data for these variables “on the fly”.
However, these variables are not understood by products that do not use the Data Model to read
their data, such as IBM® SPSS® Statistics, IBM® SPSS® Quantum™, Excel, and Access.
Therefore, dynamically derived variables are automatically converted into “standard” variables.
This means that the output data source contains case data for these variables (so that you can
use them in SPSS Statistics, Quantum, Excel, etc.) and the expressions are removed from the
variables in the output metadata.

However, there is one exception: when you are using the UseInputAsOutput option, dynamically
derived variables are not converted to standard variables. This has the advantage that the
dynamically derived variables will automatically include any additional case data records that are
added to the data source, and it is not a disadvantage, because you cannot use the
UseInputAsOutput option to set up metadata in SPSS Statistics or Quantum data anyway.

Examples

The following metadata section has a default language of English (United States), a default user
context of Question, a default label type of Label, and applies to the data source that was defined
in the InputDataSource section called Input. It defines two derived Boolean variables for use as
filter variables when analyzing the data.
Metadata (ENU, Question, Label, Input)
Entering "Respondents interviewed entering the museum" boolean
expression ("interview = {entering}");
Leaving "Respondents interviewed leaving the museum" boolean
expression ("interview = {leaving}");
End Metadata

All texts and custom properties will be created in the default language, user context, and label
type, unless you specify otherwise when you define them.

The following metadata section has a default language of English (United States), a default user
context of Question, a default label type of Label, and applies to the data source that was defined
in the InputDataSource section called Input. It defines a derived categorical variable named
GenderAge within a loop called Person, for use as a filter variable when analyzing the data.
Metadata (ENU, Question, Label, Input)
Person "Person" loop [1 .. 6] fields -
(
GenderAge "Gender/Age Classification" categorical[1]
{
Boys
expression("Gender = {Male} And Age < 18"),
YoungMen "Young men"
expression("Gender = {Male} And Age >= 18 And Age < 35"),
MiddleMen "Middle-aged men"
expression("Gender = {Male} And Age >= 35 And Age < 65"),
OldMen "Older men"
expression("Gender = {Male} And Age >= 65"),
Girls
expression("Gender = {Female} And Age < 18"),
YoungWomen "Young women"
expression("Gender = {Female} And Age >= 18 And Age < 35"),
MiddleWomen "Middle-aged women"
expression("Gender = {Female} And Age >= 35 And Age < 65"),
OldWomen "Older women"
expression("Gender = {Female} And Age >= 65")
};
)
End Metadata

Note: Any variables referenced by the expressions in the Metadata section also need to appear in the input select statement.

Logging Section

The Logging section defines the parameters for the Logging component and specifies that you
want logging to be performed when the DMS file is executed. This section is optional. However,
if you do not specify a Logging section, no logging will be performed. A DMS file should not
contain more than one Logging section. The syntax is:

Logging(name [, "description"])
Group = "Group"
Path = "Path"
Alias = "Alias"
[FileSize = FileSize]
End Logging

name and description define a name and description for the section and should be of type Text.
The following table describes the parameters that you can set within the section.
Logging parameter Description
Group Defines the application group that writes the log
and controls the first three characters of the log
filenames.
Path Defines the location of the log file. You can specify
a full path or a relative path (relative to the location
of the DMS file), such as “MyLogs”. However, the
“..” notation (such as ..\..\MyLogs) is not valid. The
log file will be created in this folder with a .tmp
filename extension.
Alias Defines a name to be used in the logging file. This
means you can identify the logs that originated in
this DMS file when multiple clients are using the
same log file.
FileSize Defines the maximum size of the log file. If you
do not specify this parameter, the maximum size
defaults to 100 KB.

It is generally a good idea to have a Logging section in all your DMS files, as any records that
fail validation will then be written to the log file. For more information, see the topic Validation
Options When Transferring Data on p. 336.

The log filenames are constructed from the first three characters defined in the Group parameter
with the addition of two or more characters to make the name unique and a .tmp filename extension.

Example

The following example shows a DMS file that contains a Logging section that sets the logging
group to “DMGR”, the folder for the log file, an alias name of “Tester”, and a maximum file
size of 500KB.

The example also shows using the log file to record data cleaning information. In the OnNextCase
Event section a string is set up and the Log.LogScript_2 method is called to write the string
to the log file.

'==========================================================
'Licensed Materials - Property of IBM
'
'IBM SPSS Products: Data Collection
'
'(C) Copyright IBM Corp. 2001, 2011
'
'US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP
'Schedule Contract with IBM Corp.
'==========================================================

Logging(myLog)
Group = "DMGR"
Path = "c:\temp"
Alias = "Tester"
FileSize = 500
End Logging

InputDataSource(Input)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd"
SelectQuery = "SELECT Respondent.Serial, visits, before FROM VDATA WHERE Respondent.Serial < 101"
End InputDataSource

OutputDataSource(Output)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Logging.ddf"
MetaDataOutputName = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Logging.mdd"
End OutputDataSource

Event(OnNextCase, "Clean the data")
Dim strDetails
strDetails = CText(Respondent.Serial)

If visits is not Null then
If visits = 0 Then
before = {No}
strDetails = strDetails + " Before = No"
Else
before = {Yes}
strDetails = strDetails + " Before = Yes"
End If
Else
before = {No}
strDetails = strDetails + " Before = No"
End If

dmgrJob.Log.LogScript_2(strDetails)
End Event

Note: This example is provided as a sample DMS file (called Logging.dms) that is installed
with the IBM® SPSS® Data Collection Developer Library. For more information, see the topic
Sample DMS Files on p. 467.

You can use the Log DSC to read and query log files; for example, you can open the log file in
DM Query using the Log DSC.

Event Section

The Event section defines procedural code for cleaning case data, setting up weighting, setting up
case data for derived variables, creating batch tables, etc. The Event section is optional and there
can be more than one Event section in a DMS file. The code must be written in mrScriptBasic.
The syntax is:

Event(name [, "description"])
...
End Event

name and description must be of type Text and name must be a recognized Event section name,
which defines when the processing will take place. The recognized names are:
- OnBeforeJobStart. Used to define procedural code that is to be executed before any of the
data sources are opened. For example, code that uses the IBM® SPSS® Data Collection
Metadata Model to Quantum component to set up card, column, and punch specifications for
use when exporting case data to an IBM® SPSS® Quantum™-format .dat file, or code that
creates an .mdd file from proprietary metadata.
- OnAfterMetaDataTransformation.¹ Used to define procedural code that is to be executed
after any metadata defined in a Metadata section has been merged with the input metadata.
Typically this is used to set up card, column, and punch definitions for variables defined in
the Metadata section.
- OnJobStart.¹ Used to define procedural code that is to be executed after all of the data sources
have been opened and before the processing of the first case begins. For example, code to
set up global variables that are used in the OnNextCase, OnBadCase, and OnJobEnd Event
sections.
- OnNextCase.¹ Used to define procedural code that is to be applied to each case. For example,
code to clean the case data.
- OnBadCase.¹ Used to define procedural code that is to be executed for each record that will
not be transferred because it has failed the validation. Typically used to create a report of
bad cases.
- OnJobEnd.¹ Used to define procedural code that is to be executed after all of the processing
of the individual cases has been completed and before the data sources are closed. For
example, code that closes report files, or uses the Weight component to set up weighting
in the output data source.
- OnAfterJobEnd. Used to define procedural code that is to be executed after the data source
connections are closed. For example, code to create tables, launch a report file, or export an
IBM® SPSS® Quancept™ script.

¹ These sections require a metadata source to be specified in the InputDataSource section.

Each Event section can contain any number of functions and subroutines and can access any of
the registered objects. The objects that are registered vary according to the section. Objects in
other object models can be accessed using the CreateObject function. For more information, see
the topic Using Objects in the Event Sections on p. 274.

Tip: If you are new to objects, see the IBM SPSS Data Collection Developer Library.

Using Objects in the Event Sections

Tip: If you are new to working with objects, see the IBM SPSS Data Collection Developer Library.

Note: If you explicitly call the MDM Document.Open method to open an MDM document, you
need to call the Document.Close method to release the document. For more information, see the
topic OnAfterMetaDataTransformation Event Section on p. 279.
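As a hedged sketch of the Open/Close pairing described above (the .mdd path is hypothetical), explicitly opening and releasing an MDM document in an Event section might look like this:

```
Event(OnBeforeJobStart, "Open and close an MDM document")
    ' Sketch only: the file path is hypothetical.
    Dim oMDM
    oMDM = CreateObject("MDM.Document")
    oMDM.Open("C:\Projects\myproject.mdd")
    ' ... read or set up properties on the document ...
    oMDM.Close()   ' release the document explicitly
End Event
```

Omitting the Close call leaves the document open, which can keep the .mdd file locked for the rest of the job.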

Objects in the OnJobStart, OnNextCase, OnBadCase, and OnJobEnd Event sections

In the OnJobStart, OnNextCase, OnBadCase, and OnJobEnd event sections, the Data Management
Object Model (DMOM) registers a number of objects with the mrScriptBasic engine. This means
that the registered objects are automatically available in the script as intrinsic variables. However,
DMOM requires an input metadata source to do this. This means that you cannot use these Event
sections during a case data-only transfer or when you are using a non-IBM® SPSS® Data
Collection Data Model OLE DB provider to read the data.

The registered objects are:

Job. The Job object is of type IDataManagerJob and is available as an intrinsic variable called
dmgrJob.

Note that although the Job object gives you access to the input and output metadata through the
TransformedInputMetaData and TransformedOutputMetaData properties, these properties are
designed to enable you to set up card, column, and punch definitions and other custom properties
in the OnAfterMetaDataTransformation Event section. It is important that you do not make
any changes to the structure of the input or output metadata in the OnJobStart, OnNextCase,
OnBadCase, OnJobEnd, or OnAfterMetaDataTransformation Event sections. For example, you
should not add or delete variables or categories or change a variable’s data type.

If you have IBM SPSS Data Collection Survey Reporter Professional, the Job.TableDocuments
property returns a collection of TableDocument objects, one for each output data source
that is written using a CDSC that is also read-enabled. A TableDocument object is not
available for non-Data Model format output data sources or IBM® SPSS® Quantum™-format
output data sources (because the Quantum CDSC is not read-enabled). Note that although
you can define your tables in the OnJobStart, OnNextCase, OnBadCase, OnJobEnd, and
OnAfterMetadataTransformation Event sections, you cannot populate or export your tables in
these sections. (Attempting to do so may lead to an error.) You should populate and export the
tables in the OnAfterJobEnd Event section. For more information, see the topic Table Scripting in
a Data Management Script on p. 451.

GlobalVariables. The GlobalVariables object is of type IDataManagerGlobalVariables and is
available in the script as an intrinsic variable called dmgrGlobal. This collection enables you
to share objects between the OnAfterMetaDataTransformation, OnJobStart, OnNextCase,
OnBadCase, and OnJobEnd Event sections. For example, you can add an object to this collection
in the OnJobStart Event section and access the object in the OnNextCase, OnBadCase, and
OnJobEnd Event sections.

Global SQL variables. If you define any global SQL variables in the GlobalSQLVariables section,
each global SQL variable is available with the name defined for it in the GlobalSQLVariables
section.

Questions. The Questions object is available in the script as an intrinsic variable called
dmgrQuestions. The Questions collection contains a Question object for each variable included in
the SelectQuery statement in the InputDataSource section. In addition, each of these variables is
available as a Question object with the name specified in the SelectQuery statement. For example,
if the SelectQuery statement contains the following SELECT statement, the mrScriptBasic will
automatically contain Question objects called Respondent.Serial, age, and gender.

SELECT Respondent.Serial, age, gender FROM vdata

Log. The Log object is available as an intrinsic variable called dmgrLog. Note that if your DMS
file does not have a Logging section, no logging will be performed.

WeightEngines. The WeightEngines object is available as an intrinsic variable called
dmgrWeightEngines and returns a collection of WeightEngine objects, one for each output data
source. Note that the WeightEngine objects are automatically initialized and so you do not need to
call the WeightEngine.Initialize method. In fact calling this method is likely to lead to an error.

Objects in other object models can be accessed using the CreateObject function.

Objects in the OnAfterMetaDataTransformation Event section

In the OnAfterMetaDataTransformation Event section, the Data Management Object Model
(DMOM) registers the Job object with the mrScriptBasic engine and it is available as an intrinsic
variable called dmgrJob. You can access all of the properties on the Job object in this Event
section except for the Questions collection, which is not available.

The Job object gives you access to the input and output metadata through the
TransformedInputMetaData and TransformedOutputMetaData properties. These properties are
designed to enable you to set up card, column, and punch definitions and other custom properties
in the input and output metadata. It is important that you do not make any changes to the structure
of the input or output metadata in this section. For example, you should not add or delete variables
or categories or change a variable’s data type.

The OnAfterMetaDataTransformation Event section requires an input metadata source. This
means that you cannot use this Event section in a case data-only transfer or when you are using a
non-Data Model OLE DB provider to read the data.

Objects in the OnAfterJobEnd Event section

Provided you have specified an input metadata source, the Job object is available as an intrinsic
variable called dmgrJob in the OnAfterJobEnd Event section. You can access all of the
properties on the Job object in this Event section except for the Questions collection, which is
not available. If you have the IBM® SPSS® Data Collection Base Professional Tables Option,
the Job.TableDocuments property returns a collection of TableDocument objects, one for each
suitable output data source. For more information, see the topic Table Scripting in a Data
Management Script on p. 451.

If you have not specified an input metadata source, you can use an OnAfterJobEnd Event section,
but the Job object is not available.

Objects in other object models can be accessed using the CreateObject function.

Objects in the OnBeforeJobStart Event section

The Job object, and the other objects listed above to which it gives access, are not available
in the OnBeforeJobStart Event section. You need to use the CreateObject function to access any
object in this section.

OnBeforeJobStart Event Section

The OnBeforeJobStart Event section defines procedural code that is to be executed before any
of the data sources are opened. For example, code that uses the IBM® SPSS® Data Collection
Metadata Model to Quantum component to set up card, column, and punch specifications for
use when exporting case data to an IBM® SPSS® Quantum™ .dat file, or that sets up custom
properties to customize an export to a .sav file. For more information, see the topic Transferring
Data to IBM SPSS Statistics on p. 342.

In this section you do not have access to any variables you are creating in the Metadata section.
However, you can access the new variables in the OnAfterMetaDataTransformation Event
section, and if necessary you can set up card, column, and punch definitions and other custom
properties for them there.

When exporting proprietary data, you can set up an .mdd file in the OnBeforeJobStart Event
section and then specify the .mdd file as the input metadata source in the InputDataSource section.
For more information, see the topic Creating an .mdd File From Proprietary Metadata on p. 314.
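
For example, the flow might look like the following sketch, in which an .mdd file created in the OnBeforeJobStart Event section is named as the input metadata. The #define name, file locations, and the choice of SPSS Statistics .sav data as the proprietary format are illustrative:

```
#define MYNEW_MDD "C:\Output\MyCreated.mdd"

' ... an OnBeforeJobStart Event section creates MYNEW_MDD here ...

InputDataSource(myInputDataSource)
    ' The Data Source DSC depends on the proprietary format being read
    ConnectionString = "Provider=mrOleDB.Provider.2; _
        Data Source=mrSavDsc; _
        Location=C:\Data\mydata.sav; _
        Initial Catalog=" + MYNEW_MDD
End InputDataSource
```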

The Job object, and the other objects to which it gives access, are not available in the
OnBeforeJobStart Event section. You need to use the CreateObject function to access any object in this section.

Example

The following example shows a DMS file that contains an OnBeforeJobStart Event section that
uses the IBM® SPSS® Data Collection Metadata Model to Quantum component to set up card, column,
and punch definitions in the input metadata so that the case data can be exported to a Quantum .dat
file using Quantum DSC. It also uses the new MDSC capability of Quantum
DSC to create a basic Quantum specification based on the card, column, and punch definitions.
'==========================================================
'Licensed Materials - Property of IBM
'
'IBM SPSS Products: Data Collection
'
'(C) Copyright IBM Corp. 2001, 2011
'
'US Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP
'Schedule Contract with IBM Corp.
'==========================================================

#define COPY_OF_MUSEUM_MDD "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\museum.mdd"

Event(OnBeforeJobStart, "Set up the card and column definitions")
Dim M2Q, MDM, MyDataSource, fso, f

' Create a copy of museum.mdd so that
' we do not update the original file...
Set fso = CreateObject("Scripting.FileSystemObject")
fso.CopyFile("C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.mdd", _
COPY_OF_MUSEUM_MDD, True)

' Make sure that the read-only attribute is not set
Set f = fso.GetFile(COPY_OF_MUSEUM_MDD)
If f.Attributes.BitAnd(1) Then
f.Attributes = f.Attributes - 1
End If

' Create the MDM object and open the Museum .mdd file in read-write mode
Set MDM = CreateObject("MDM.Document")
MDM.Open(COPY_OF_MUSEUM_MDD)

' Check whether a Quantum DSC DataSource object already exists
On Error Resume Next
Set MyDataSource = MDM.DataSources.Find("mrpunchdsc")
If MyDataSource Is Null Then
' Create a Quantum DSC DataSource object if one doesn't already exist
Set MyDataSource = MDM.DataSources.AddNew("mrPunchDsc", "mrPunchDsc", "MDM2Quantum.dat")
End If
Set MDM.DataSources.Current = MyDataSource
Err.Clear()
' Disable error handling...
On Error Goto 0

' Create the MDM2Quantum object
Set M2Q = CreateObject("MDM2QuantumLib.MDM2Quantum")

' Set the MDM Document into the MDM2Quantum object
Set M2Q.MDMDocument = MDM

' Set the MDM2Quantum properties to use the standard Quantum
' setting of multiple cards with 80 columns. Delete these lines
' if you want to use one card of unlimited length

M2Q.SerialFullName = "Respondent.Serial"
M2Q.SerialColCount = 5
M2Q.CardNumColCount = 2
M2Q.CardNumColStart = 6
M2Q.CardSize = 80

' Run the allocation
M2Q.AllocateCardColPunch(True)

' Write the data map to a .csv file using the
' Now function to create a unique filename
M2Q.WriteToCommaSeparatedFile("C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\MDM2Quantum." + CText(CDouble(Now())) + ".csv")

' Save the Museum .mdd file
MDM.Save()

' Use the DSC Registration component to find Quantum DSC
Dim DMComp, dscQuantum
Set DMComp = CreateObject("MRDSCReg.Components")
Set dscQuantum = DMComp.Item["mrPunchDsc"]

' Write out the specs
dscQuantum.Metadata.Save(MDM, "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Museum\Museum", Null)

MDM.Close()

End Event

InputDataSource(myInputDataSource)
ConnectionString = "Provider=mrOleDB.Provider.2; _
Data Source=mrDataFileDsc; _
Location=C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Data\Data Collection File\museum.ddf; _
Initial Catalog=" + COPY_OF_MUSEUM_MDD
End InputDataSource

#define Target "MDM2Quantum"


#Include "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Scripts\Data Management\DMS\Include\QuantumOutput.dms"

Note: This example is provided as a sample DMS file (called MDM2Quantum.dms) that is
installed with the IBM® SPSS® Data Collection Developer Library. For more information,
see the topic Sample DMS Files on p. 467.

OnAfterMetaDataTransformation Event Section

The OnAfterMetaDataTransformation Event section defines procedural code that is to be executed
after the metadata defined in the Metadata section has been merged with the input metadata.
Typically, you would use this section in a DMS file that creates variables in the Metadata section
to set up card, column, and punch specifications or other custom properties for the new variables
before exporting case data to an IBM® SPSS® Quantum™ .dat file or an IBM® SPSS® Statistics
.sav file.

The Job object, and all of the other objects to which it gives access (with the exception of the
Questions collection) are available in the OnAfterMetaDataTransformation Event section. For
more information, see the topic Using Objects in the Event Sections on p. 274.

Note that the OnAfterMetaDataTransformation Event section is not available when you have not
specified an input metadata source.

Example

The following example uses the IBM® SPSS® Data Collection Metadata Model to Quantum
component to set up card, column, and punch definitions in the output metadata after the
metadata defined in the Metadata section has been merged with the metadata specified in
the InputDataSource section. It also uses the new MDSC capability of Quantum DSC to create a Quantum
specification based on the card, column, and punch definitions.

The Job.TransformedOutputMetadata property is used to get the name and location of the output
metadata. If you have not specified an output metadata file name, this will be a temporary
metadata file that is created during the transformation and then deleted.

It is important that you do not make any changes to the structure of the input or output metadata
in this section. For example, you should not add or delete variables or categories or change a
variable’s data type.
Event(OnAfterMetaDataTransformation, "Allocate card columns and create Quantum spec")
Dim M2Q, MDM

Set MDM = CreateObject("MDM.Document")
MDM.Open(dmgrJob.TransformedOutputMetaData[0])

' Create the MDM2Quantum object
Set M2Q = CreateObject("MDM2QuantumLib.MDM2Quantum")

' Set the MDM Document into the MDM2Quantum object
Set M2Q.MDMDocument = MDM

' Set the MDM2Quantum properties to use the standard Quantum
' setting of multiple cards with 80 columns. Delete these lines
' if you want to use one card of unlimited length

M2Q.SerialFullName = "Respondent.Serial"
M2Q.SerialColCount = 5
M2Q.CardNumColCount = 2
M2Q.CardNumColStart = 6
M2Q.CardSize = 80

' Run the allocation
M2Q.AllocateCardColPunch(True)

MDM.Save()

' Use the DSC Registration component to find Quantum DSC
Dim DMComp, dscQuantum
Set DMComp = CreateObject("MRDSCReg.Components")
Set dscQuantum = DMComp.Item["mrPunchDsc"]

' Write out the specs
dscQuantum.Metadata.Save(MDM, "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\OnAfterMetadataTransformation\MySpe

MDM.Close()

End Event

Notice that this example does not check for a DataSource object or set it as the current DataSource
object for the Document. This is because the DataSource object is automatically set up in the
output metadata and made the default DataSource. Also notice that this example does not call the
MDM2Quantum.ClearCardColPunch method. This means that it will preserve any card, column,
and punch definitions that have already been defined in the input metadata.
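
If you do want to discard any existing definitions before reallocating, a call along these lines could be added before the allocation (a sketch; the method is called here with no arguments, which is an assumption):

```
' Discard any card, column, and punch definitions already
' present in the metadata, then reallocate from scratch
M2Q.ClearCardColPunch()
M2Q.AllocateCardColPunch(True)
```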

This example sets up the card, column, and punch definitions in the output metadata
only. If you want to set up the card, column, and punch definitions in the input metadata
in the OnAfterMetadataTransformation Event section as well, you would need to use the
Job.TransformedInputMetadata property to get the name and location of the input metadata
and repeat the relevant code.

Notice that the MDM Document.Close method has been called to release the MDM Document. If
we did not do this, we would get an error, typically containing the text “The process cannot access
the file because it is being used by another process”.

Note: This example is included in a sample DMS file (called
OnAfterMetaDataTransformation.dms) and as a separate Include file (called CardColsExtra.dms)
that are installed with the IBM® SPSS® Data Collection Developer Library. For more
information, see the topic Sample DMS Files on p. 467.

OnJobStart Event Section

Used to define procedural code that is to be executed after all of the data sources have been opened
and before the processing of the first case begins. For example, code to set up global variables that
are used in the OnNextCase, OnBadCase, and OnJobEnd Event sections.

Note that the OnJobStart Event section is not available when you have not specified an input
metadata source. If you attempt to include an OnJobStart Event section in a case data-only
transformation or when using a non-Data Model OLE DB provider, you will typically get an
“Object reference not set to an instance of an object” error.

For information about accessing objects in this section, see Using Objects in the Event Sections.

Example

This example shows setting up:

■ A report file. The CreateObject function is used to create a FileSystemObject, which is a standard Microsoft
object for working with folders and files. The FileSystemObject.CreateTextFile method is then
called to create a text file object, which is added to the job’s global variables collection so that
it can be accessed in the OnNextCase and OnJobEnd Event sections.
■ A default response for a question. The Response.Default property is used to set up a default
response for a question called expect. This means that the Validation.Validate method can be
used to assign the default value to the question in the OnNextCase Event section.

Event(OnJobStart, "Do the set up")
Dim fso, txtfile
Set fso = CreateObject("Scripting.FileSystemObject")
Set txtfile = fso.CreateTextFile("C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output\Cleaning.txt", True, True)
dmgrGlobal.Add("mytextfile")
Set dmgrGlobal.mytextfile = txtfile
expect.Response.Default = {general_knowledge_and_education}
End Event

For more data cleaning examples, see Data Cleaning Examples.



OnNextCase Event Section

Used to define procedural code that is to be applied to each case. For example, code to clean the
case data or set up case data for persisted derived variables.

An error occurs if you attempt to assign a value of the wrong type to a question, for example,
attempting to assign a string value to a numeric question.
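
For example, the standard mrScriptBasic conversion functions can be used to make the type explicit before the assignment (a sketch using the visits numeric question from the cleaning example below):

```
' visits is a numeric question, so convert text input first;
' assigning the string directly would cause a type error
Dim sValue
sValue = "3"
visits = CLong(sValue)
```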

Note that the OnNextCase Event section is not available when you have not specified an input
metadata source. If you attempt to include an OnNextCase Event section in a case data-only
transformation or when using a non-Data Model OLE DB provider, you will typically get an
“Object reference not set to an instance of an object” error. For information about accessing
objects in this section, see Using Objects in the Event Sections.

Unbounded loops are available in the OnNextCase Event section through an HDATA view, which
enables you to read and write response values and to add and remove iterations.

Examples

Data Cleaning Example

This example tests the response to the visits numeric question. This question asks how many
times the respondent has visited the museum previously and is only asked if he or she selected
Yes in response to the before question, which asks whether he or she has visited the museum
before. If the visits question holds a NULL value or is zero, the response to the before question is
automatically set to No, and otherwise the response to the before question is set to Yes. A string is
set up to hold the respondent’s serial number and the response to the before question after cleaning
and this string is then written to the log file. The DMS file would need to have a Logging section.
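
A minimal Logging section for this might look like the following sketch (the section name, group, alias, and path are illustrative):

```
Logging(myLog)
    Group = "DMGR"
    Alias = "cleaning"
    Path = "C:\Program Files\IBM\SPSS\DataCollection\6\DDL\Output"
End Logging
```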

Event(OnNextCase, "Clean the data")
Dim strDetails
strDetails = CText(Respondent.Serial)

If visits is not Null then
If visits = 0 Then
before = {No}
strDetails = strDetails + " Before = No"
Else
before = {Yes}
strDetails = strDetails + " Before = Yes"
End If
Else
before = {No}
strDetails = strDetails + " Before = No"
End If

dmgrJob.Log.LogScript_2(strDetails)
End Event

For more data cleaning examples, see Data Cleaning Examples. For examples of setting up data
for new variables in the OnNextCase Event section, see Creating New Variables.

Unbounded Loop Question Example

This example uses the Household data set to read, modify, add iterations, and delete iterations in
the person loop.

Event(OnNextCase, "Manipulate unbounded question")
Dim Quest, Level1Quest, Level2Quest, Level3Quest
'Numeric loop for Person, and unbounded loop for Trip

'read and modify
for each Quest in person
For each Level2Quest in Quest.trip
For each Level3Quest in Level2Quest
'set country value with 'canada'
if Level3Quest.QuestionName = "country" then
Level3Quest = "canada"
end if
Next
Next
Next

'add new record
if person[2].age = null then
'add person's iteration 2
person[2].age = 35
person[2].numtrips = 0
person[2].name = "new added person"
'add trip's iteration one in person iteration 2
person[2].trip[1].country = "canada"
person[2].trip[1].daysaway = 1
person[2].trip[1].trip = 1
end if

'delete existing record
for each Quest in person
For each Level1Quest in Quest.trip
if Level1Quest.QuestionName = 2 then
'remove all value in current record will remove current record
For each Level2Quest in Level1Quest
Level2Quest = null
Next
end if
Next
Next
End Event

OnBadCase Event Section

Used to define procedural code that is to be executed for each “bad” case, that is, a record that will
n