DWH – PC8 New Features & Enhancements

WK: 1 - Day: 6.1

1. Version 8.1.1 New Features and Enhancements


This document describes new features and enhancements in PowerCenter 8.1.1:

 Code Pages
 Command Line Programs
 Data Analyzer (PowerAnalyzer)
 Data Profiling
 Data Stencil (Visio Client for PowerCenter)
 Domains
 Integration Service
 Metadata Manager (SuperGlue)
 Repository and Repository Service
 Transformations and Caches
 Unstructured Data
 Web Services Provider
 Workflow Monitor
 XML
 PowerCenter Connects
 PowerExchange Client for PowerCenter

1. Code Pages
PowerCenter 8.1.1 supports the following code pages:

 IBM833
 IBM834
 IBM850
 IBM933

2. Command Line Programs


This section describes enhancements to the command line programs.

2.1. infacmd Changes


The infacmd command line program contains new commands in the following groups (a sample call follows the list):

 Alert. Added commands for the following actions: subscribe to, remove from, list or update SMTP options,
and list alert users.
 Domain. Added commands related to service levels and domain options.
 Folder. Added commands for the following actions: create, get info about, list, move, move objects between,
remove, and update.
 Node. Added commands for the following actions: get node name, list node options, and switch to gateway or
worker node.
 Service. Added commands for the following actions: list nodes assigned to service, list services, and create
or update SAP BW Service.
 User. Added commands for the following actions: add, create, remove users, or reset password.
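
A minimal illustrative call from the Service group might look like the line below. The ListServices command name reflects the "list services" action described above, while the domain name, user name, and the option letters (-dn, -un, -pd) are assumed placeholders; check the command reference for the exact syntax in your installation.

infacmd ListServices -dn Domain_Dev -un Administrator -pd <password>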

2.2. infasetup Changes


The infasetup command line program contains a DeleteDomain command.

2.3. pmrep Changes


The pmrep command line program contains new commands to manage connections and versions. New commands
are GetConnectionDetails, ListConnections, PurgeVersion, and DeleteObject. Changed commands include
ObjectExport, ObjectImport, UpdateSeqGenValues, ListAllUsers, and TruncateLog.
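
The sketch below strings a few of these commands together in a hedged way. The command names come from the list above; the Connect options shown (-r, -d, -n, -x), the GetConnectionDetails options, and the connection name ORA_SALES_DW are assumptions used only for illustration.

pmrep Connect -r Dev_Repo -d Domain_Dev -n Administrator -x <password>
pmrep ListConnections
pmrep GetConnectionDetails -n ORA_SALES_DW -t relational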

3. Data Analyzer (PowerAnalyzer)


 Data Analyzer accessibility. We can use the keyboard to access all areas of Data Analyzer.
 Global cache enhancements. Data Analyzer creates a global cache in memory for repository objects
accessed by Data Analyzer users. The global cache uses a locking scheme to prevent deadlocks from
occurring. In a clustered environment, Data Analyzer replicates the global cache across all nodes in the
cluster when we modify an object on one node. We can manage the size of the global cache by configuring
the eviction policy.
 Composite reports. We can use the Data Analyzer SDK to access composite reports from a client application.

4. Data Profiling
 Partitioning. We can create multiple partitions for profile sessions containing source-level and column-level
functions.
 Data Profiling Data Analyzer reports. We can view the Join Complexity Evaluation and Intersource Structure
Analysis Data Profiling reports through Data Analyzer. We can use the new Data Analyzer Intersource
Function Statistics composite report and Intersource Function Metadata report to view profile statistics for
intersource functions.

5. Data Stencil (Visio Client for PowerCenter)


 Name change. Visio Client for PowerCenter is renamed Data Stencil.
 Publish template. After we create a drawing in Visio (.vsd) we can publish it. When we publish a template, we
generate a parameter file (param.xml) and a mapping template file (.xml). We can create a mapping from the
published template.
 Import mapping template. After we publish a template, we can import and define parameters for Data Stencil
mapping templates. We can create a mapping from the mapping template using the command line utility
mapgen or the Mapping Template Import Wizard.

6. Domains
6.1. Alerts
Subscribe to alerts. We can subscribe to alerts to receive email notification about node events such as node failure or
a master gateway election. When we subscribe to alerts, we receive notifications for all domains and services we
have permission on.

6.2. Users
 Set permissions by user. We can view and edit permissions and privileges for each user on the
Administration tab. We can assign multiple permissions to a user when we add a user account or edit the
permissions.
 Status added to each log event. The User Activity Monitoring report shows the status of each log event. The
report indicates whether each log event succeeded or failed.

6.3. High Availability


 Resilience for sources and Lookup transformations. The Integration Service is resilient to temporary network
failures or relational database unavailability when the Integration Service retrieves data from a database for
sources and Lookup transformations. If the Integration Service loses the connection after making the initial
connection, it attempts to reconnect for the amount of time configured for the retry period in the connection
object.

 Operating mode. We can run the Integration Service in safe mode to limit access to the Integration Service
during deployment or maintenance. We can enable the Integration Service in safe mode and we can
configure the Integration Service to run in safe mode on failover.

6.4. Integration Service


6.5. Load Balancer
We can configure the way the Load Balancer dispatches tasks by defining the following properties:

 Resource provision thresholds. We can configure resource provision thresholds for each node in a grid to
cause the Load Balancer to consider or eliminate a node for the dispatch of a task. We can define the
Maximum CPU Run Queue Length and Maximum Memory % thresholds in addition to Maximum Processes.
 Dispatch mode. We can configure the way the Load Balancer dispatches tasks by setting the dispatch mode
for the domain. We can choose round-robin, metric-based, or adaptive dispatch mode.
 Service levels. The Load Balancer uses service levels to determine the order in which to dispatch tasks from
the dispatch queue. We create service levels in the Administration Console. When we configure a workflow,
we assign a service level to the workflow.

6.6. Operating Mode


Normal and safe operating mode. We can run the Integration Service in normal or safe operating mode. Run the
Integration Service in safe operating mode to limit access to the Integration Service during migration and
maintenance activities. We configure the operating mode in the Administration Console.

6.7. Parameters and Variables


New sections in parameter files. We can define service variables, service process variables, workflow variables,
session parameters, mapping parameters, and mapping variables in the following sections in a parameter file:

Section: [Global]
Scope: Parameters and variables defined in this section apply to all Integration Services and Integration Service processes in the domain, plus all workflows, worklets, and sessions.

Section: [Service:Integration_Service_name]
Scope: Parameters and variables defined in this section apply to the named Integration Service, plus all workflows, worklets, and sessions the service runs.

Section: [Service:Integration_Service_name.ND:node_name]
Scope: Parameters and variables defined in this section apply to the Integration Service that runs on the named node (the Integration Service process), plus all workflows, worklets, and sessions the service process runs.

The parameters and variables we define in these sections apply when we run a workflow or session that uses the
parameter file.
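
As a rough sketch, a parameter file that uses these sections might look like the fragment below. The [Global], [Service:...], and [Service:....ND:...] headings follow the table above; the service name IS_Dev, node name node01, the folder/workflow/session heading, and $$LoadDate are hypothetical, and the built-in variable names ($PMSuccessEmailUser, $PMSessionLogCount, $PMSessionLogDir) are used only as familiar examples.

[Global]
$PMSuccessEmailUser=admin@example.com

[Service:IS_Dev]
$PMSessionLogCount=5

[Service:IS_Dev.ND:node01]
$PMSessionLogDir=/data/infa/node01/SessLogs

[Sales.WF:wf_load_orders.ST:s_m_load_orders]
$$LoadDate=2006-12-31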
Expanded support for parameters and variables. We can use parameters and variables in the following places:

Place – Valid Parameter and Variable Types

Table owner name for relational sources – All
Table name prefix for relational targets – All
FTP file name and directory – All
Lookup cache file name prefix – All
Email task (address, subject, and body) – Service variables, workflow variables
Suspension email (address, subject, and body) – Service variables, workflow variables
Post-session email (address, subject, and body) – All
Target pre- and post-session SQL commands – All
Pre- and post-session shell commands – All
Call text for an unconnected Stored Procedure transformation – All
Target update overrides – All
Table name prefix for relational error logs – All
Command tasks – Service variables, workflow variables
Workflow log file names – Service variables, workflow variables
Workflow variables in sessions. We can use workflow variables in sessions. If we do this, the Integration Service
treats them as parameters. Therefore, the values do not change when the session runs.

6.8. Performance
 Pushdown optimization. We can push transformation logic to the source or target database when the Lookup
transformation contains a lookup override. To perform source-side, target-side, or full pushdown optimization
for a session containing lookup overrides, configure the session for pushdown optimization and select a
pushdown option that allows us to create views. We can also use full pushdown optimization when we use
the target loading option to treat source rows as delete.
 Sorter transformation. The Sorter transformation has been enhanced to use a more compact format, which significantly reduces the temporary disk space needed to sort a large data set that does not fit in memory. In addition, the sort algorithm has been improved to significantly reduce sort time.

6.9. Session Parameters


 General session parameter. We can parameterize general session properties such as table owner name,
table name prefix, FTP file name and directory, Lookup cache file name prefix, and email addresses using
the general session parameter "$ParamName."
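
Extending the parameter file sketch in section 6.7, a general session parameter could be referenced in a session property and resolved from the parameter file as shown below; the parameter name $ParamLkpCachePrefix and the folder/workflow/session names are hypothetical, and only the $Param naming convention comes from the feature description.

Session property (Lookup cache file name prefix):  $ParamLkpCachePrefix

Parameter file entry:
[Sales.WF:wf_load_orders.ST:s_m_load_orders]
$ParamLkpCachePrefix=lkp_orders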

6.10. Targets
Generate flat file targets by transaction. We can generate a separate flat file target each time the Integration Service
starts a new transaction. We can dynamically name each file. To generate a separate output file for each transaction,
we add a FileName port to the flat file target definition. When we connect the FileName port in the mapping, the
Integration Service writes a target file at each commit. The Integration Service uses the FileName port value to name
the output file. We can generate output files from source-based or user-defined commits.
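
For user-defined commits, a Transaction Control transformation upstream of the flat file target typically decides where each new file starts, and an expression feeds the FileName port. The sketch below is illustrative only: TC_COMMIT_BEFORE and TC_CONTINUE_TRANSACTION are the standard transaction control constants, while the port names NEW_DEALER_FLAG and DEALER_NAME are hypothetical.

Transaction control condition:
IIF(NEW_DEALER_FLAG = 1, TC_COMMIT_BEFORE, TC_CONTINUE_TRANSACTION)

Expression connected to the FileName port:
DEALER_NAME || '_orders.out'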

7. Metadata Manager (SuperGlue)


This section describes changes to the Metadata Manager functionality.

7.1. Administration
 Authentication. We can use LDAP for Metadata Manager on JBoss Application Server, WebLogic Application
Server, or WebSphere Application Server.
 Clustering. We can use clusters for the Metadata Manager application on JBoss Application Server,
WebLogic Application Server, or WebSphere Application Server. We use clusters to distribute the Metadata
Manager application load across multiple machines. Clustering may increase performance when the
application has a large number of concurrent users.
 Secure Sockets Layer (SSL). We can configure Metadata Manager to use the SSL protocol to provide a
secure and authentic connection between the Metadata Manager server and a web browser client.

7.2. Installation
Sybase ASE. We can install the Metadata Manager Warehouse and PowerCenter repository for Metadata Manager
on Sybase ASE.

7.3. Utilities
 EAR Repackager utility. The EAR Repackager utility extracts files from the Metadata Manager EAR file and repackages customized files back into it. We may need to customize files in the EAR file to change the Metadata Manager images, color schemes, style sheets, or configuration files.
 License Update utility. The License Update utility updates the Metadata Manager license. We may need to
update an existing license with a new license key to extend the expiration date.
 Repository Backup utility. The Repository Backup utility backs up and recovers a Metadata Manager
repository.

 Update System Accounts utility. The Update System Accounts utility modifies the user name and password
for the Metadata Manager system administrator and system daemon.

7.4. XConnects
 Business Objects XConnect. The Business Objects XConnect extracts metadata from Business Objects XI.
 PowerCenter XConnect. We can configure the PowerCenter XConnect to capture information about flat file sources, lookups, and targets that are specified in parameter files. We can view the flat files in data lineage and where-used analysis.

8. Repository and Repository Service


This section describes changes to the repository functionality.

 Cancel auto-reconnect. In the Designer, Workflow Manager, and Repository Manager, we can temporarily
disable PowerCenter Client resilience to prevent the client from attempting to reestablish a repository
connection during the resilience timeout period. We cancel automatic reconnection if we do not want to wait
for the 180-second resilience timeout period to expire.
 Purge object versions. We can purge object versions from the repository or folder. We can purge versions of
all deleted objects or objects deleted before a specified end time. We can also purge specified versions of
active objects based on a cutoff time. We specify the versions by designating the number of checked-in
versions to keep, by setting a purge cutoff time, or both. We can preview the purge results to verify that the
purge criteria produce the expected results. We can also save the purge results to an output file.
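
A hedged command line sketch of a purge follows. PurgeVersion is the pmrep command named in section 2.3, but the option letters shown for the folder, the number of checked-in versions to keep, the purge cutoff time, previewing, and the output file are assumptions; confirm them against the command reference before use.

pmrep Connect -r Dev_Repo -d Domain_Dev -n Administrator -x <password>
pmrep PurgeVersion -f SALES -n 3 -t '12/31/2006 23:59:59' -p -o purge_preview.txt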

9. Transformations and Caches


 HTTP transformation. We can connect to an HTTP server to use its services and applications. When we run
a session with an HTTP transformation, the Integration Service connects to the HTTP server and issues a
request to retrieve data from or update data on the HTTP server, depending on how we configure the
transformation.
 SQL transformation. We can process ANSI SQL queries midstream in a pipeline. We can insert, delete, update, and retrieve rows from a database. When we create an SQL transformation, we configure it to run in script mode or query mode. Script mode allows us to run scripts that are external to the transformation. Query mode allows us to run queries that we define in the transformation (see the sample query after this list). We can use an existing database connection, or we can pass database connection information to the SQL transformation as input data at run time.
 Configure caches. When we configure cache sizes, we can use a cache calculator to estimate the amount of memory required to process the transformation. Based on the type of mapping object, we provide information such as the number of groups, the number of source rows, or the number of ranks.
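
As an illustration of query mode, a query like the one below could be defined inside the SQL transformation, with input port values bound into the statement through the ?port_name? notation; the table and port names here are hypothetical.

SELECT order_id, order_total
FROM orders
WHERE customer_id = ?in_customer_id?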

10. Unstructured Data


Unstructured Data transformation. We can run the Unstructured Data transformation on a 64-bit Integration Service
on HP-UX (Itanium) and Solaris.

11. Web Services Provider


 Stateful services. All services hosted by the Web Services Hub are now stateful services. State handling is the same as in version 7.1.4, and the stateful operations available in version 7.1.4 are also available in version 8.1.1. A client application must call the Login operation to access the repository before it calls other web service operations.
 Advanced properties. On the Administration Console, two additional advanced properties are available for the
Web Services Hub:
 SessionExpiryPeriod. The number of seconds that a session can remain idle before the session times out
and the session ID becomes invalid.
 MaxLMConnections. The maximum number of connections to the Integration Service that can be open at one
time for the Web Services Hub.
 Web Services Hub URL. The URL for the Web Services Hub can use the context name PowerCenter or wsh:

http://<hostname>:<portnum>/PowerCenter

http://<hostname>:<portnum>/wsh

12. Workflow Monitor


View binary log files. We can view existing binary log files for sessions and workflows in the Log Events window.

13. XML
XML foreign key column names. We can configure the Designer to prefix generated foreign key columns with the
name of the XML view and the name of the primary key column.

14. PowerCenter Connects


14.1. PowerCenter Connect for IBM MQSeries
Secure Sockets Layer (SSL) protocol. We can configure IBM MQSeries to use the SSL protocol to provide a reliable
end-to-end secure and authentic connection between the Integration Service and MQSeries.

14.2. PowerCenter Connect for JMS


 Durable subscriptions. PowerCenter Connect for JMS uses durable subscriptions to enable a JMS subscriber to receive messages even if the subscriber is inactive. Using durable subscriptions, the Integration Service retains the messages that are published while the subscriber is not active and delivers the messages after the subscriber becomes active again.
 Secure Sockets Layer (SSL). PowerCenter Connect for JMS uses SSL to encrypt and authenticate data. SSL creates a secure connection between a client and a server, over which data is transmitted securely. When we configure PowerCenter Connect for JMS, we add the SSL properties, which are specific to the JMS provider, to the jndi.properties file and copy the file to the Integration Service source files directory (see the sample below).
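
A generic jndi.properties sketch appears below. The first two entries are standard JNDI settings; the SSL-related keys are placeholders only, because the actual property names and values depend on the JMS provider.

java.naming.factory.initial=com.example.jms.NamingContextFactory
java.naming.provider.url=ssl://jmshost.example.com:7243

# Provider-specific SSL properties (names vary by JMS provider)
com.example.jms.ssl.trusted_certs=/infa/certs/provider_ca.pem
com.example.jms.ssl.identity=/infa/certs/client_identity.p12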

14.3. PowerCenter Connect for Salesforce.com


HTTP proxy server support. PowerCenter Connect for Salesforce.com supports HTTP proxy server authentication.
We can create a pmsfdc.ini file to configure the PowerCenter Client to connect to Salesforce using an HTTP proxy
server. Similarly, we can configure the Integration Service to connect to Salesforce using an HTTP proxy server
during a session.

14.4. PowerCenter Connect for SAP NetWeaver BW Option


(PowerCenter Connect for SAP BW)
Certification. PowerCenter Connect for SAP NetWeaver BW Option is certified with SAP BW 7.0.

14.5. PowerCenter Connect for SAP NetWeaver mySAP Option


(PowerCenter Connect for SAP R/3)
Certification. PowerCenter Connect for SAP NetWeaver mySAP Option is certified with SAP ECC 6.0.

14.6. PowerCenter Connect for TIBCO


 64-bit support. We can read messages from a TIB/Rendezvous source or write messages to
TIB/Rendezvous when the Integration Service runs on 64-bit Solaris.
 Secure connections. We can read and write TIB/Rendezvous messages using a secure daemon, which uses SSL. We must configure the secure daemon if clients in our network use SSL to connect to it. TIB/Rendezvous supports the following modes of communication through its Rendezvous daemons:
 Non-secure TCP connections
 SSL connections, allowing secure client communication over non-secure networks

 Each secure daemon instance authorizes a set of trusted users. The secure daemon allows a client transport
to connect if the client presents valid identification as an authorized user.

14.7. PowerCenter Connect for Web Services


WSDL parsing. When we import a web service source or target definition, we can prefix the XML view name to every
foreign key column. Prefixing the XML view name to foreign key columns prevents foreign key conflicts. We can
generate a foreign key with the format FK_<nameOfChildView>_<nameOfPKView_nameOfPKColumn>. The foreign
key naming convention is applicable as long as the generated name is less than 80 characters long.

14.8. PowerExchange Client for PowerCenter


 Target support. We can write to ADABAS databases, sequential files, and VSAM datasets.
 Restart token file. We can automatically keep numerous versions of the Restart token file by specifying the
number to keep in the new CHANGE CAPTURE connection attribute Number of Runs to Keep Restart Token
File.
 Restart token file directory. We no longer need to pre-allocate the Restart token file directory. The CHANGE CAPTURE connection attribute "Restart Token File Folder" now defaults to $PMRootDir/Restart, and the directory is created automatically if it does not exist.
 Real-time extraction. We can stop real-time extraction at the end of the current log data by setting the idle
time to 0 in the CHANGE CAPTURE connection object.
