
Question #1

What is a control task?


A control task is used to alter the normal processing of a workflow by stopping, aborting,
or failing a workflow or worklet.

Question #2
What is a pipeline partition and how does it provide a session with higher performance?
Within a mapping, a session can break apart each source-qualifier-to-target pipeline into its own reader/transformation/writer thread(s). This allows the Integration Service to run each partition in parallel with other pipeline partitions in the same mapping. This parallelism creates a higher-performing session.

Question #3
What is the maximum number of partitions that can be defined in a single pipeline?
You can define up to 64 partitions at any partition point in a pipeline.

Question #4
Pipeline partitioning is designed to increase performance; however, list one of its disadvantages?
Increasing the number of partitions increases the load on the node. If the node does not
contain enough CPU bandwidth, you can overload the system.

Question #5
What is a dynamic session partition?

A dynamic session partition is one where the Integration Service scales the number of session partitions at runtime. The number of partitions is based on a number of factors, including the number of nodes in a grid or the number of source database partitions.

Question #6
List three dynamic partitioning configurations that cause a session to run with one partition?
1. You set dynamic partitioning to the number of nodes in the grid, and the session does
not run on a grid.
2. You create a user-defined SQL statement or a user-defined source filter.
3. You use dynamic partitioning with an Application Source Qualifier.

Question #7
What is pushdown optimization?
Pushdown optimization is a feature within Informatica PowerCenter that allows us to
push the transformation logic in a mapping into SQL queries that are executed by the
database. If not all mapping transformation logic can be translated into SQL, then the
Integration Service will process what is left.

Question #8
List the different types of pushdown optimization that can be configured?
1. Source-side pushdown optimization: The Integration Service pushes as much transformation logic as possible to the source database.
2. Target-side pushdown optimization: The Integration Service pushes as much transformation logic as possible to the target database.
3. Full pushdown optimization: The Integration Service attempts to push all transformation logic to the target database. If the Integration Service cannot push all transformation logic to the database, it performs both source-side and target-side pushdown optimization.

Question #9
For which databases are we able to configure pushdown optimization?
IBM DB2
Microsoft SQL Server
Netezza
Oracle
Sybase ASE
Teradata
Databases that use ODBC drivers

Question #10
List several transformations that work with pushdown optimization to push logic
to the database?
Aggregator, Expression, Filter, Joiner, Lookup, Router, Sequence Generator, Sorter,
Source Qualifier, Target, Union, Update Strategy

Question #11
What is real-time processing?

Data sources such as JMS, WebSphere MQ, TIBCO, webMethods, MSMQ, SAP, and web services can publish data in real time. These real-time sources can be leveraged by Informatica PowerCenter to process data on demand. A session can be specifically configured for real-time processing.

Question #12
What types of real-time data can be processed with Informatica PowerCenter?
1. Messages and message queues. Examples include WebSphere MQ, JMS, MSMQ,
SAP, TIBCO, and webMethods sources.
2. Web service messages. An example is receiving a message from a web service
client through the Web Services Hub.
3. Change data from PowerExchange change data capture sources.

Question #13
What is a real-time processing terminating condition?
A real-time processing terminating condition determines when the Integration Service
stops reading messages from a real-time source and ends the session.

Question #14
List three real-time processing terminating conditions?
1. Idle time: The amount of time the Integration Service waits to receive messages before it stops reading from the source.
2. Message count: The number of messages the Integration Service reads from a real-time source before it stops reading from the source.
3. Reader time limit: The amount of time, in seconds, that the Integration Service reads source messages from the real-time source before it stops reading from the source.

Question #15
What is real-time processing message recovery?
Real-time processing message recovery allows the Integration Service to recover unprocessed messages from a failed session. Recovery files, tables, queues, or topics are used to recover the source messages or IDs. Recovery mode can be used to recover these unprocessed messages.

Question #16
What factors play a part in determining a commit point?
1. Commit interval
2. Commit interval type
3. Size of the buffer blocks

Question #17
List all configurable commit types?
1. Target-based commit: Data is committed based on the number of target rows and the key constraints on the target table.
2. Source-based commit: Data is committed based on the number of source rows.
3. User-defined commit: Data is committed based on transactions defined in the mapping properties.

Question #18

What performance concerns should you be aware of when logging error rows?
Session performance may decrease when logging row errors because the Integration
Service processes one row at a time instead of a block of rows at once.

Question #19
What functionality is provided by the Integration Service when error logging is
enabled?
Error logging builds a cumulative set of error records in an error log file or error table created by the Integration Service.

Question #20
What is the difference between stopping and aborting a workflow session task?
A stop command tells the Integration Service to stop reading session data, but it will continue writing and committing data to targets.
An abort command works like the stop command; however, it tells the Integration Service to stop processing and committing data to targets after 60 seconds. If all processing is not complete after this timeout period, the session is terminated.

Question #21
What is a concurrent workflow?
A concurrent workflow is a workflow that can run as multiple instances concurrently.
Concurrent workflows can be configured in one of two ways:
1. Allow concurrent workflows with the same instance name.
2. Configure unique workflow instances to run concurrently.

Question #22

What is Informatica PowerCenter grid processing and its benefits?


Grid processing is a feature of PowerCenter that enables workflows and sessions to be run across multiple domain nodes. The grid's parallel processing provides increased performance and scalability.

Question #23
List the types of parameters and variables that can be defined within a parameter
file?
Service variables, service process variables, workflow and worklet variables, session
parameters, and mapping parameters and variables.
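As a hedged illustration (the folder, workflow, session, and parameter names below are hypothetical), parameter file entries are grouped under headings that identify their scope, with one assignment per line:

[MyFolder.WF:wf_LoadCustomers.ST:s_m_LoadCustomers]
$DBConnection_Source=Ora_Src
$$LastRunDate=2015-01-01
$$MyMappingParam=10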

Question #24
In PowerCenter, in what two locations can one specify a parameter file?
1. Within the session task.
2. Within the workflow.

Question #25
How are mapplet parameters and variables defined within a parameter file different?
Mapplet parameters and variables are different because they must be prefixed with the name of the mapplet in which they were defined. For example, a parameter named MyParameter, defined within mapplet MyMapplet, would be set to a value of 10 in a related parameter file by using the syntax: MyMapplet.MyParameter=10.

Question #26
What is an SQL Transformation in Informatica?

The SQL transformation is active or passive, and connected. It allows for runtime SQL processing: data can be retrieved, inserted, updated, and deleted midstream in a mapping pipeline. SQL transformations have two modes, script mode and query mode. Script mode allows externally located script files to be called to execute SQL. Query mode allows SQL to be placed within the transformation's editor to execute SQL logic.
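For example, in query mode a static query can bind input ports using the ?port? syntax (the table and port names here are hypothetical):

SELECT CUST_NAME, CUST_STATUS
FROM CUSTOMERS
WHERE CUST_ID = ?IN_CUST_ID?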

Question #27
What is dynamic lookup cache?
Dynamic lookup cache is a cache that is built from the first lookup request. Each subsequent row that passes through the lookup queries the cache. As these rows are processed and inserted into the lookup's target table, the lookup cache is also updated dynamically.
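As a hedged note, a lookup configured with a dynamic cache exposes a NewLookupRow output port (0 = no change, 1 = row inserted into the cache, 2 = row updated in the cache) that downstream logic can act on; for example, a hypothetical update strategy expression could be:

IIF(NewLookupRow = 1, DD_INSERT, IIF(NewLookupRow = 2, DD_UPDATE, DD_REJECT))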

Question #28
What is an Unstructured Data transformation?
The Unstructured Data transformation is active or passive, and connected. It leverages the Data Transformation application to transform unstructured, semi-structured, and structured file formats such as messaging formats, HTML pages, PDF documents, ACORD, HIPAA, HL7, EDI-X12, EDIFACT, and SWIFT. Once data has been transformed by Data Transformation, it can be returned to a mapping pipeline and further transformed and/or loaded to an appropriate target.
Question #1

What is an Expression Transformation in Informatica?


An expression transformation in Informatica is a common PowerCenter mapping transformation. It is used to transform data passed through it one record at a time. The expression transformation is passive and connected. Within an expression, data can be manipulated, variables created, and output ports generated. We can write conditional statements within output ports or variables to help transform data according to our business requirements.
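EX: a hypothetical output port using conditional logic to derive a bonus from a salary port:

O_BONUS = IIF(SALARY > 100000, SALARY * 0.10, SALARY * 0.05)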
Check out my Expression Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #2

What is a Sorter Transformation in Informatica?


The sorter transformation in Informatica helps us sort collections of data by a port or ports. This functionality is very much like an ORDER BY SQL statement where we specify certain field(s) we want to ORDER BY. The sorter transformation is active and connected. The sorter transformation contains some additional functionality that is very useful. For example, we can check the Distinct Output Rows property to pass only distinct rows through a pipeline. We can also check the Case Sensitive property to sort data with case sensitivity in mind.
Check out my Sorter Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #3

What is a Decode in Informatica?


A decode in Informatica is a function used within an Expression Transformation. It
functions very much like a CASE statement in SQL. The decode allows us to search for
specific values in a record, and then set a corresponding port value based on search
results.
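EX: a hypothetical expression translating status codes to descriptions, where the last argument acts as the default:

DECODE(STATUS_CODE, 'A', 'Active', 'I', 'Inactive', 'Unknown')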
Check out my Decode in Informatica post for more in-depth Informatica interview question knowledge.
Question #4

What is a Master Outer Join in Informatica?


A master outer join in Informatica is a specific join type setting within a joiner transformation. There are other join types, but let's focus on the master outer join. The joiner transformation allows us to join two separate pipelines of data by specifying key port(s) from each pipeline. We define each pipeline as either a master or detail. The master outer join is very much like a LEFT OUTER JOIN in SQL, considering the detail pipeline as the LEFT side. So the master outer join returns all rows from the detail pipeline, and the matched rows from the master pipeline.

Check out my Master Outer Join in Informatica post for more in-depth Informatica interview question knowledge.
Question #5

What is a Mapplet in Informatica?


The Mapplet in Informatica is essential to creating reusable mappings in Informatica. That is essentially what a mapplet is for. Reuse is essential to building efficient business intelligence systems quickly. Within Informatica PowerCenter is a mapplet designer where we can copy existing mapping logic or create new logic we want to reuse in multiple mappings. We can create input and output transformations within a mapplet that define input and output ports for our mapplet within a mapping.
Check out my Mapplet in Informatica post for more in-depth Informatica interview question knowledge.
Question #6

What is an Update Strategy Transformation in Informatica?


The Update Strategy Transformation in Informatica helps us tag each record passed
through it for insert, update, delete, or reject. This information lets the Integration
Service know how to treat each record passed to a target. We can set this with any type
of conditional logic within the update strategy expression property. This functionality is
essential to control how data is stored within our business intelligence databases.
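EX: a hypothetical update strategy expression that flags records for insert when a lookup found no existing key, and for update otherwise:

IIF(ISNULL(LKP_CUSTOMER_KEY), DD_INSERT, DD_UPDATE)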
Check out my Update Strategy Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #7

What is a Router Transformation in Informatica?


The Router Transformation in Informatica allows us to split a single pipeline of data into multiple pipelines. We do this by defining group filter conditions on the transformation's Groups tab. We can create as many groups as we want, and therefore as many new pipelines. The router checks one record at a time against the group filter conditions and routes it down the appropriate path or paths. Keep in mind a single record may be copied to many records if it matches multiple group filter conditions. Many times we use the router to check specific primary key conditions to determine if we want to insert, update, or delete data from a target table.
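EX: hypothetical group filter conditions splitting new records from changed ones:

NEW_RECORDS: ISNULL(LKP_KEY)
CHANGED_RECORDS: NOT ISNULL(LKP_KEY) AND SRC_CHECKSUM != LKP_CHECKSUM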
Check out my Router Transformation in Informatica post for more in-depth Informatica interview question knowledge.

Question #8

What is a Rank Transformation in Informatica?


The Rank Transformation in Informatica lets us sort and rank the top or bottom set of records based on a specific port. You can only have one rank port. Once we have selected which port to rank on, we must set a Top/Bottom attribute value and a Number of Ranks attribute value. Top/Bottom sets a descending (Top) or ascending (Bottom) rank order. The Number of Ranks attribute determines how many records to return. If we set it to 10, we will return 10 records.
Check out my Rank Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #9

What is a Filter Transformation in Informatica?


A Filter Transformation in Informatica is active and connected. Records passed through it are allowed through only if they evaluate to TRUE for the developer-defined filter condition. To help mapping performance, always place the filter condition as early in the mapping data flow as possible. This will reduce the processing needed by downstream mapping transformations.
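EX: a hypothetical filter condition that passes only active records with a non-null key:

NOT ISNULL(CUST_ID) AND CUST_STATUS = 'A'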
Check out my Filter Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #10

What is a Sequence Generator Transformation in Informatica?


The Sequence Generator Transformation in Informatica is both passive and connected.
Since its primary purpose is to generate integer values from both its NEXTVAL and
CURRVAL default output ports, it is very helpful in creating new surrogate key values.
We would generally leverage the NEXTVAL port to accomplish this.
Check out my Sequence Generator Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #11

What is a Joiner Transformation in Informatica?


The Joiner Transformation in Informatica is used to join two data pipelines in a mapping. We specify one or more ports from each pipeline as a join condition to relate each pipeline. This functionality is essential to relating data from multiple sources in a mapping, including flat files, databases, XML files, and more. Within the joiner transformation we can specify different join types (Normal, Master Outer, Detail Outer, Full Outer) similar to our join options in SQL (INNER, LEFT OUTER, RIGHT OUTER, FULL OUTER).
Check out my Joiner Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #12

What are Active and Passive Transformations in Informatica?


Most Informatica transformations are either passive or active. A passive transformation is one where records pass through without records ever being dropped or added. In contrast, an active transformation reduces or adds records to our target record pipeline count. Some transformations, such as the lookup transformation in Informatica, can be either active or passive.
Check out my Active and Passive Transformations in Informatica post for more in-depth Informatica interview question knowledge.
Question #13

What is an Aggregator Transformation in Informatica?


An Aggregator Transformation in Informatica is very useful for aggregating data in an Informatica PowerCenter mapping. The aggregator behaves similarly to a GROUP BY statement in SQL. With the aggregator transformation we can select a specific port or ports to aggregate by and apply aggregate functions (SUM, MAX, MIN, etc.) to all non-group-by ports. If we do not apply an aggregate function to a non-group-by port, the last record's value will be set for this port. Aggregation is an important part of integrating data, so make sure you understand the aggregator transformation in Informatica.
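EX: with a hypothetical DEPT_ID checked as the group by port, output ports could aggregate salaries per department:

O_TOTAL_SALARY = SUM(SALARY)
O_MAX_SALARY = MAX(SALARY)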
Check out my Aggregator Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #14

What is a Union Transformation in Informatica?


A Union Transformation in Informatica functions just like a UNION ALL statement in SQL. It allows us to incorporate two pipelines of data with the same port names and data types into a single pipeline of data. Keep in mind that the union transformation does not collapse records with the same port values into a single distinct record. In SQL we could write a UNION statement for this instead of UNION ALL, but this functionality does not currently exist in the union transformation in Informatica.
Check out my Union Transformation in Informatica post for more in-depth Informatica interview question knowledge.
Question #15

What is an Informatica Worklet?


Informatica Worklets allow us to apply re-usability at the Informatica workflow level. This concept is very similar to the mapplet; however, mapplets apply re-usability at the mapping level.
So with a worklet we can add tasks (session, command, etc.) and logic just as we would in a workflow. The difference with the worklet is that we can add it to any number of workflows we want. Again, this reuse is very helpful, saving us time by developing once and keeping logic in one place instead of many.
Check out my Informatica Worklets post for more in-depth Informatica interview question knowledge.
Question #16

Walk through a simple Informatica mapping on how to load a Type 2 slowly changing dimension (SCD)?
A typical mapping of this nature is going to start with a source table or set of tables. These tables may reside in an ODS or a set of historical tables in a data warehouse, depending on a company's architecture.
Before populating a dimension table with similar attributes, we need to determine if our new historical or ODS records contain data that differs from our target dimension. In our source qualifier we should use some custom SQL to limit the records to those that have been updated since our last run. The general idea is to then place lookup transformation(s) pointing to our target dimension table to retrieve attributes comparable to our source attributes. After getting values for all our attributes from our dimension table, we can use data concatenation to merge our lookup data with our original source data in an expression transformation. A great function to use at this point is MD5, which allows us to compare our recently updated record attributes to our existing dimension record attributes (see the sketch below). After doing this compare, we can filter out records that have not changed and insert the newly changed records.
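As a hedged sketch of that MD5 comparison (all port names here are hypothetical), an expression transformation can hash the concatenated source attributes and compare the result to the same hash computed over the attributes returned by the dimension lookup:

V_SRC_HASH = MD5(CUST_NAME || CUST_ADDRESS || CUST_PHONE)
V_DIM_HASH = MD5(LKP_CUST_NAME || LKP_CUST_ADDRESS || LKP_CUST_PHONE)
O_CHANGED = IIF(V_SRC_HASH != V_DIM_HASH, 'Y', 'N')

A downstream filter or router on O_CHANGED = 'Y' then drops the unchanged records.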

Question #17

What does it mean to transform data?


Within PowerCenter we have many transformations that can modify record counts and
change/add attribute values. Altering data in this fashion is what it means to transform
data.
Question #18

What is an Informatica mapping?


A mapping exists in the Informatica PowerCenter Mapping Designer. A mapping consists of at least one source and one target. Many times a variety of transformations are used in a mapping between sources and targets to modify/enhance data according to business needs. So we might say a mapping logically defines the ETL process.
Question #19

What SQL statement is comparable to a Union Transformation, UNION or UNION ALL?
UNION ALL
Question #20

What is an Informatica PowerCenter Task?


An Informatica PowerCenter task allows us to build workflows and worklets. There are a
variety of tasks that allow a developer to call code to be executed in a specific order. A
series of session tasks for example can be used to initiate mappings to extract,
transform, and load data.
Question #21

Describe an Informatica PowerCenter Workflow?


A workflow is developed in the Workflow Manager. It is constructed by connecting tasks in a logical fashion to execute specific sets of code (mappings, scripts, etc.). The finished workflow can then be started, which will begin to run the tasks within it in the order defined.
Question #22

List each PowerCenter client application and their basic purpose?

Repository Manager
Manages repository connections, folders, objects, users, and groups.

Administration Console
Performs domain and repository service tasks such as create/configure, upgrade/delete, start/stop, and backup/restore for nodes and repository services.

Designer
Creates ETL mappings.

Workflow Manager
Creates and starts workflows/tasks.

Workflow Monitor
Monitors and controls workflows. Access to runtime metrics and session logs is also available.

Question #23

Describe the purpose of a variable port within an expression transformation?


A variable port is designated by checking the V checkbox next to the port. A variable is set and persists from record to record across the entire data set passed through the expression transformation. This useful feature can be used in conjunction with conditional logic to assist with applying business logic of some kind. A major way I have personally used variable ports is to generate a new surrogate key as each new insert record is found (see the sketch below).
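As a hedged sketch of that surrogate key pattern (the port names are illustrative), a variable port can hold a running key value that increments only when the lookup finds no existing key, with V_NEXT_KEY seeded from the current maximum key (for example, via a mapping variable):

V_NEXT_KEY = IIF(ISNULL(LKP_EXISTING_KEY), V_NEXT_KEY + 1, V_NEXT_KEY)
O_SURROGATE_KEY = IIF(ISNULL(LKP_EXISTING_KEY), V_NEXT_KEY, LKP_EXISTING_KEY)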
Question #24

What is the purpose of the INITCAP function?


The INITCAP function capitalizes the first letter in each word of a string and converts all
other letters to lowercase.
EX: INITCAP(IN_DATA)

IN_DATA: informatica interview questions
RETURN VALUE: Informatica Interview Questions

Question #25

When thinking about performance, should active transformations be placed at the beginning or end of a mapping?
The beginning of the mapping, since they can reduce the number of records passed to downstream transformations, reducing mapping overhead.
Question #26

List the different join types within a Joiner transformation and describe each
one?
Normal Join: keeps only matching rows based on the condition.
Master Outer Join: keeps all rows from the detail and matching rows from the master.
Detail Outer Join: keeps all rows from the master and matching rows from the detail.
Full Outer Join: keeps all rows from both master and detail.
Question #27

What is a PowerCenter Shortcut and what are its benefits?


A shortcut is a dynamic link to an original Informatica PowerCenter object. If the original
object is edited, all shortcuts inherit the changes.
Question #28

What does the Revert to Saved feature do?


If unwanted changes are made to a source, target, transformation, mapplet, or mapping,
these objects will be reverted to their previously saved version.
Question #29

What does Auto-link by Name in Designer do?

It adds links between input and output ports across transformations with the same port
names (case sensitive).
Question #30

What does the Scale-to-Fit option do in Designer?


This option zooms the Designer workspace in or out to allow every object in a mapping to fit within the viewable workspace.
Question #31

Can you copy Informatica objects between mappings in different folders?


No
Question #32

When would you use a Joiner transformation rather than joining in a Source
Qualifier?
When you need to relate data from different databases or perhaps a source flat file.
Question #33

Can you copy a reusable transformation as a non-reusable transformation into a mapping?
Yes, by holding down the Ctrl key while dragging.
Question #34

What is the recommended order for optimizing Informatica PowerCenter performance tuning bottlenecks?
1. Target
2. Source
3. Mapping
4. Transformation

5. Session
6. Grid Deployments
7. PowerCenter Components
8. System
Question #35

Should we strive to create more or less re-usable transformations?


We should strive to create more re-usable transformations. Re-usability is a Velocity
best practice. Benefits include decreased development time and mapping consistency.
Question #36

Describe Data Concatenation?


We can bring together different pieces of the same record with data concatenation. This
is only possible if combining branches of the same source pipeline where neither branch
contains an active transformation.
Question #37

What are the rules for a self-join in an Informatica mapping?


1. We must place at least 1 transformation between the source qualifier and the joiner in at
least 1 branch.
2. Before joining, data must be pre-sorted by the join key.
3. We must configure the joiner to accept sorted input.
Question #38

What performs better, a single router transformation or multiple filter transformations? Why?
A single router transformation because a record is read into the transformation once
instead of multiple reads of the same row data with a filter transformation.

Question #39

What is an Informatica Task?


An Informatica task allows us to build PowerCenter workflows and worklets. There are a
variety of tasks that allow a developer to call code to be executed in a specific order. A
series of session tasks for example can be used to initiate mappings to extract,
transform, and load data.
Question #40

Describe an Informatica Workflow?


A workflow is developed in the Workflow Manager. It is constructed by connecting tasks in a logical fashion to execute specific sets of code (mappings, scripts, etc.). The finished workflow can then be started, which will begin to run the tasks within it in the order defined.
Question #41

List the different join types within a Joiner transformation and describe each
one?
Normal Join: keeps only matching rows based on the condition.
Master Outer Join: keeps all rows from the detail and matching rows from the master.
Detail Outer Join: keeps all rows from the master and matching rows from the detail.
Full Outer Join: keeps all rows from both master and detail.
Question #42

Can you copy objects between mappings in different PowerCenter folders?


No
Question #43

Can we make a reusable transformation non-reusable?


No, this process is non-reversible.
Question #44

What does the debugger option Next Instance do?

Runs until it reaches the next transformation or satisfies a breakpoint.


Question #45

What does the debugger option Step to Instance do?


Runs until it reaches a breakpoint or reaches a selected transformation.
Question #46

Can the debugger help you determine why a mapping is invalid?


No, the Designer output window will show you why a mapping is invalid, not the debugger.
Question #47

What is the Velocity best practice prefix naming standard for a shortcut object
and reusable transformation?
Reusable Transformation: re or RE
Shortcut Object: sc_ or SC_
Question #48

What is persistent lookup cache and its advantages?


Persistent lookup cache is stored on the server hard drive and is available in the next session. Since the data is stored on the Informatica server, performance is increased the next time the lookup is called, since a database query does not need to occur.
Question #49

What happens to data overflow when not enough memory is specified in the
index and data cache properties of lookup transformation?
It is written to hard disk.
Question #50

What is the rule of thumb when deciding whether or not to cache a lookup?
Cache the lookup if the number of mapping records passing through the lookup is large relative to the lookup table's record count (and size).

Question #51

What does the Find in Workspace feature allow us to do in Mapping Designer?


Perform string searches for the names of objects, tables, columns, or ports in the
currently open mapping.
Question #52

What does the View Object Dependencies feature in Designer allow us to do?
Developers can identify objects that may be affected by making changes to a mapping,
mapplet, and its sources/targets.
Question #53

What is the difference between PowerCenter variables and parameters?


Parameters in Informatica are constant values (with datatypes such as strings, numbers, etc.). Variables, on the other hand, can be constant or can change value within a single session run.
Informatica interview questions 1
We have a source table containing 3 columns: Col1, Col2, and Col3. There is only 1 row in the table as follows:

Col1  Col2  Col3
a     b     c

There is a target table containing only 1 column, Col. Design a mapping so that the target table contains 3 rows as follows:

Col
a
b
c
Informatica interview questions 2

There is a source table that contains duplicate rows. Design a mapping to load all the unique rows into 1 target and all the duplicate rows (with only 1 occurrence each) into another target.

Informatica interview questions 3
There is a source table containing 2 columns, Col1 and Col2, with data as follows:

Col1  Col2
a     l
b     p
a     m
a     n
b     q
x     y

Design a mapping to load a target table with the following values from the above-mentioned source:

Col1  Col2
a     l, m, n
b     p, q
x     y

Informatica interview questions 4
Design an Informatica mapping to load the first half of the records to 1 target while loading the other half to a separate target.
Informatica interview questions 5
A source table contains emp_name and salary columns. Develop an Informatica mapping to load all records with the 5th highest salary into the target table.
Informatica interview questions 6
Let's say I have a large number of records in the source table and I have 3 destination tables A, B, and C. I have to insert records 1 to 10 into A, then 11 to 20 into B, and 21 to 30 into C. Then again 31 to 40 into A, 41 to 50 into B, and 51 to 60 into C, and so on up to the last record.
Informatica interview questions 6
What are the validation rules for connecting transformations in Informatica?

Informatica interview questions 7
The source is a flat file, and we want to load unique and duplicate records separately into two separate targets.
Informatica interview questions 8
Input file

10
10
10
20
20
30
output file

1
2
3
1
2
1
Scenario: it counts the occurrences of each value. For example, in the above case the first 10 is counted as 1, the next 10 as 2, and when 20 comes the count starts again at 1.

Informatica interview questions 9


Input file

10
10
10
20
20
30
output file
1
2
3
4
5
6
Informatica interview questions 10
Input file


10
10
10
20
20
30
output file
1
1
1
2
2
3
Informatica interview questions 11
There are 2 tables (input tables):

table aa          table bb
id   name         id   name
101  ramesh       106  harish
102  shyam        103  hari
103  -            104  ram
104  -

output file
id   name
101  ramesh
102  shyam
103  hari
104  ram
Informatica interview questions 12
table aa (input file)

id   name
10   aa
10   bb
10   cc
20   aa
20   bb
30   aa

Output

id   name1   name2   name3
10   aa      bb      cc
20   aa      bb
30   aa
Informatica interview questions 14
table aa(input file)

id name
10 a
10 b
10 c
20 d
20 e

output
id name
10 abc
20 de
Informatica interview questions 15
In the below scenario how can I split the row into multiple depending on date range?
The source rows are as follows:
ID Value from_date(mm/dd) To_date(mm/dd)
1 $10 1/2 1/3
2 $5 1/5 1/8
3 $20 1/9 1/11
The target should be
ID Value Date
1 $10 1/2
1 $10 1/3
2 $5 1/5
2 $5 1/6
2 $5 1/7
2 $5 1/8
3 $20 1/9
3 $20 1/10
3 $20 1/11
What is the Informatica solution?
Informatica interview questions 16
How can the following be achieved with a single Informatica mapping?
* If the HEADER table has an error value or no value (NULL), then those records and their corresponding child records in the SUBHEADER and DETAIL tables should be rejected from the target (TARGET1, TARGET2, or TARGET3).
* If the HEADER table record is valid, but the SUBHEADER or DETAIL table record has an error value (NULL), then no data should be loaded into any of the targets TARGET1, TARGET2, or TARGET3.
* If the HEADER table record is valid and the SUBHEADER and DETAIL table records are also valid, only then should the data be loaded into the targets TARGET1, TARGET2, and TARGET3.

HEADER
C1 C2 C3 C4 C5 C6
1 ABC null null C1
2 ECI 756 CENTRAL TUBE C2
3 GTH 567 PINCDE C3
SUBHEADER
C1 C2 C3 C4 C5 C6
1 01001 VALUE3 748 543
1 01002 VALUE4 33 22
1 01003 VALUE6 23 11
2 02001 AAP1 334 443
2 02002 AAP2 44 22
3 03001 RADAR2 null 33
3 03002 RADAR3 null 234
3 03003 RADAR4 83 31
DETAIL
C1 C2 C3 C4 C5 C6
1 D01 TXXD2 748 543
1 D02 TXXD3 33 22
1 D03 TXXD4 23 11
2 D01 PXXD2 56 224
2 D02 PXXD3 666 332

TARGET1
2 XYZ 756 CENTRALTUBE CITY2
TARGET2
2 02001 AAP1 334 443
2 02002 AAP2 44 22
TARGET3
2 D01 PXXD2 56 224
2 D02 PXXD3 666 332

Informatica interview questions 17
If I had a source with unique & duplicate records like 1, 1, 2, 3, 3, 4, then I want to load the unique records (like 2 and 4) into one target and the duplicate records (like 1, 1, 3, 3) into another.

Informatica interview questions 18


I have 100 records in a relational table and I want to load the records into 3 targets: the first record goes to target 1, the second to target 2, the third to target 3, and so on. What are the transformations used in this?
Informatica interview questions 19
There are three columns, empid, salmonth, and sal, containing values such as
101, January, 1000
101, February, 1000
and so on for twelve rows. The required output contains 13 columns, empid, jan, feb, march, ..., dec, and the values are 101, 1000, 1000, 1000, etc.
Informatica interview questions 20
I have a source as a file or db table.

E-no   e-name    sal   dept
0101   Max       100   1
0102   steve     200   2
0103   Alex      300   3
0104   Sean      76    1
0105   swaroop   120   2

If I want to run one session 3 times:
First run: it should populate department 1.
Second run: only department 2.
Third run: only department 3.
How can we achieve this?
