
Talend Touch Points

1. Concurrent Load (Talend ETL):

If two different child Jobs are called through tRunJob components, the Jobs can run in parallel.

Apart from the master Job, "Enable Parallel Execution" also needs to be set in EACH child Job. Doing this allows the parent and child Jobs to run in parallel.
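Conceptually, the generated code runs each child Job on its own thread. The following is a minimal sketch of that idea (the ExecutorService usage and the child-job names are illustrative, not Talend's actual generated API):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelChildJobs {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Each Runnable stands in for one tRunJob call to a child Job.
        pool.submit(() -> System.out.println("ChildJobA running on " + Thread.currentThread().getName()));
        pool.submit(() -> System.out.println("ChildJobB running on " + Thread.currentThread().getName()));

        pool.shutdown();                        // accept no new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES); // wait for both children
        System.out.println("Parent Job continues after both children finish.");
    }
}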

2. Lookups:
Different lookup methodologies are available in Talend; we can use any of them based on our requirements.

2.1. Efficient Lookups with Talend Open Studio's Hash Components
If you are using the same data source in several Talend Open Studio subjobs, consider loading the data into an internal data structure for use throughout the job.
A HashMap is a Java data structure maintained in RAM. Talend Open Studio lets you load and manipulate data in HashMaps using the tHashInput and tHashOutput components. Because the data lives in RAM, lookups are faster, especially if secondary storage is based on spinning hard disks.
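As a rough illustration of what the Hash components do under the hood (not Talend's actual generated code), this sketch loads a date lookup once into a HashMap and then probes it from two separate flows; the key format and values are invented for the example:

import java.util.HashMap;
import java.util.Map;

public class HashLookupSketch {
    public static void main(String[] args) {
        // Load the lookup once into RAM (tHashOutput's role).
        // Key "year-month-day" -> business-date info; values are illustrative.
        Map<String, String> businessDates = new HashMap<>();
        businessDates.put("2023-1-15", "weekday=Y;payday=Y;holiday=N");
        businessDates.put("2023-1-16", "weekday=Y;payday=N;holiday=Y");

        // Any number of subjobs can now probe the map (tHashInput's role)
        // without re-reading the database.
        String hireInfo = businessDates.get("2023-1-15");        // hires flow
        String terminationInfo = businessDates.get("2023-1-16"); // terminations flow
        System.out.println(hireInfo);
        System.out.println(terminationInfo);
    }
}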
Without a HashMap
This Talend Open Studio job loads data from two spreadsheets, EmployeeHires and EmployeeTerminations, into a target table, EmployeeActions. The spreadsheet sources contain a date (hireDate and terminationDate) that is used as a key into a table called BusinessDates. Although the date could simply be carried over into the target table (without the lookup), many data warehouses maintain date information in a separate table, because calendar-related business information is merged with the timestamp.
This data includes flags for Holiday, Payday, and Weekday that supplement the timestamp fields (Month, Day, Year, Quarter). The spreadsheet has been loaded into the MS SQL Server table "BusinessDates".

Basic Date Fields PLUS Business-specific Information

A typical Talend Open Studio job will use BusinessDates as a lookup table. The main flow
comes from two sources: an Employee Hires spreadsheet, and an Employee Terminations
spreadsheet.

BusinessDates is Repeated for Each Subjob

The Employee Hires spreadsheet is similar to the Terminations spreadsheet, with hireDate replaced by terminationDate.

Employee Action Source (Hires)

Hash Alternative
The preceding job works: 5 Hire and 3 Termination records are written to the database. However, the job has a drawback: even if caching is enabled, the lookup is read at least once per subjob, hurting performance. An alternative is to use the Talend Hash components: tHashInput and tHashOutput.

Job Rewritten with Talend Hash Component

This version starts with the tHashOutput_1 component, which is loaded by a database input, BusinessDates. I flag three columns as keys: year, month, and day. This is for informational purposes; any fields can be used in the later tMap joins.
The tHashOutput component is configured as follows.

tHashOutput Configuration

The schema -- with the 3 informational keys -- used in the tHashOutput follows.

The tHashOutput schema can now be applied to joins (tMap) by adding tHashInput components as lookup flows. This is the configuration of tHashInput_1, which is identical to that of tHashInput_2. More inputs can be added for other data-loading subjobs.

tHashInput Configuration Links to tHashOutput

In the UI, you must define a schema for both the tHashInput and tHashOutput components. I do this by setting the schema to Repository, then changing it to Built-In and re-defining the keys from "id" to "year/month/day".
The join is performed as an Inner Join on three columns. Because the dates are represented differently, I use a TalendDate routine. Note the important +1, which deals with Java's zero-based month value.
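In the tMap, the join-key expressions use the TalendDate.getPartOfDate routine, for example TalendDate.getPartOfDate("MONTH", row1.hireDate) + 1 (row1.hireDate is an assumed flow column name). The runnable sketch below demonstrates the same zero-based month convention with plain java.util.Calendar:

import java.util.Calendar;
import java.util.GregorianCalendar;

public class MonthOffsetDemo {
    public static void main(String[] args) {
        Calendar cal = new GregorianCalendar(2023, Calendar.JANUARY, 15);
        int year  = cal.get(Calendar.YEAR);          // 2023
        // Calendar months are zero-based (JANUARY is 0), so add 1 to match
        // a lookup table that stores months as 1..12.
        int month = cal.get(Calendar.MONTH) + 1;     // 0 + 1 = 1
        int day   = cal.get(Calendar.DAY_OF_MONTH);  // 15 (already 1-based)
        System.out.println(year + "-" + month + "-" + day); // prints 2023-1-15
    }
}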

Multi-Part Join on a Hash Map Input

The job loads 8 records into the database. The Hires are flagged with "HI" and the Terminations with "TE".

Data Loading Results

RAM-based data structures can provide a performance improvement because slower spinning disks aren't involved in data reads. This is a good pattern when you're dealing with data from different sources (different databases, spreadsheets, etc.). If your data processing is table-to-table within the same database, huge performance improvements can be made with the ELT* components, which keep all processing inside the database and eliminate the network latency of pulling the data into Talend Open Studio's JVM.

2.1.1. Enabling tHashInput and tHashOutput


Many of the exercises rely on the use of the tHashInput and tHashOutput components. Talend 5.2.3 does not automatically enable these components for use in jobs. To enable them, follow these steps:
How to do it
1. On the main menu bar, navigate to File | Edit Project properties to open the properties dialog.
2. Select Designer, then Palette Settings.
3. Click the Technical folder, and then click the button shown in the following screenshot to add this folder to the Show panel.

4. Click OK to exit the project settings.


2.1.2. Loading multiple lookup flows in parallel

Warning
This feature is not available in the Map/Reduce version of tMap.
By default, when multiple lookup flows are handled in the tMap component, these lookup flows are loaded and processed one after another, according to the sequence of the lookup connections. When a large amount of data is processed, this slows down Job execution. To maximize Job execution performance, the tMap component allows parallel loading of multiple lookup flows.
To enable parallel loading of multiple lookup flows:
1. Double-click the tMap component to launch the Map Editor.
2. Click the Property Settings button at the top of the input area to open the [Property Settings] dialog box.
3. Select the Lookup in parallel check box and click OK to validate the setting and close the dialog box.
4. Click OK to close the Map Editor.

With this option enabled, all the lookup flows will be loaded and processed in
the tMap component simultaneously, and then the main input flow will be
processed.
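The effect is analogous to loading each lookup on its own thread instead of one after another, roughly like this sketch (the loader methods are hypothetical placeholders for the per-lookup read code):

import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelLookupLoad {
    // Hypothetical stand-ins for reading two lookup tables.
    static Map<String, String> loadLookupA() { return Map.of("k1", "a"); }
    static Map<String, String> loadLookupB() { return Map.of("k1", "b"); }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Both lookups load concurrently instead of sequentially.
        Future<Map<String, String>> a = pool.submit(ParallelLookupLoad::loadLookupA);
        Future<Map<String, String>> b = pool.submit(ParallelLookupLoad::loadLookupB);
        Map<String, String> lookupA = a.get(); // block until loaded
        Map<String, String> lookupB = b.get();
        pool.shutdown();
        // Only now does the main flow start processing, as in tMap.
        System.out.println(lookupA + " " + lookupB);
    }
}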

Note: This feature does not appear in the TOS version we are currently using (5.6.1 Big Data). According to a reply on the Talend forum, it is only available in Talend Integration Suite (the commercial version).

3. Recover Job execution in case of failure:


Talend Studio, together with Talend Administration Center, offers the concept of "recovery checkpoints" as a Job execution restore facility. Checkpoints are taken in anticipation of the potential need to restart a Job execution beyond its starting point.
General concept
Job execution processes can be time-consuming, as are backup and restore operations. If checkpointing is possible, it minimizes the time and effort wasted when the Job execution process is interrupted by failure.

With Talend Studio, you can set checkpoints in your Job design at specified points (On Subjob Ok and On Subjob Error connections) in the data flow.
With Talend Administration Center, in case of failure during Job execution, the execution process can be restarted from the latest checkpoint previous to the failure rather than from the beginning.
A two-step procedure
The only prerequisite for this facility in Talend Studio is to have trigger connections of the types On Subjob OK and On Subjob Error in your Job design.
To be able to recover Job execution in case of failure, you need to:
1. Define checkpoints manually on one or more of the trigger connections you use in the Job you design in Talend Studio. For more information on how to set recovery checkpoints, see section 3.1 below, How to set checkpoints on trigger connections.
2. In case of failure during the execution of the designed Job, recover the Job execution from the latest checkpoint previous to the failure through the Error Recovery Management page in Talend Administration Center.
3.1. How to set checkpoints on trigger connections
You can set "checkpoints" on one or more trigger connections of the types OnSubjobOK and OnSubjobError that you use to connect components together in your Job design. Doing so allows you, in case of failure during execution, to recover the execution of your Job from the last checkpoint before the error.
Checkpoints within a Job design can therefore be defined as reference points that can precede or follow a failure point during Job execution.
Note
The Error recovery settings can be edited only in a remote project

To define a checkpoint on a trigger connection in a Job, do the following:

1. In the design workspace, after designing your Job, click the trigger connection you want to set as a checkpoint. The Basic settings view of the selected trigger connection appears.
2. Click the Error recovery tab in the lower left corner to display the Error recovery view.
3. Select the Recovery Checkpoint check box to define the selected trigger connection as a checkpoint in the Job data flow. A checkpoint icon is appended to the selected trigger connection.
4. In the Label field, enter a name for the defined checkpoint. This name will display in the Label column of the Recovery checkpoints view in Talend Administration Center. For more information, see the Talend Administration Center User Guide.
5. In the Failure Instructions field, enter free text explaining the problem and what you think the failure reason could be. These instructions will display in the Failure Instructions column of the Recovery checkpoints view in Talend Administration Center. For more information, see the Talend Administration Center User Guide.
6. Save your Job before closing or running it so that the defined properties are taken into account.

Later, in case of failure during the execution of the designed Job, you can recover the Job execution from the latest checkpoint previous to the failure through the Error Recovery Management page in Talend Administration Center.

For more information, see the recovering job execution chapter in Talend
Administration Center User Guide.
Note: These features are mostly available in the subscription version.

4. How to launch a Job periodically (feature deprecated)


This section describes the Job scheduler feature, which is deprecated but still available for use.
The Scheduler view in Talend Studio helps you schedule a task that will launch a Job periodically via a task scheduling (crontab) program.
Through the Scheduler view, you can generate a crontab file that holds cron-compatible entries (the data required to launch the Job). These entries allow you to launch a Job periodically via the crontab program.
This Job launching feature is based on the crontab command, found in Unix and Unix-like operating systems. It can also be installed on any Windows system.
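The generated entries are ordinary cron lines. As a sketch, an entry launching an exported Job's shell script nightly at 2 a.m. might look like the following (the path and script name are hypothetical):

# minute hour day-of-month month day-of-week  command
0 2 * * * /opt/talend/jobs/EmployeeLoad/EmployeeLoad_run.sh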
To access the Scheduler view, click the Scheduler tab in the design workspace.
Note
If the Scheduler tab does not display in the tab system of your design workspace, go to Window > Show View... > Talend, and then select Scheduler from the list.

This view is empty if you have not scheduled any task to run a Job. Otherwise, it lists
the parameters of all the scheduled tasks.
The procedure below explains how to schedule a task in the Scheduler view to run a
specific Job periodically and then generate the crontab file that will hold all the data
required to launch the selected Job. It also points out how to use the generated file
with the crontab command in Unix or a task scheduling program in Windows.
1. Click the add icon in the upper right corner of the Scheduler view. The [Open Scheduler] dialog box displays.
2. From the Project list, select the project that holds the Job you want to launch periodically.
3. Click the three-dot button next to the Job field and select the Job you want to launch periodically.
4. From the Context list, if more than one exists, select the desired context in which to run the Job.
5. Set the time and date details necessary to schedule the task. The command that will be used to launch the selected Job is generated automatically and attached to the defined task.
6. Click Add this entry to validate your task and close the dialog box. The parameters of the scheduled task are listed in the Scheduler view.
7. Click the export icon in the upper right corner of the Scheduler view to generate a crontab file that will hold all the data required to start the selected Job. The [Save as] dialog box displays.
8. Browse to set the path to the crontab file you are generating, enter a name for the crontab file in the File name field, and then click Save to close the dialog box. The crontab file corresponding to the selected task is generated and stored locally in the defined path.
9. In Unix, paste the content of the crontab file into the crontab configuration of your Unix system; in Windows, install a task scheduling program that will use the generated crontab file to launch the selected Job.

You can use the delete icon to remove any of the listed tasks, and the edit icon to edit the parameters of any of the listed tasks.

Note: This feature is mostly available in the Talend Studio subscription model.


5. How to enable parallelization of data flows
The Parallelization vertical tab allows you to configure parameters for partitioning
a data flow into multiple threads, so as to handle those threads in parallel for better
performance. The options that appear in this tab vary depending on the sequence of
the row connection in the flow. In addition, different icons will appear in the row
connection according to the options you selected.
Note that the feature explained in this section is available only on the condition that
you have subscribed to one of the Talend Platform solutions or Big Data solutions.
In Talend Studio, parallelization of data flows means partitioning an input data flow of a Subjob into parallel processes and executing them simultaneously, so as to gain better performance. These processes are always executed on the same machine.

You can enable or disable the parallelization with a single click; the Studio then automates the implementation across the given Job.

The implementation of the parallelization involves four key steps, as follows:
1. Partitioning: in this step, the Studio splits the input records into a given number of threads.
2. Collecting: in this step, the Studio collects the split threads and sends them to a given component for processing.
3. Departitioning: in this step, the Studio groups the outputs of the parallel executions of the split threads.
4. Recollecting: in this step, the Studio captures the grouped execution results and outputs them to a given component.
Once the automatic implementation is done, you can alter the default configuration
by clicking the corresponding connection between components. The following
scenario presents more configuration details using a sample Job:

Scenario: sorting large customer data in parallel

The Job in this scenario sorts 20 million customer records by running parallelized executions.

Linking the components

1. In the Integration perspective of your Studio, create an empty Job from the Job Designs node in the Repository tree view. For further information about how to create a Job, see Chapter 4, Designing a Job.
2. Drop the following components onto the workspace: tFileInputDelimited, tSortRow and tFileOutputDelimited. The tFileInputDelimited component (labeled test file in this example) reads the 20 million customer records from a .txt file generated by tRowGenerator.
3. Connect the components using the Row > Main link.

For further information about the components used in this scenario, see the Talend Components Reference Guide.
Enabling parallelization

Right-click the start component of the Job (tFileInputDelimited in this scenario) and, from the contextual menu, select Set parallelization. The parallelization is then automatically implemented.
Splitting the input data flow
Configuring the input flow

1. Double-click tFileInputDelimited to open its Component view.
2. In the File name/Stream field, browse to, or enter the path to, the file storing the customer records to be read.
3. Click the [...] button to open the schema editor, where you need to create the schema to reflect the structure of the customer data.
4. Click the [+] button five times to add five rows and rename them as follows: FirstName, LastName, City, Address and ZipCode. In this scenario, we leave the data types at their default value, String. In real-world practice, you can change them depending on the types of the data to be processed.
5. Click OK to validate these changes and accept the propagation prompted by the pop-up dialog box.
6. If need be, complete the other fields of the Component view with values corresponding to the data to be processed. In this scenario, we leave them as is.
Configuring the partitioning step

1. Click the link representing the partitioning step to open its Component view, and click the Parallelization tab. The Partition row option has been automatically selected in the Type area. If you select None, you are actually disabling parallelization for the data flow handled over this link. Note that depending on the link you are configuring, a Repartition row option may become available in the Type area to repartition a data flow that has already been departitioned.

In this Parallelization view, you need to define the following properties:

Number of Child Threads: the number of threads you want to split the input records into. We recommend that this number be N-1, where N is the total number of CPUs or cores on the machine processing the data.

Buffer Size: the number of rows to cache for each of the threads generated.

Use a key hash for partitions: this allows you to use the hash mode to dispatch the input records into threads. Once you select it, the Key Columns table appears, in which you set the column(s) you want to apply the hash mode to. In the hash mode, records meeting the same criteria are dispatched into the same threads.

If you leave this check box clear, the dispatch mode is round-robin, meaning records are dispatched one by one to each thread, in a circular fashion, until the last record is dispatched. Be aware that this mode cannot guarantee that records meeting the same criteria go into the same threads. (See the sketch after these steps.)

2. In the Number of Child Threads field, enter the number of threads you want to partition the data flow into. In this example, enter 3 because we are using 4 processors to run this Job.

3. If required, change the value in the Buffer Size field to adapt the memory capacity. In this example, we leave the default.

At the end of this link, the Studio automatically collects the split threads to accomplish the collecting step.
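The difference between the two dispatch modes can be sketched as follows (a simplification of the partitioning step, not Talend's generated code):

public class DispatchModes {
    public static void main(String[] args) {
        String[] zipCodes = {"10001", "94105", "10001", "60601"};
        int numThreads = 3;

        // Key-hash mode: records with the same key always land in the same thread.
        for (String zip : zipCodes) {
            int thread = Math.abs(zip.hashCode() % numThreads);
            System.out.println("hash: " + zip + " -> thread " + thread);
        }

        // Round-robin mode: records are dealt out one by one, in a circle;
        // identical keys may end up in different threads.
        for (int i = 0; i < zipCodes.length; i++) {
            System.out.println("round-robin: " + zipCodes[i] + " -> thread " + (i % numThreads));
        }
    }
}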
Sorting the input records
Configuring tSortRow

1. Double-click tSortRow to open its Component view.
2. Under the Criteria table, click the [+] button three times to add three rows to the table.
3. In the Schema column column, select, for each row, the schema column to be used as the sorting criterion. In this example, select ZipCode, City and Address, sequentially.
4. In the Sort num or alpha? column, select alpha for all three rows.
5. In the Order asc or desc column, select asc for all three rows.
6. If the schema does not appear, click the Sync columns button to retrieve the schema from the preceding component.
7. Click Advanced settings to open its view.
8. Select Sort on disk. The Temp data directory path field and the Create temp data directory if not exist check box appear.
9. In Temp data directory path, enter the path to, or browse to, the folder you want to use to store the temporary data processed by tSortRow. With this approach, tSortRow can sort considerably more data.

Because the threads will overwrite each other if they write to the same directory, you need to create a folder for each thread, using its thread ID. To use the variable representing the thread IDs, click Code to open its view and search for thread_id; in this example, the variable is tCollector_1_THREAD_ID. Then enter the path using this variable. The path reads like:
"E:/Studio/workspace/temp"+((Integer)globalMap.get("tCollector_1_THREAD_ID"))

10. Ensure that the Create temp data directory if not exists check box is selected.
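To see why this expression yields a separate folder per thread, here is a small sketch of how it evaluates for three thread IDs (the base path is illustrative; running it will actually create the directories):

import java.io.File;

public class PerThreadTempDirs {
    public static void main(String[] args) {
        // Each parallel thread resolves the same expression to a different path,
        // e.g. thread ID 1 -> .../temp1, 2 -> .../temp2, 3 -> .../temp3.
        for (int threadId = 1; threadId <= 3; threadId++) {
            String dir = "E:/Studio/workspace/temp" + threadId; // String + Integer concatenates
            System.out.println(dir);
            new File(dir).mkdirs(); // what "Create temp data directory if not exists" does
        }
    }
}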
Configuring the departitioning step

1. Click the link representing the departitioning step to open its Component view, and click the Parallelization tab. The Departition row option has been automatically selected in the Type area. If you select None, you are actually disabling parallelization for the data flow handled over this link. Note that depending on the link you are configuring, a Repartition row option may become available in the Type area to repartition a data flow that has already been departitioned.

In this Parallelization view, you need to define the following properties:

Buffer Size: the number of rows to be processed before the memory is freed.

Merge sort partitions: this implements the merge sort algorithm to ensure the consistency of the data (see the sketch below).

2. If required, change the value in the Buffer Size field to adapt the memory capacity. In this example, we leave the default value.

At the end of this link, the Studio automatically accomplishes the recollecting step to group the execution results and output them to the next component.
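Conceptually, merging sorted partitions back into one globally ordered flow is a k-way merge. A minimal sketch of the idea (not Talend's generated code; the partition contents are invented):

import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class MergeSortedPartitions {
    public static void main(String[] args) {
        // Three partitions, each already sorted by its own thread.
        List<List<String>> partitions = List.of(
                List.of("10001", "60601"),
                List.of("30301", "94105"),
                List.of("02101", "75201"));

        // Min-heap of partition iterators, ordered by their current head element.
        PriorityQueue<PeekingIter> heap =
                new PriorityQueue<>(Comparator.comparing((PeekingIter p) -> p.head));
        for (List<String> p : partitions) {
            Iterator<String> it = p.iterator();
            if (it.hasNext()) heap.add(new PeekingIter(it.next(), it));
        }
        while (!heap.isEmpty()) {
            PeekingIter top = heap.poll();
            System.out.println(top.head); // emit the global minimum
            if (top.it.hasNext()) heap.add(new PeekingIter(top.it.next(), top.it));
        }
    }

    // Small holder pairing an iterator with its current head element.
    static class PeekingIter {
        final String head;
        final Iterator<String> it;
        PeekingIter(String head, Iterator<String> it) { this.head = head; this.it = it; }
    }
}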
Outputting the sorted data

1. Double-click the tFileOutputDelimited component to open its Component view.
2. In the File Name field, browse to the file, or enter the directory and the name of the file, that you want to write the sorted data to. At runtime, this file will be created if it does not exist.

Executing the Job

Press F6 to run the Job.
Once done, you can check the file holding the sorted data and the temporary folders created by tSortRow for sorting data on disk. These folders are emptied once the sorting is done.

Note: This option is mostly available in the Talend Studio subscription.

Some General Understandings of Talend Features:

Talend does not follow a client-server architecture, so there is no server configuration or installation required.

Talend comes as a package that does not need to be installed; the system's JVM just needs to be up and running.

On one machine, one Talend Open Studio instance can open only one workspace at a time; the same workspace cannot be shared concurrently by another user on the same machine.

Talend does not store versions automatically. We need to define them while creating or editing Jobs.

Unlike some other ETL tools, Talend does not require a Talend service to be running on the machine where a Job runs or is scheduled; a configured Java runtime is enough for a Talend job to run.

Talend jobs are platform independent.

When building a job, Talend always produces a ZIP file that packages everything needed to run the job on any platform, e.g. a .bat file for Windows, a .sh file for Unix, plus the supporting JARs for job execution.

In Talend we can schedule at any granularity: package many subjobs into one build and schedule it, or schedule individual jobs.

Talend generates Java code while you design, so to debug we can go to the code behind each job and resolve errors by reading the code itself (this requires a little Java knowledge).

There is an option to develop a custom component as per our needs by filling in the template provided by Talend, following the documented steps, and customizing the code to implement the required feature.

Talend has many individual components built in or readily available in the Talend marketplace to accomplish many small tasks, which eases the effort of writing complex expressions or algorithms.

If we are familiar with Java, we can use Java code inside a Talend job via the Java components provided by Talend to accomplish tasks (a sketch follows this list).

Like other ETL and ELT tools, it supports the standard data integration functionality.
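For instance, a tJava component can hold plain Java that reads Job context values and shares data with later components through the globalMap, along these lines (a fragment as it would appear inside the component, not a standalone class; the context variable name is hypothetical):

// Inside a tJava component:
// read a context variable defined on the Job...
String env = context.environment;   // hypothetical context variable
System.out.println("Running against: " + env);

// ...and pass a value to later components via the globalMap.
globalMap.put("ROW_LIMIT", 1000);
Integer limit = (Integer) globalMap.get("ROW_LIMIT");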
