All applications run under UNIX, Windows NT, or Windows 2000, and all Teradata software
runs under PDE. All share the resources of CPU and memory on the node.
AMPs and PEs are virtual processors running under control of the PDE. Their numbers
are software configurable. In addition to user applications, gateway software and channel
driver support may also be running.
The Teradata RDBMS has a "shared-nothing" architecture, which means that the vprocs
(which are the PEs and AMPs) do not share common components. For example, each
AMP manages its own dedicated memory space (taken from the memory pool) and the
data on its own vdisk -- these are not shared with other AMPs. Each AMP uses system
resources independently of the other AMPs so they can all work in parallel for high system
performance overall.
Symmetric Multi-Processor (SMP): A single node is a Symmetric Multi-Processor (SMP) system.
Massively Parallel Processing (MPP): When multiple SMP nodes are connected to form
a larger configuration, we refer to this as a Massively Parallel Processing (MPP) system.
Session Control
The major functions performed by Session Control are logon and logoff. Logon takes a textual request for
session authorization, verifies it, and returns a yes or no answer. Logoff terminates any ongoing activity and
deletes the session's context.
Parser
The Parser interprets SQL statements, checks them for proper SQL syntax and evaluates them semantically.
The PE also consults the Data Dictionary to ensure that all objects and columns exist and that the user has
authority to access these objects.
Optimizer
The Optimizer is responsible for developing the least expensive plan to return the requested response set.
Processing alternatives are evaluated and the fastest alternative is chosen. This alternative is converted to
executable steps, to be performed by the AMPs, which are then passed to the dispatcher.
Dispatcher
The Dispatcher controls the sequence in which the steps are executed and passes the steps on to the
BYNET. It is composed of execution control and response-control tasks. Execution control receives the step
definitions from the Parser and transmits them to the appropriate AMP(s) for processing, receives status
reports from the AMPs as they process the steps, and passes the results on to response control once the
AMPs have
completed processing. Response control returns the results to the user. The Dispatcher sees that all AMPs
have finished a step before the next step is dispatched. Depending on the nature of the SQL request, a step
will be sent to one AMP, or broadcast to all AMPs.
The BYNET handles the internal communication of the Teradata RDBMS. All communication between PEs
and AMPs is done via the BYNET.
When the PE dispatches the steps for the AMPs to perform, they are dispatched onto the BYNET. The
messages are routed to the appropriate AMP(s) where results sets and status information are generated.
This response information is also routed back to the requesting PE via the BYNET.
Depending on the nature of the dispatch request, the communication may be a:
Broadcast: the message is routed to all nodes in the system.
Point-to-point: the message is routed to one specific node in the system.
Once the message is on a participating node, PDE handles the multicast (carrying the message to just the
AMPs that should get it). So, while a Teradata system does do multicast messaging, the BYNET hardware
alone cannot do it - the BYNET can only do point-to-point and broadcast between nodes.
FEATURES OF BYNET:
The BYNET has several unique features:
Fault tolerant: each network has multiple connection paths. If the BYNET detects an unusable path in either
network, it will automatically reconfigure that network so all messages avoid the unusable path. Additionally,
in the rare case that BYNET 0 cannot be reconfigured, hardware on BYNET 0 is disabled and messages are
re-routed to BYNET 1 (or equally distributed if there are more than two BYNETs present), and vice versa.
Load balanced: traffic is automatically and dynamically distributed between both BYNETs.
Scalable: as you add nodes to the system, overall network bandwidth scales linearly - meaning an increase
in system size without loss of performance.
High Performance: an MPP system typically has two or more BYNET networks. Because all networks are
active, the system benefits from the full aggregate bandwidth of all networks. Since the number of networks
can be scaled, performance can also be scaled to meet the needs of demanding applications. The
technology of the BYNET is what makes the Teradata parallelism possible.
The Access Module Processor (AMP)
The Access Module Processor (AMP) is a virtual processor. An AMP controls some portion of each
table on the system. AMPs do the physical work associated with generating an answer set, including sorting,
aggregating, formatting, and converting. An AMP can control up to 64 physical disks. The AMPs perform
all database management functions in the system. An AMP responds to Parser/Optimizer steps transmitted
across the BYNET by selecting data from or storing data to its disks. For some requests, the AMPs may
redistribute a copy of the data to other AMPs.
The Database Manager subsystem resides on each AMP. The Database Manager:
Receives the steps from the Dispatcher and processes the steps. It has the ability to:
Lock databases and tables.
Create, modify, or delete definitions of tables.
Insert, delete, or modify rows within the tables.
Retrieve information from definitions and tables.
Collects accounting statistics, recording accesses by session so
users can be billed appropriately.
Returns responses to the Dispatcher.
The Database Manager provides a bridge between that logical organization and the physical organization of
the data on disks. The Database Manager performs a space-management function that controls the use and
allocation of space.
A disk array is a configuration of disk drives that utilizes specialized controllers to manage and distribute
data and parity across the disks while providing fast access and data integrity.
Each AMP vproc must have access to an array controller that in turn accesses the physical disks. AMP
vprocs are associated with one or more ranks of data. The total disk space associated with an AMP vproc is
called a vdisk. A vdisk may have up to three ranks.
Teradata supports several protection schemes:
RAID Level 5: data and parity protection striped across multiple disks.
RAID Level 1: each disk has a physical mirror replicating the data.
RAID Level S: data and parity protection similar to RAID 5, but used for EMC disk arrays.
The disk array controllers are referred to as dual active array controllers, which means that both
controllers are actively used in addition to serving as backup for each other.
3.How is Teradata parallel?
Teradata is Parallel for the following reasons:
Each PE can support up to 120 user sessions in parallel.
Each session may handle multiple requests concurrently. While only one request at a time may be
active on behalf of a session, the session itself can manage the activities of 16 requests and their
associated answer sets.
The MPL (Message Passing Layer) is implemented differently for different platforms; this means that it
will always be well within the needed bandwidth for each particular platform's maximum throughput.
Each AMP can perform up to 80 tasks in parallel. This means that AMPs are not dedicated at any
moment in time to the servicing of only one request, but rather are multi-threading multiple requests
concurrently. Because AMPs are designed to operate on only one portion of the database, they must
operate in parallel to accomplish their intended results.
In addition to this, the optimizer may direct the AMPs to perform certain steps in parallel if there are
no dependencies between the steps. This means that an AMP might be concurrently performing
more than one step on behalf of the same request.
Query Parallelism:
Breaking the request into smaller components, all components being worked on at the same time,
with one single answer delivered. Parallel execution can incorporate all or part of the operations
within a query, and can significantly reduce the response time of an SQL statement, particularly if the
query reads and analyzes a large amount of data.
Query parallelism is enabled in Teradata by hash-partitioning the data across all the VPROCs
defined in the system. A VPROC provides all the database services on its allocation of data blocks. All
relational operations such as table scans, index scans, projections, selections, joins, aggregations,
and sorts execute in parallel across all the VPROCs simultaneously and unconditionally. Each
operation is performed on a VPROC's data independently of the data associated with the other
VPROCs.
4.Explain mechanism in data distribution and data retrieval
Data Distribution:
Teradata uses hash partitioning and distribution to randomly and evenly distribute data across all AMPs.
The rows of every table are distributed among all AMPs - and ideally will be evenly distributed among all
AMPs.
The rows of all tables are distributed across the AMPs according to their Primary Index value. The
Primary Index value goes into the hashing algorithm, and the output is a 32-bit
Row Hash. The high-order 16 bits are referred to as the bucket number and are used to
identify a hash map entry. The hash bucket is also referred to as the DSW (Destination
Selection Word). This entry, in turn, is used to identify the AMP that will be targeted. The
remaining 16 bits are not used to locate the AMP. Each hash map is simply an array that associates DSW
values (or bucket numbers) with specific AMPs.
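This chain can be observed directly: Teradata provides the hashing functions HASHROW, HASHBUCKET, and HASHAMP, which expose the row hash, the bucket number, and the target AMP for any Primary Index value. A sketch (the table and column names are illustrative):

SELECT HASHROW (Customer_Number)                         AS Row_Hash
     , HASHBUCKET (HASHROW (Customer_Number))            AS Bucket_Number
     , HASHAMP (HASHBUCKET (HASHROW (Customer_Number)))  AS Target_AMP
FROM Customer_Table;

Grouping by the HASHAMP expression and counting rows is a common way to check how evenly a table is distributed across the AMPs.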
To locate a row, the AMP file system searches through a memory-resident structure called the Master Index.
An entry in the Master Index will indicate that if a row with this Table ID and row hash exists, then it must be
on a specific disk cylinder.
The file system will then search through the designated Cylinder Index. There it will find an
entry that indicates that if a row with this Table ID and row hash exists, it must be in one specific data block
on that cylinder.
The file system then searches the data block until it locates the row(s) or returns a No Rows Found condition
code.
Data retrieval:
Retrieving data from the Teradata RDBMS simply reverses the storage model process. A request
made for data is passed on to a Parsing Engine(PE). The PE optimizes the request for efficient processing
and creates tasks for the AMPs to perform, which results in the request being satisfied. Tasks are then
dispatched to the AMPs via the BYNET. Often, all AMPs must participate in creating the answer set, such as
returning all rows of a table to a client application. Other times, only one or a few AMPs need to participate.
The PE will ensure that only the AMPs that need to participate are assigned tasks. Once the AMPs have been
given their assignments, they retrieve the desired rows from their respective disks. The AMPs will sort,
aggregate, or format if needed. The rows are then returned to the requesting PE via the BYNET. The PE
takes the returned answer set and returns it to the requesting client application.
When a user writes an SQL query that has a SI in the WHERE clause, the Parsing Engine will hash the
Secondary Index Value. The output is the Row Hash of the SI. The PE creates a request containing the Row
Hash and gives the request to the Message Passing Layer (which includes the BYNET software and
network). The Message Passing Layer uses a portion of the Row Hash to point to a bucket in the Hash Map.
That bucket contains an AMP number to which the PE's request will be sent. The AMP gets the request and
accesses the Secondary Index Subtable pertaining to the requested SI information. The AMP will check to
see if the Row Hash exists in the subtable and double check the subtable row with the actual secondary
index value. Then, the AMP will create a request containing the Primary Index Row ID and send it back to the
Message Passing Layer. This request is directed to the AMP with the base table row, and the AMP easily
retrieves the data row.
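A minimal sketch of this access path, with illustrative table and column names: a nonunique secondary index is created with CREATE INDEX, and a query that constrains the indexed column in the WHERE clause lets the PE use the SI subtable as described above.

CREATE INDEX (Last_Name) ON Customer_Table;

SELECT Customer_Number, Last_Name
FROM Customer_Table
WHERE Last_Name = 'Smith';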
Secondary indexes can be useful for :
Processing aggregates
value comparison
Joining tables
The client application is either written by a programmer or is one of Teradata's provided utility programs.
Many client applications are written as front ends for SQL submission, but they are also written for file
maintenance and report generation. Any client-supported language may be used, provided it can interface to
the Call Level Interface (CLI).
The Call Level Interface (CLI) is the lowest level interface to the Teradata RDBMS. It consists of system
calls which create sessions, allocate request and response buffers, create and de-block parcels of
information, and fetch response information to the requesting client.
The Teradata Director Program (TDP) is a Teradata-supplied program that must run on any client system
that will be channel-attached to the Teradata RDBMS. The TDP manages the session traffic between the
Call-Level Interface and the RDBMS. Its functions include session initiation and termination, logging,
verification, recovery, and restart, as well as physical input to and output from the PEs, (including session
balancing) and the maintenance of queues. The TDP may also handle system security.
The Host Channel Adapter is a mainframe hardware component that allows the mainframe to connect to an
ESCON or Bus/Tag channel.
The PBSA (PCI Bus ESCON Adapter) is a PCI adapter card that allows a WorldMark server to connect to an
ESCON channel.
The PBCA (PCI Bus Channel Adapter) is a PCI adapter card that allows a WorldMark server to connect to a
Bus/Tag channel.
14.What are the connections involved in Network attached system?
In network-attached systems, there are four major software components that play important roles in getting
the requests to and from the Teradata RDBMS.
The Call Level Interface (CLI) is a library of routines that resides on the client side. Client
application programs use these routines to perform operations such as logging on and off, submitting SQL
queries and receiving responses which contain the answer set. These routines are 98% the same in a
network-attached environment as they are in a channel-attached environment.
The Teradata ODBC (Open Database Connectivity) driver uses an open, standards-based
ODBC interface to provide client applications access to Teradata across LAN-based
environments. NCR has ODBC drivers for both UNIX and Windows-based applications.
The Micro Teradata Director Program (MTDP) is a Teradata-supplied program that must be linked to any
application that will be network-attached to the Teradata RDBMS. The MTDP performs many of the functions
of the channel based TDP including session management. The MTDP does not control session balancing
across PEs. Connect and Assign Servers that run on the Teradata system handle this activity.
The Micro Operating System Interface (MOSI) is a library of routines providing operating system
independence for clients accessing the RDBMS. By using MOSI, we only need one version of the MTDP to
run on all network-attached platforms.
15.How do you replace a null value with a default value while loading?
Using COALESCE function
Syntax: COALESCE( COL, 'DEFAULT')
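A sketch of this in an INSERT-SELECT load (the table and column names are illustrative):

INSERT INTO Target_Table (Customer_Number, Phone_Number)
SELECT Customer_Number
     , COALESCE(Phone_Number, 'UNKNOWN')
FROM Source_Table;

Every NULL Phone_Number in the source arrives in the target as the default value 'UNKNOWN'.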
16.What is COMPRESS?
The COMPRESS clause works in two different ways:
When issued by itself, COMPRESS causes all NULL values for that column to be compressed to zero
space.
When issued with an argument (e.g., COMPRESS "constant"), the COMPRESS clause will compress
every occurrence of constant in that column to zero space as well as cause every NULL value to be
compressed.
17.How many values can we compress in Teradata?
Up to 255 values (plus NULL) can be compressed per column.
Only fixed width columns can be compressed.
Primary index columns cannot be compressed.
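Both forms of the COMPRESS clause can appear in one table definition; a sketch with illustrative names and values:

CREATE TABLE Employee_Table
( Employee_Number  INTEGER
 ,City             CHAR(20) COMPRESS ('Chicago', 'New York', 'Los Angeles')
 ,Middle_Initial   CHAR(1)  COMPRESS )
UNIQUE PRIMARY INDEX (Employee_Number);

COMPRESS with a value list compresses those values and NULL; COMPRESS alone compresses only NULL.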
18.Difference between volatile and global temporary table?
Global Temporary Tables (GTT):
1. When they are created, the definition goes into the Data Dictionary.
2. When materialized, data goes into temp space.
3. That is why the data is active only until the session ends, while the definition remains until it is dropped
using a DROP TABLE statement. If dropped from some other session, it should be DROP TABLE ... ALL;
4. You can collect stats on a GTT.
Volatile Temporary Tables (VTT):
1. The table definition is stored in the system cache.
2. Data is stored in spool space.
3. That is why both the data and the table definition are active only until the session ends.
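A sketch of the two definitions (the table and column names are illustrative):

CREATE GLOBAL TEMPORARY TABLE Gtt_Orders
( Order_Id  INTEGER
 ,Amount    DECIMAL(10,2) )
PRIMARY INDEX (Order_Id)
ON COMMIT PRESERVE ROWS;

CREATE VOLATILE TABLE Vtt_Orders
( Order_Id  INTEGER
 ,Amount    DECIMAL(10,2) )
PRIMARY INDEX (Order_Id)
ON COMMIT PRESERVE ROWS;

Without ON COMMIT PRESERVE ROWS, the rows of either table are deleted at the end of each transaction.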
MultiLoad:
MultiLoad processes a job in five phases:
Phase 1: It gets the import file and checks the script.
Phase 2: It reads the records from the input file and stores them in the work tables.
Phase 3: In this Application phase, it locks the table headers.
Phase 4: The DML operations are performed on the target tables.
Phase 5: The table locks are released and the work tables are dropped.
MultiLoad allows nonunique secondary indexes and automatically rebuilds them after loading.
MultiLoad can load at most 5 tables at a time and can also update and delete data.
FastLoad:
FastLoad performs the loading of the data in two phases, and it does not need a work table, so it is
faster. It follows these steps to load the data into the table:
Phase 1: It moves all the records to the AMPs first, without any hashing.
Phase 2: After the END LOADING command is given, each AMP hashes its records and sends them to the appropriate AMPs.
FastLoad is used to load empty tables and is very fast; it can load only one table at a time.
27. Advantages of PPI
PPI: Partitioned Primary Index.
With a PPI, the rows on each AMP are grouped into partitions based on a partitioning expression
defined on the primary index column(s). A query that constrains the partitioning column needs to scan
only the relevant partitions rather than the whole table, and access is faster still when the PI
value itself is also supplied.
28. Disadvantages of PPI
If a query does not constrain the partitioning column, access via the primary index must probe every
partition, so much of the benefit of declaring the primary index as partitioned is lost.
In that case it can be better to use a secondary index instead of partitioning for better performance.
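A sketch of a PPI definition using RANGE_N (the names and date ranges are illustrative):

CREATE TABLE Order_Table
( Order_Id    INTEGER
 ,Order_Date  DATE
 ,Amount      DECIMAL(10,2) )
PRIMARY INDEX (Order_Id)
PARTITION BY RANGE_N (Order_Date BETWEEN DATE '2023-01-01'
                                 AND     DATE '2023-12-31'
                                 EACH INTERVAL '1' MONTH);

A query with WHERE Order_Date = DATE '2023-06-15' then needs to scan only the June partition on each AMP.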
29.Teradata joins?
Join Processing
A join is the combination of two or more tables in the same FROM of a single SELECT statement. When
writing a join, the key is to locate a column in both tables that is from a common domain. Like the
correlated subquery, joins are normally based on an equal comparison between the join columns.
The following is the original join syntax for a two-table join:
SELECT
[<table-name>.]<column-name>
[,<table-name>.<column-name> ]
FROM <table-name1> [ AS <alias-name1> ]
,<table-name2> [ AS <alias-name2> ]
[ WHERE [<table-name1>.]<column-name>= [<table-name2>.]<column-name> ]
The JOIN keyword is used in an SQL statement to query data from two or more tables, based on a
relationship between certain columns in these tables.
Common Join Types in Teradata
1.Self Join
2.Inner Join
3.Outer Join
The three formats of an OUTER JOIN are LEFT, RIGHT, and FULL.
Self Join
A Self Join is simply a join that uses the same table more than once in a single join operation. The
first requirement for this type of join is that the table must contain two different columns of the same
domain. This may involve de-normalized tables.
For instance, if the Employee table contained a column for the manager's employee number and
the manager is an employee, these two columns have the same domain. By joining on these two
columns in the Employee table, the managers can be joined to the employees.
Example:
SELECT Mgr.Last_name (Title 'Manager Name', format 'X(10)')
,Department_name (Title 'For Department ')
FROM Employee_table AS Emp
INNER JOIN Employee_table AS Mgr
ON Emp.Manager_Emp_ID = Mgr.Employee_Number
INNER JOIN Department_table AS Dept
ON Emp.Department_number = Dept.Department_number
ORDER BY 2 ;
INNER JOIN:
The INNER JOIN keyword returns rows when there is at least one match in both tables.
INNER JOIN Syntax:
SELECT column_name(s)
FROM table_name1
INNER JOIN table_name2
ON table_name1.column_name=table_name2.column_name
LEFT OUTER JOIN
The LEFT OUTER JOIN keyword returns all rows from the left table (table_name1), even if there are
no matches in the right table(table_name2).
LEFT OUTER JOIN Syntax:
SELECT column_name(s)
FROM table_name1
LEFT OUTER JOIN table_name2
ON table_name1.column_name=table_name2.column_name
RIGHT OUTER JOIN:
The RIGHT OUTER JOIN keyword returns all rows from the right table (table_name2), even if there
are no matches in the left table (table_name1).
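The third outer-join format, FULL OUTER JOIN, returns all rows from both tables, matching them where possible and extending the non-matching side with NULLs; a sketch in the same pattern as the examples above:

SELECT column_name(s)
FROM table_name1
FULL OUTER JOIN table_name2
ON table_name1.column_name=table_name2.column_name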
Product Join
It is very important to use an equal condition in the WHERE clause; otherwise you get a product join.
This means that each row of one table is joined to every qualifying row of the other table, so the number
of rows in the result is the mathematical product of the two row counts.
30. Difference between Primary index and secondary index?
1. A primary index cannot be created after table creation, whereas a secondary index can be created dynamically.
2. Primary index access is a 1-AMP operation; unique secondary index access is a 2-AMP operation, and
nonunique secondary index access is an all-AMP operation.
31. What are Journals?
Journaling is a data protection mechanism in Teradata. Journals are generated to maintain pre-images
and post-images of a DML transaction starting or ending at a checkpoint. When a DML
transaction fails, the table is restored back to the last available checkpoint using the journal
images.
There are two types of journals: (1) the permanent journal and (2) the transient journal.
The purpose of the permanent journal is to provide selective or full database recovery to a
specified point in time. It permits recovery from unexpected hardware or software disasters. The
permanent journal also reduces the need for full table backups that can be costly in both time and
resources.
1. Permanent journals are explicitly created at database and/or table creation time. This
journaling can be implemented depending upon the need and the available disk space.
PJ processing is a user-selectable option on a database which allows the user to select extra
journaling for changes made to a table. There are more options, and the data can be rolled
forward or backward (depending on whether you selected the correct options) at points of the customer's
choosing. They are permanent because the changes are kept until the customer deletes them or
unloads them to a backup tape. They are usually kept in conjunction with backups of the database
and allow partial rollback or roll-forward after corrupted data or an operational error, such as someone
deleting a month's worth of data because they messed up the WHERE clause.
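A sketch of enabling a permanent journal; the database, journal, and table names are illustrative, and the journal table is named at the database level and referenced in the table options:

CREATE DATABASE Sales_Db AS
  PERM = 1000000000
 ,DEFAULT JOURNAL TABLE = Sales_Db.Sales_Journal;

CREATE TABLE Sales_Db.Sales_Fact
 ,DUAL BEFORE JOURNAL
 ,JOURNAL TABLE = Sales_Db.Sales_Journal
( Sale_Id  INTEGER
 ,Amount   DECIMAL(10,2) )
PRIMARY INDEX (Sale_Id);

DUAL BEFORE JOURNAL keeps two copies of each before-image; AFTER JOURNAL options work the same way for post-images.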
2.Transient Journal
The transient journal permits the successful rollback of a failed transaction (TXN). Transactions
are not committed to the database until the AMPs have received an End Transaction request,
either implicitly or explicitly. There is always the possibility that the transaction may fail. If
so, the participating table(s) must be restored to their pre-transaction state.
The transient journal maintains a copy of before images of all rows affected by the transaction. In
the event of transaction failure, the before images are reapplied to the affected tables, then are
deleted from the journal, and a rollback operation is completed. In the event of transaction
success, the before images for the transaction are discarded from the journal at the point of
transaction commit.
Transient Journal activities are automatic and transparent to the user.
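A sketch of the transient journal at work in an explicit Teradata-mode transaction (the table and values are illustrative):

BT;  /* Begin Transaction: before-images start accumulating in the transient journal */
UPDATE Account_Table SET Balance = Balance - 100 WHERE Account_Id = 1;
UPDATE Account_Table SET Balance = Balance + 100 WHERE Account_Id = 2;
ET;  /* End Transaction: on commit, the before-images are discarded */

If any statement between BT and ET fails, the before-images in the transient journal are applied and the whole transaction is rolled back automatically.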
32.Teradata fast export script?
.LOGTABLE RestartLog1_fxp ;
.RUN FILE logon ;
.BEGIN EXPORT SESSIONS 4 ;
.LAYOUT Record_Layout ;
.FIELD in_City 1 CHAR(20) ;
.FIELD in_Zip * CHAR(5) ;
.IMPORT INFILE <input-file-name> LAYOUT Record_Layout ;
.EXPORT OUTFILE cust_acct_outfile2 ;
SELECT A.Account_Number
     , C.Last_Name
     , C.First_Name
     , A.Balance_Current
FROM Accounts A
INNER JOIN Accounts_Customer AC
INNER JOIN Customer C
ON C.Customer_Number = AC.Customer_Number
ON A.Account_Number = AC.Account_Number
WHERE A.City = :in_City
AND A.Zip_Code = :in_Zip
ORDER BY 1 ;
.END EXPORT ;
.LOGOFF ;
33.Teradata statistics.
Statistics collection is essential for the optimal performance of the Teradata query optimizer. The query
optimizer relies on statistics to help it determine the best way to access data. Statistics also help the
optimizer ascertain how many rows exist in tables being queried and predict how many rows will qualify for
given conditions. Lack of statistics, or outdated statistics, might result in the optimizer choosing a less-than-optimal method for accessing data tables.
Points:
1: Once a collect stats is done on a table (on an index or column), where is this information stored so
that the optimizer can refer to it?
Ans: Collected statistics are stored in DBC.TVFields or DBC.Indexes. However, you cannot query these two
tables.
2: How often collect stats has to be made for a table that is frequently updated?
Answer: You need to refresh stats when 5 to 10% of the table's rows have changed. Collecting stats can be
pretty resource-consuming for large tables, so it is always advisable to schedule the job at an off-peak
period, normally after approximately 10% of the data changes.
3: Once a collect stats has been done on the table, how can I be sure that the optimizer is considering
this before execution? i.e., until the next collect stats has been done, will the optimizer refer to this?
Ans: Yes, the optimizer will use stats data for the query execution plan if available. That's why stale stats
are dangerous, as they may mislead the optimizer.
4: How can I know the tables for which collect stats has been done?
Ans: Run the HELP STATISTICS command on the table, e.g. HELP STATISTICS table_name; This will give
you the date and time when stats were last collected. You will also see stats for the columns (for which
stats were defined) for the table. You can use Teradata Manager too.
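A sketch of the collection and inspection commands discussed above (the table and column names are illustrative):

COLLECT STATISTICS ON Customer_Table COLUMN (Customer_Number);
COLLECT STATISTICS ON Customer_Table INDEX (Last_Name);

HELP STATISTICS Customer_Table;

Re-running COLLECT STATISTICS with no column or index clause refreshes all previously defined statistics on the table.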
5: To what extent will there be performance issues when collect stats is not done? Can a
performance issue be related only to collect stats? Probably a hot AMP could be the reason for
lack of spool space, which leads to performance degradation.
Ans: 1st part: Teradata uses a cost-based optimizer, and cost estimates are done based on statistics. So if
you don't have statistics collected, the optimizer will use a dynamic AMP sampling method to get the stats.
If your table is big and the data is unevenly distributed, then dynamic sampling may not get the right
information, and your performance will suffer.
2nd part: No, performance issues can also be related to a bad selection of indexes (most importantly the PI)
and the access path of a particular query.
6: Also, what can lead to lack of spool space apart from a hot AMP?
Ans: One reason comes to mind: a product join on two big data sets may lead to lack of spool space.
34. Difference between SET and MULTISET tables?
There is a saying that a man with one watch always knows what time it is, but a man with two watches is
never sure. When Teradata was originally designed, it did not allow duplicate rows in a table. If any row in
the same table had the same values in every column, Teradata would throw one of the rows out. They
believed a second row was a mistake. Why would someone need two watches, and why would someone
need two rows exactly the same? This is SET theory, and a SET table kicks out duplicate rows.
The ANSI standard believed in a different philosophy. If two rows are entered into a table that are exact
duplicates then this is acceptable. If a person wants to wear two watches then they probably have a
good reason. This is a MULTISET table and duplicate rows are allowed. If you do not specify SET or
MULTISET, one is used as a default. Here is the issue: the default in Teradata mode is SET and the
default in ANSI mode is MULTISET.
Therefore, to eliminate confusion it is important to explicitly define which one is desired. Otherwise, you
must know in which mode the CREATE TABLE will execute in so that the correct type is used for each
table. The implication of using a SET or MULTISET table is discussed further.
SET and MULTISET Tables
A SET table does not allow duplicate rows so Teradata checks to ensure that no two rows in a table are
exactly the same. This can be a burden. One way around the duplicate row check is to have a column in
the table defined as UNIQUE. This could be a Unique Primary Index (UPI), Unique Secondary Index
(USI) or even a column with a UNIQUE or PRIMARY KEY constraint. Since all must be unique, a
duplicate row may never exist. Therefore, the check on either the index or constraint eliminates the
need for the row to be examined for uniqueness. As a result, inserting new rows can be much faster by
eliminating the duplicate row check.
However, if the table is defined with a NUPI and the table uses SET as the table type, now a duplicate
row check must be performed. Since SET tables do not allow duplicate rows a check must be
performed every time a NUPI DUP (duplicate of an existing row NUPI value) value is inserted or
updated in the table. Do not be fooled! A duplicate row check can be a very expensive operation in
terms of processing time. This is because every new row inserted must be checked to see if it is a
duplicate of any existing row with the same NUPI Row Hash value. The number of checks increases
exponentially as each new row is added to the table.
What is the solution? There are two: either make the table a MULTISET table (only if you want duplicate
rows to be possible) or define at least one column or composite columns as UNIQUE. If neither is an
option then the SET table with no unique columns will work, but inserts and updates will take more time
because of the mandatory duplicate row check.
Below is an example of creating a SET table:
CREATE SET TABLE TomC.employee
( emp        INTEGER
 ,dept       INTEGER
 ,lname      CHAR(20)
 ,fname      VARCHAR(20)
 ,salary     DECIMAL(10,2)
 ,hire_date  DATE )
UNIQUE PRIMARY INDEX(emp);
Notice the UNIQUE PRIMARY INDEX on the column emp. Because this is a SET table it is much more
efficient to have at least one unique key so the duplicate row check is eliminated.
The following is an example of creating the same table as before, but this time as a MULTISET table:
CREATE MULTISET TABLE employee
( emp        INTEGER
 ,dept       INTEGER
 ,lname      CHAR(20)
 ,fname      VARCHAR(20)
 ,salary     DECIMAL(10,2)
 ,hire_date  DATE )
PRIMARY INDEX(emp);
Notice also that the PI is now a NUPI because it does not use the word UNIQUE. This is important! As
mentioned previously, if the UPI is requested, no duplicate rows can be inserted. Therefore, it acts more
like a SET table. This MULTISET example allows duplicate rows. Inserts will take longer because of the
mandatory duplicate row check.
38. What is a macro? Advantages of it.
Macros: A macro is a predefined, stored set of one or more SQL commands and report-formatting
commands. Macros are used to simplify the execution of frequently used SQL commands. Macros
do not require permanent space.
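A sketch of defining and executing a macro (the macro, table, and column names are illustrative):

CREATE MACRO Get_Customer (cust_no INTEGER) AS
( SELECT Customer_Number, Last_Name
  FROM Customer_Table
  WHERE Customer_Number = :cust_no ; );

EXEC Get_Customer (1001);

The parameter is referenced inside the macro body with a leading colon, the same notation used for import variables in the utilities.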
39.What are the functions of AMPs in Teradata?
Each AMP is designed to hold a portion of the rows of each table. An AMP is responsible for the storage,
maintenance and retrieval of the data under its control. Teradata uses hash partitioning to randomly and
evenly distribute data across all AMPs for balanced performance
40. How Does Teradata Store Rows?
Teradata uses hash partitioning and distribution to randomly and evenly distribute data across all AMPs.
The rows of every table are distributed among all AMPs - and ideally will be evenly distributed among all
AMPs. Each AMP is responsible for a subset of the rows of each table.
Evenly distributed tables result in evenly distributed workloads.
41. Which one will take care when an AMP goes down?
1. The Down-AMP Recovery Journal starts when an AMP goes down, to restore the data for the down AMP.
2. Fallback provides redundant data: if one AMP in a cluster goes down, your queries are not affected,
because they use the data from the fallback rows. The down AMP cannot be updated, so the fallback
copies are used instead.
For example, if you run an update while the AMP is down, the fallback rows are updated. If you then run a
query while the AMP is still down, the query uses the updated fallback rows. When the down AMP becomes
active again, the Down-AMP Recovery Journal is used to bring its data up to date.
42.Which one will take care when a NODE goes down?
In the event of node failure, all virtual processors can migrate to another available node in the
clique. All nodes in the clique must have access to the same disk arrays.
43.What is the use of the EXPLAIN plan?
The EXPLAIN facility allows you to preview how Teradata will execute a requested query. It
returns a summary of the steps the Teradata RDBMS would perform to execute the request.
EXPLAIN also discloses the strategy and access method to be used, how many rows will be
involved, and its cost in minutes and seconds. Use EXPLAIN to evaluate query performance
and to develop an alternative processing strategy that may be more efficient. EXPLAIN works on
any SQL request. The request is fully parsed and optimized, but not run. The complete plan is
returned to the user in readable English statements.
EXPLAIN provides information about locking, sorting, row selection criteria, join strategy and
conditions, access method, and parallel step processing. EXPLAIN is useful for performance
tuning, debugging, pre-validation of requests, and for technical training.
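For example, prefixing any request with the EXPLAIN keyword returns the plan instead of running the query (table and column names hypothetical):

```sql
EXPLAIN
SELECT lname, salary
FROM Employee_table
WHERE dept_no = 100;
-- Returns the step-by-step plan (locks, access method,
-- estimated rows and time) in readable English;
-- the SELECT itself is parsed and optimized but never run.
```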
44.Use of COALESCE function?
The newer ANSI standard COALESCE can also convert a NULL to a zero. However, it can convert a NULL
value to any data value as well. The COALESCE searches a value list, ranging from one to many values,
and
returns the first Non-NULL value it finds. At the same time, it returns a NULL if all values in the list are
NULL.
To use COALESCE, the SQL must pass the name of a column to the function. The data in the
column is then compared against NULL. Although one column name is all that is required, normally more than
one column is passed to it. Additionally, a literal value, which is never NULL, can be listed last to
provide a default value if all of the preceding column values are NULL.
The syntax for the COALESCE follows:
SELECT COALESCE(<column-list>)
FROM <table-name>
GROUP BY 1 ;
In the above syntax the <column-list> is a list of columns. It is written as a series of column names
separated by commas.
SELECT
COALESCE(NULL,0) AS Col1
,COALESCE(NULL,NULL,NULL) AS Col2
,COALESCE(3) AS Col3
,COALESCE('A',3) AS Col4 ;
45.What is the difference between a role, a privilege and a profile?
A role can be assigned a collection of access rights in the same way a user can.
You then grant the role to a set of users, rather than grant each user the same rights.
This cuts down on maintenance, adds standardisation (hence reducing erroneous access to sensitive data)
and reduces the size of the dbc.allrights table, which is very important in reducing DBC blocking in a large
environment.
Profiles assign different characteristics to a user, such as spool space, perm space and account strings.
Again this helps with standardisation. Note that spool assigned to a profile will overrule spool assigned on a
CREATE USER statement. Check the online manuals for the full list of properties.
A privilege is an individual access right, granted and revoked with Data Control Language (DCL). DCL can
selectively limit a user's ability to retrieve, add, or modify data; it is used to grant and revoke access
privileges on tables and views.
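A minimal sketch of the three concepts (all user, role, profile and object names are hypothetical):

```sql
-- Role: a named collection of access rights
CREATE ROLE hr_read;
GRANT SELECT ON hr_db TO hr_read;
GRANT hr_read TO user1, user2;        -- grant the role, not each right

-- Profile: a named set of user characteristics
CREATE PROFILE analyst_prof AS
  SPOOL = 2000000000,                 -- overrides spool on CREATE USER
  ACCOUNT = 'acct_analyst';
MODIFY USER user1 AS PROFILE = analyst_prof;

-- Privilege: an individual right, granted/revoked with DCL
GRANT UPDATE ON hr_db.employee TO user2;
REVOKE UPDATE ON hr_db.employee FROM user2;
```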
46.Diff between database and user?
Both may own objects such as tables, views, macros, procedures, and functions. Both users and databases
may hold privileges. However, only users may log on, establish a session with the Teradata Database, and
submit requests.
A user performs actions whereas a database is passive. Users have passwords and startup strings;
databases do not. Users can log on to the Teradata Database, establish sessions, and submit SQL
statements; databases cannot.
Creator privileges are associated only with a user because only a user can log on and submit a CREATE
statement. Implicit privileges are associated with either a database or a user because each can hold an
object and an object is owned by the named space in which it resides.
47.How many MultiLoad scripts are required for the below scenario?
First I want to load data from source to a volatile table.
After that I want to load data from the volatile table to a permanent table.
Note that MultiLoad cannot load a volatile table: a volatile table exists only within the session that created it, while MultiLoad logs on its own sessions. The first step would therefore be done with BTEQ (or the volatile table replaced with a permanent staging table), and MultiLoad used only for the load into the permanent table.
48.What are the types of CASE statements available in Teradata?
The CASE function provides an additional level of data testing after a row is accepted by the WHERE
clause. The additional test allows for multiple comparisons on multiple columns with multiple outcomes.
It also incorporates logic to handle the situation in which none of the values compares equal.
Teradata supports two forms: the valued CASE, which compares a single expression against a list of
values, and the searched CASE, which evaluates arbitrary conditions.
When using CASE, each row retrieved is evaluated once by every CASE function. Therefore, if two
CASE operations are in the same SQL statement, each row has a column checked twice, producing two
distinct values.
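As a sketch (table, columns and values hypothetical), both forms appear below:

```sql
SELECT lname
       -- valued CASE: compares one expression to literal values
       ,CASE dept_no
          WHEN 100 THEN 'Sales'
          WHEN 200 THEN 'Support'
          ELSE 'Other'               -- handles "none compare equal"
        END AS dept_name
       -- searched CASE: arbitrary conditions, possibly on other columns
       ,CASE
          WHEN salary >= 100000 THEN 'High'
          WHEN salary >= 50000  THEN 'Medium'
          ELSE 'Low'
        END AS salary_band
FROM Employee_table;
```

Each retrieved row is evaluated once by each of the two CASE expressions above.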
56. Diff between logical and physical data modeling?
Logical Versus Physical Database Modeling
After all business requirements have been gathered for a proposed database, they must be modeled. Models
are created to visually represent the proposed database so that business requirements can easily be
associated with database objects to ensure that all requirements have been completely and accurately
gathered. Different types of diagrams are typically produced to illustrate the business processes, rules,
entities, and organizational units that have been identified. These diagrams often include entity relationship
diagrams, process flow diagrams, and server model diagrams. An entity relationship diagram (ERD)
represents the entities, or groups of information, and their relationships maintained for a business. Process
flow diagrams represent business processes and the flow of data between different processes and entities
that have been defined. Server model diagrams represent a detailed picture of the database as being
transformed from the business model into a relational database with tables, columns, and constraints.
Basically, data modeling serves as a link between business needs and system requirements.
Two types of data modeling are as follows:
Logical modeling
Physical modeling
If you are going to be working with databases, then it is important to understand the difference between
logical and physical modeling, and how they relate to one another. Logical and physical modeling are
described in more detail in the following subsections.
Logical Modeling
Logical modeling deals with gathering business requirements and converting those requirements into a
model. The logical model revolves around the needs of the business, not the database, although the needs
of the business are used to establish the needs of the database. Logical modeling involves gathering
information about business processes, business entities (categories of data), and organizational units. After
this information is gathered, diagrams and reports are produced including entity relationship diagrams,
business process diagrams, and eventually process flow diagrams. The diagrams produced should show the
processes and data that exists, as well as the relationships between business processes and data. Logical
modeling should accurately render a visual representation of the activities and data relevant to a particular
business.
The diagrams and documentation generated during logical modeling are used to determine whether the
requirements of the business have been completely gathered. Management, developers, and end users alike
review these diagrams and documentation to determine if more work is required before physical modeling
commences.
Typical deliverables of logical modeling include
Physical Modeling
Physical modeling involves the actual design of a database according to the requirements that were
established during logical modeling. Logical modeling mainly involves gathering the requirements of the
business, with the latter part of logical modeling directed toward the goals and requirements of the database.
Physical modeling deals with the conversion of the logical, or business model, into a relational database
model. When physical modeling occurs, objects are being defined at the schema level. A schema is a group
of related objects in a database. A database design effort is normally associated with one schema.
During physical modeling, objects such as tables and columns are created based on entities and attributes
that were defined during logical modeling. Constraints are also defined, including primary keys, foreign keys,
other unique keys, and check constraints. Views can be created from database tables to summarize data or
to simply provide the user with another perspective of certain data. Other objects such as indexes and
snapshots can also be defined during physical modeling. Physical modeling is when all the pieces come
together to complete the process of defining a database for a business.
Physical modeling is database software specific, meaning that the objects defined during physical modeling
can vary depending on the relational database software being used. For example, most relational database
systems have variations with the way data types are represented and the way data is stored, although basic
data types are conceptually the same among different implementations. Additionally, some database
systems have objects that are not available in other database systems.
57. what is derived Table?
Derived tables are always local to a single SQL request. They are built dynamically using an additional
SELECT within the query. The rows of the derived table are stored in spool and discarded as soon as
the query finishes. The DD has no knowledge of derived tables. Therefore, no extra privileges are
necessary. Its space comes from the user's spool space.
Following is a simple example using a derived table named DT with a column alias called avgsal and its
data value is obtained using the AVG aggregation:
SELECT *
FROM (SELECT AVG(salary) FROM Employee_table) DT(avgsal) ;
58.what is the use of WITH CHECK OPTION in Teradata?
In Teradata, the additional key phrase WITH CHECK OPTION indicates that the WHERE clause
conditions should be applied during the execution of an UPDATE or DELETE against the view.
This is not a concern if views are not used for maintenance activity due to restricted privileges.
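A sketch of the effect (view, table and column names hypothetical):

```sql
CREATE VIEW high_paid AS
  SELECT emp, lname, salary
  FROM Employee_table
  WHERE salary > 50000
  WITH CHECK OPTION;

-- Rejected: the view's WHERE condition is re-applied during the
-- UPDATE, and the new salary would fall outside the view's rows.
UPDATE high_paid SET salary = 40000 WHERE emp = 123;
```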
59.what is soft referential integrity and batch referential integrity?
Soft RI is just an indication that there is a PK-FK relationship between the columns; it is not enforced on
the Teradata side.
But having it would help in cases like join processing.
Batch:
- Tests an entire insert, delete, or update batch operation for referential integrity.
- If insertion, deletion, or update of any row in the batch violates referential integrity, then parsing engine
software rolls back the entire batch and returns an abort message.
Let's say that I had a table called X with some number of rows and I wanted to insert these rows into table Y
(INSERT INTO Y SELECT * FROM X). However, some of the rows violated an RI constraint that table Y had. From
reading the manuals, it seemed to me that if using standard RI, all of the valid rows would be inserted but the
invalid ones would not. But with batch RI (which is "all or nothing") I would expect nothing to get inserted
since it would check for problem rows up front and return an error right away.
If in fact there is no difference except in how Teradata processes things internally (i.e. where it checks for
invalid rows) then why would you want to use one over the other? Wouldn't you always want to use batch
since it does the checking up front and saves processing time?
Points:
Let's suppose that we have 3 dimensions and 1 fact table (like in the example above).
Let's suppose that the join index (or AJI) is based on the 3 dims and the fact table (all tables inner joined).
INTEGER
INTEGER
,lname
CHAR(20)
,fname
VARCHAR(20)
,salary
DECIMAL(10,2)
,hire_date DATE )
UNIQUE PRIMARY INDEX(emp);
71.What is a value-ordered NUSI?
When we define a value-ordered NUSI on a column, the rows in the secondary index subtable are sorted by
the secondary index data value rather than by row hash. The column should be of an integer or date type.
This is used for range queries and to avoid full table scans on large tables.
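A sketch, assuming a hypothetical Sales_table with a sale_date column:

```sql
-- Value-ordered NUSI: the subtable rows are sorted by data value
-- (ORDER BY VALUES) instead of by hash, which favors range scans.
CREATE INDEX (sale_date) ORDER BY VALUES (sale_date)
ON Sales_table;

-- A range query that can use the value-ordered subtable
-- instead of a full table scan:
SELECT *
FROM Sales_table
WHERE sale_date BETWEEN DATE '2004-01-01' AND DATE '2004-01-31';
```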
table. It allows the request to specify either an absolute number of rows or a percentage of rows to return.
Additionally, it provides an ability to return rows from multiple samples.
TOP Clause
The TOP clause is used to specify the number of records to return.
The TOP clause can be very useful on large tables with thousands of records, since returning a large number of
records can impact performance.
Example:
1.SELECT TOP 50 PERCENT * FROM EMP
2. SELECT TOP 2 * FROM EMP
77.How to improve performance of the query
78.Explain Primary Index and how do we select that
The Primary Index determines which AMP stores an individual row of a table. The PI data is converted
into the Row Hash using a mathematical hashing formula. The result is used as an offset into the Hash
Map to determine the AMP number. Since the PI value determines how the data rows are distributed
among the AMPs, requesting a row using the PI value is always the most efficient retrieval mechanism
for Teradata.
Points:
- It determines how data will be distributed and is also the most efficient access path.
79.What is difference between Role, Privilege and profile
A role can be assigned a collection of access rights in the same way a user can.
You then grant the role to a set of users, rather than grant each user the same rights.
This cuts down on maintenance, adds standardisation (hence reducing erroneous access to sensitive data)
and reduces the size of the dbc.allrights table, which is very important in reducing DBC blocking in a large
environment.
Profiles assign different characteristics to a user, such as spool space, perm space and account strings.
Again this helps with standardisation. Note that spool assigned to a profile will overrule spool assigned on a
CREATE USER statement. Check the online manuals for the full list of properties.
Data Control Language is used to restrict or permit a user's access. It can selectively limit a user's ability to
retrieve, add, or modify data. It is used to grant and revoke access privileges on tables and views.
80.What are different spaces in Teradata and difference ?
Perm Space
Temp Space
Spool Space
Perm Space :All databases have a defined upper limit of permanent space.
Permanent space is used for storing the data rows of tables. Perm space is not pre-allocated. It
represents a maximum limit.
Temp Space:
Temporary space is used to hold the rows of materialized global temporary tables. Like perm space, it is a
defined upper limit rather than a pre-allocation.
Spool Space:
All databases also have an upper limit of spool space. If there is no limit defined for a particular
database or user, limits are inherited from parents. Theoretically, a user could use all unallocated
space in the system for their query. Spool space is temporary space used to hold intermediate
query results or formatted answer sets to queries. Once the query is complete, the spool space is
released.
Example: You have a database with total disk space of 100GB. You have
84.What is difference between database and user in Teradata. what are the things you can do or can
not do in both.
Both may own objects such as tables, views, macros, procedures, and functions. Both users and databases
may hold privileges. However, only users may log on, establish a session with the Teradata Database, and
submit requests.
A user performs actions whereas a database is passive. Users have passwords and startup strings;
databases do not. Users can log on to the Teradata Database, establish sessions, and submit SQL
statements; databases cannot.
Creator privileges are associated only with a user because only a user can log on and submit a CREATE
statement. Implicit privileges are associated with either a database or a user because each can hold an
object and an object is owned by the named space in which it resides.
85.What is Checkpoint ?
86.When do you use BTEQ. What other softwares have you used or can we use rather than BTEQ.
When the query performs operations on a smaller amount of data in a table, we go for BTEQ.
It supports any kind of SQL operation: SELECT, UPDATE, INSERT and DELETE.
It can be used for import, export and reporting purposes.
Macros and stored procedures can also be run using BTEQ.
The other utilities which we can use instead of BTEQ for loading purposes are FastLoad and MultiLoad,
and for exporting, FastExport. But these are used when accessing large amounts of data.
87.How many type of files have you loaded and their differences. (Fixed and Variable) ?
88.How do you execute your jobs in Teradata Environment.
In a channel environment, i.e. mainframes, the load utilities can be executed through JCL.
In a network environment, i.e. from a command prompt, the load scripts can be run with the following command:
<utility name> < <script name>
89.What was the environment of your latest project (Number of Amps, Nodes, Teradata Server
Number etc)
Number of Amps production and integration 24 development 12
Number of nodes - production and integration 4 development 2
90.What is the process to restart MultiLoad if it fails?
If MultiLoad failed in the acquisition phase, just rerun the job.
If MultiLoad failed in the application phase:
a) Drop the error tables, work tables and log table, release the MLOAD lock if required, and
resubmit the job from the .BEGIN IMPORT MLOAD statement onwards.
b) If the table is fallback protected, you need to make sure to remove fallback and use the
RELEASE MLOAD <table> IN APPLY SQL. Then resubmit the job.
Whenever an insert/update/delete is done on the table, the indexes will also need to be updated and maintained.
Indexes cannot be accessed directly by users. Only the optimizer has access to the index.
92.What is the difference between MultiLoad, FastLoad and TPump?
FastLoad loads only empty tables (no secondary indexes on the target, no duplicate rows loaded) and is
the fastest way to populate a new table. MultiLoad can insert, update, delete and upsert against up to
five populated tables in one job, using work tables and block-level operations. TPump applies
row-at-a-time changes with row-hash locking, so it can trickle-feed a table while users are querying it.
93.what are the different functions you do in BTEQ (Errorcode, ErrorLevel, etc) ?
Error Level: assigns a severity to errors.
You can assign an error level (severity) for each error code returned,
and decisions can be based on the error level.
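A sketch of a BTEQ fragment using these settings (table names hypothetical; 3807 is the real Teradata "object does not exist" error code):

```sql
-- Treat "object does not exist" (3807) as harmless so a
-- conditional DROP does not abort the script
.SET ERRORLEVEL 3807 SEVERITY 0
DROP TABLE stage_emp;

CREATE TABLE stage_emp (emp INTEGER, lname CHAR(20))
PRIMARY INDEX (emp);

-- Branch on the error level of the previous statement
.IF ERRORLEVEL > 0 THEN .QUIT 99
```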
94.what is difference between ZEROIFNULL and NULLIFZERO ?
The ZEROIFNULL function passes zero when the incoming data is NULL.
The NULLIFZERO function passes NULL when the incoming data is zero.
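For example:

```sql
SELECT ZEROIFNULL(CAST(NULL AS INTEGER)) AS a  -- returns 0
       ,ZEROIFNULL(5)                    AS b  -- returns 5
       ,NULLIFZERO(0)                    AS c  -- returns NULL
       ,NULLIFZERO(5)                    AS d; -- returns 5
```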
95.What is RANGE_N?
RANGE_N is used in a partitioned primary index definition to specify the ranges of a column's values that
should be assigned to each partition.
The number of partitions = the number of ranges specified + NO RANGE + UNKNOWN:
NO RANGE - if the value does not belong to any range
UNKNOWN - for values such as NULLs
96.Explain PPI?
PPI: Partitioned Primary Indexes are created to divide a table into partitions based on ranges or values, as
required. The data is first hashed to the AMPs, then stored within each AMP ordered by partition. When a
single partition (or several partitions) is retrieved, it is still an all-AMP operation, but not a full-table
scan. This is especially effective for large tables partitioned on a date column. There is no extra overhead
on the system (no special subtables are created, etc.).
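A sketch combining RANGE_N with a PPI (table and column names hypothetical):

```sql
CREATE TABLE Sales_table
  (store_id  INTEGER
  ,sale_date DATE
  ,amount    DECIMAL(10,2))
PRIMARY INDEX (store_id)
PARTITION BY RANGE_N (
  sale_date BETWEEN DATE '2004-01-01' AND DATE '2004-12-31'
            EACH INTERVAL '1' MONTH,
  NO RANGE,     -- catch-all partition for out-of-range dates
  UNKNOWN);     -- partition for NULL sale_date

-- A date-range query touches only the relevant partitions on
-- each AMP instead of scanning the full table:
SELECT SUM(amount)
FROM Sales_table
WHERE sale_date BETWEEN DATE '2004-06-01' AND DATE '2004-06-30';
```

This yields 12 monthly ranges + NO RANGE + UNKNOWN = 14 partitions, matching the formula under RANGE_N above.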
Operator     Returns
UNION        All distinct rows selected by either query.
UNION ALL    All rows selected by either query, including duplicates.
INTERSECT    All distinct rows selected by both queries.
MINUS        All distinct rows selected by the first query but not the second.
UNION Example
The following statement combines the results with the UNION operator, which eliminates duplicate selected
rows. This statement shows that you must match datatype (using the TO_DATE and TO_NUMBER
functions) when columns do not exist in one or the other table:
SELECT part, partnum, to_date(null) date_in FROM orders_list1
UNION
SELECT part, to_number(null), date_in FROM orders_list2;
PART       PARTNUM  DATE_IN
---------- -------- --------
SPARKPLUG   3323165
SPARKPLUG            10/24/98
FUEL PUMP   3323162
FUEL PUMP            12/24/99
TAILPIPE    1332999
TAILPIPE             01/01/01
CRANKSHAFT  9394991
CRANKSHAFT           09/12/02
SELECT part
FROM orders_list1
UNION
SELECT part
FROM orders_list2;
PART
----------
SPARKPLUG
FUEL PUMP
TAILPIPE
CRANKSHAFT
MINUS Example
The following statement combines results with the MINUS operator, which returns only rows returned by the
first query but not by the second:
SELECT part
FROM orders_list1
MINUS
SELECT part
FROM orders_list2;
PART
----------
SPARKPLUG
FUEL PUMP
the same column is getting updated because of another flat file. Which utility will be more applicable
in this case?
TPump is better as it locks at the row level.
The table got loaded with wrong data using FastLoad and it failed. The error message shown was:
RDBMS error 2652: Operation not allowed: _db_._table_ is being Loaded. How to release the lock on
this table?
When the data got loaded completely and the table is still locked, submit another FastLoad script with
the BEGIN LOADING and END LOADING statements alone.
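The dummy script would look roughly like this (tdpid, credentials, table and error-table names are all hypothetical):

```sql
-- Dummy FastLoad: BEGIN/END LOADING with no INSERT completes the
-- paused load phases and releases the FastLoad lock on the table.
LOGON tdpid/user,password;
BEGIN LOADING mydb.mytable
  ERRORFILES mydb.mytable_e1, mydb.mytable_e2;
END LOADING;
LOGOFF;
```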
I need to create a delimited file using fastexport. As fast export do not support delimited format, so I
have written the following select to get the delimited output:
select
trim(col1) || '|' ||
trim(col2) || '|' ||
trim(col3) || '|' || ...........
...............................
trim(col50)
from table
but the above script prefix each line with 2 junk characters.
How to get the data without the junk characters.
The two leading bytes are the record-length indicator that FastExport writes by default. Exporting with
MODE RECORD FORMAT TEXT in the .EXPORT statement (with the concatenated result cast to a single
fixed CHAR column) should remove them.
When the FastLoad checkpoint value is <= 60 or > 60, how does that matter?
When the checkpoint value is <= 60, it indicates a time interval in minutes. If the value
is more than 60, it is treated as a number of records rather than a time.
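For example, in the BEGIN LOADING statement (table and error-table names hypothetical), the same keyword carries both meanings depending on the value:

```sql
BEGIN LOADING mydb.mytable
  ERRORFILES mydb.et_tbl, mydb.uv_tbl
  CHECKPOINT 15;       -- 15 <= 60: checkpoint every 15 minutes
-- CHECKPOINT 100000;  -- > 60: checkpoint every 100,000 records
```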
123. I am loading a delimited flat file with a time format as the following:
HH:MM PM/AM
Examples would be :
9:45 AM
10:25 PM
And there is no zero if the hours is a single integer value.
Is there any way that I could get the MLOAD acquisition-phase count in the MLOAD script? The MLOAD
support environment provides different variables (total inserts, updates, deletes, etc.) at the application
phase, but not at the acquisition phase.
Is there any way other than scanning the log file?
There are various system variables available for this:
SYSAPLYCNT
SYSNOAPLYCNT
SYSRCDCNT
SYSRJCTCNT
124. I have a requirement that when an error table gets generated during the MLOAD, I want to send an
email. How can I achieve this?
After the MLOAD, use a BTEQ script that queries the error table and quits with a chosen return code,
say 99, if rows are present; then have the OS send the mail when the return code is 99.
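A sketch of the BTEQ step (table names hypothetical; the mail command depends on your OS):

```sql
-- Check whether the MLOAD error table holds any rows
SELECT * FROM mydb.mytable_et;
.IF ACTIVITYCOUNT > 0 THEN .QUIT 99
.QUIT 0
```

The calling shell script then tests the return code, e.g. mailing the operator whenever it equals 99.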
I am using the following syntax to logon to Teradata Demo thru BTEQ/BTEQWin:
.logon demotdat/dbc,dbc;
and having the following error:
*** Error: Invalid logon!
*** Total elapsed time was 1 second.
Teradata BTEQ 08.02.00.00 for WIN32. Enter your logon or BTEQ command:
The hosts file shows the following:
TROUBLESHOOTING
Solution : Whenever you want to open a fresh batch id, you should first close the
existing batch id and then open a fresh one.
2) The source is a flat file and I am staging this flat file in Teradata.
I found that the leading zeros are truncated in Teradata. What could be the
reason?
Solution : The reason is that in Teradata you defined the column datatype
as INTEGER; that is why the leading zeros are truncated. So, change the target table
datatype to VARCHAR. The VARCHAR datatype won't truncate the leading zeros.
3) Can't determine current batch ID for Data Source 47.
Solution : For any fresh stage load you should open a batch id for the current
data source id.
4) Unique primary key violation on the CFDW_ECTL_CURRENT_BATCH table.
Solution : In the CFDW_ECTL_CURRENT_BATCH table a unique primary key is defined
on the ECTL_DATA_SRCE_ID and ECTL_DATA_SRCE_INST_ID columns. At any point of
time you should have only one record per ECTL_DATA_SRCE_ID,
ECTL_DATA_SRCE_INST_ID combination.
5) Can't insert a NULL value into a NOT NULL column.
Solution : First find all the NOT NULL columns in the target table, cross-verify
them with the corresponding source columns, identify which source column
is supplying the NULL value, and take the necessary action.
6) The source is a flat file and I am staging this flat file in Teradata.
I found that the leading zeros are truncated in Teradata. What could be the
reason?
Solution : The reason is that in Teradata you defined the column datatype
as INTEGER; that is why the leading zeros are truncated. So, change the target table
datatype to VARCHAR. The VARCHAR datatype won't truncate the leading zeros.
7) I am passing one record to the target lookup but the lookup is not returning
the matching record. I know that the record is present in the lookup. What action
will you take?
Solution : Use LTRIM and RTRIM in the lookup SQL override; this removes the
unwanted blank spaces. Then the lookup will find the matching record.
8) I am getting duplicate records for the natural key (ECTL_DATA_SRCE_KEY). What
will you do to eliminate duplicate records on the natural key?
Solution : We will concatenate 2, 3 or more source columns and check for
duplicate records. If you are not getting duplicates after concatenating, then
use those columns to populate the ECTL_DATA_SRCE_KEY column in the target.
9) Accti_id is a NOT NULL column in the AGREEMENT table. You are getting a NULL
value from the CFDW_AGREEMENT_XREF lookup. What will you do to eliminate
the NULL records?
Solution : After the stage load, I will populate the CFDW_AGREEMENT_XREF table (this
table basically contains surrogate keys). Once you populate the XREF table you
won't get any NULL records for the
Accti_id column.
10) Unique primary key violation on the CFDW_ECTL_BATCH_HIST table.
Solution : In the CFDW_ECTL_BATCH_HIST table a unique primary index is defined on
the ectl_btch_id column, so there should be only one unique record per
ectl_btch_id value.
11) When will you use the ECTL_PGM_ID column in the target lookup SQL override?
Solution : When you are populating a single target table (e.g. the AGREEMENT table)
from multiple mappings in the same Informatica folder, we use
ECTL_PGM_ID in the target lookup SQL override. This eliminates unnecessary
updating of records.
12) You defined the primary keys as per the ETL spec but you are getting
*Teradata decides itself whether to use the index or not - if you are not careful you spend time in
table updates to keep up an index which is not used at all (one cannot give the query optimizer hints
to use some index, though collecting statistics may affect the optimizer strategy).
*In the MP-RAS environment, look at the script "/etc/gsc/bin/perflook.sh". This will provide a
system-wide snapshot in a series of files. The GSC uses this data for incident analysis.
* When using an index one must make sure that the index condition is met in the subqueries using
IN, nested queries, or derived tables.
* Indication of the proper index use is found by explain log entry "a ROW HASH MATCH SCAN
across ALL-AMPS"
* If the index is not used, the result of the analysis is a 'FULL TABLE SCAN', where the
performance time grows as the size of the history table grows.
* Keeping up index information is a time/space-consuming issue. Sometimes Teradata is much
better when you "manually" imitate the index by just building it from scratch.
* Keeping up a join index might help, but you cannot MultiLoad to a table which is part of a join
index - loading with TPump or pure SQL is OK but does not perform as well. Dropping and recreating a join index on a big table takes time and space.
* when your Teradata "explain" gives '25' steps from your query (even without the update of the
results) and the actual query is a join of six or more tables
Case e.g.
We had already given up updating the secondary indexes - because we have not had much use for
them.
After some trials and errors we ended up with the strategy, where the actual "purchase frequency
analysis" is never made "directly" against the history table.
Instead:
1) There is a "one-shot" run to build the initial "customer's previous purchase" from the "purchase
history" - it takes time, but that time is saved later
2) The purchase frequency is calculated by joining the "latest purchase" with the "customer's
previous purchase".
3) When the "latest purchase" rows are inserted to the "purchase history" the "customer's previous
purchase" table is dropped and recreated by merging the "customer's previous purchase" with the
"latest purchase"
4) By following these steps the performance is not too fast yet (about 25 minutes in our two node
system) for a bunch of almost 1,000,000 latest receipts - but it is tolerable now.
(We also tested by adding both the previous and latest purchase to the same table, but because its
size was in average case much bigger than the pure "latest purchase", the self-join was slower in
that case)
*********
concurrent queries and concurrent users that can result from active warehousing and e-commerce
initiatives. Expected service levels vary widely among different groups of users, as do query types.
And, of course, the entire workload must scale upward linearly as the demand increases, ideally
with a minimum of effort required from users and systems staff. Here's a look at some of the most
frequent questions I receive on the subject of mixed workloads and concurrency requirements.
How do I balance the work coming in across all nodes of my Teradata
configuration?
You don't. Teradata automatically balances sessions across all nodes to evenly distribute work
across the entire parallel configuration. Users connect to the system as a whole rather than a specific
node, and the system uses a balancing algorithm to assign their sessions to a node. Balancing
requires no effort from users or system administrators.
Does Teradata balance the work queries cause?
The even distribution of data is the key to parallelism and scalability in Teradata. Each query
request is sent to all units of parallelism, each of which has an even portion of the data to process,
resulting in even work distribution across the entire system.
For short queries and update flow typical of Web interactions, the optimizer recognizes that only a
single unit of parallelism is needed. A query coordinator routes the work to the unit of parallelism
needed to process the request. The hashing algorithm does not cluster related data, but spreads it out
across the entire system. For example, this month's data and even today's data is evenly distributed
across all units of parallelism, which means the work to update or look at that data is evenly
distributed.
Will many concurrent requests cause bottlenecks in query coordination?
Query coordination is carried out by a fully parallel parsing engine (PE) component. Usually, one or
more PEs are present on each node. Each PE handles the requests for a set of sessions, and sessions
are spread evenly across all configured PEs. Each PE is multithreaded, so it can handle many
requests concurrently. And each PE is independent of the others, with no required cross-coordination. The number of users logged on and requests in flight are limited only by the number
of PEs in the configuration.
How do you avoid bottlenecks when the query coordinator must retrieve
information from the data dictionary?
In Teradata, the DBMS itself manages the data dictionary. Each dictionary table is simply a
relational table, parallelized across all nodes. The same query engine that manages user workloads
also manages the dictionary access, using all nodes for processing dictionary information to spread
the load and avoid bottlenecks. The PE even caches recently used dictionary information in
memory. Because each PE has its own cache, there is no coordination overhead. The cache for each
PE learns the dictionary information most likely to be needed by the sessions assigned to it.
With a large volume of work, how can all requests execute at once?
As in any computer system, the total number of items that can execute at the same time is always
limited to the number of CPUs available. Teradata uses the scheduling services Unix and NT
provide to handle all the threads of execution running concurrently. Some requests might also exist
on other queues inside the system, waiting for I/O from the disk or a message from the BYNET, for
example. Each work item runs in a thread; each thread gets a turn at the CPU until it needs to wait
for some external event or until it completes the current work. Teradata configures several units of
parallelism in each SMP node. Each unit of parallelism contains many threads of execution that
aren't restricted to a particular CPU; therefore, every thread gets to compete equally for the CPUs in
the SMP node.
There is a limit, of course, to the number of pieces of work that can actually have a thread allocated
in a unit of parallelism. Once that limit is reached, Teradata queues work for the threads. Each
thread is context free, which means that it is not assigned to any session, transaction, or request.
Therefore, each thread is free to work on whatever is next on the queue. The unit of work on the
queue is a processing step for a request. Combining the queuing of steps with context-free threads
allows Teradata to share the processing service equally across all the concurrent requests in the
system. From the users' point of view, all the requests in the system are running, receiving service,
and sharing system resources.
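The combination of context-free threads and a queue of request steps can be sketched in Python. This is a simplified model of the idea, not Teradata's implementation: any worker thread picks up whatever step is next on the shared queue, regardless of which session or request it belongs to.

```python
import queue
import threading

def run_steps(steps, num_threads=4):
    """Context-free workers: each thread takes the next step from the shared
    queue, so service is spread across all concurrent requests."""
    work = queue.Queue()
    for step in steps:            # each step is (request_id, callable)
        work.put(step)
    done = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                request_id, step_fn = work.get_nowait()
            except queue.Empty:
                return            # nothing left; thread is free for new work
            result = step_fn()
            with lock:
                done.append((request_id, result))
            work.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

No thread is bound to a request here; interleaving steps from many requests is what makes all of them appear to run at once.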
How does Teradata avoid resource contention and the resulting performance and
management problems?
Teradata algorithms are very resource efficient. Other DBMSs optimize for single-query
performance by giving all resources to the single query. But Teradata optimizes for throughput of
many concurrent queries by allocating resources sparingly and using them efficiently. This kind of
optimization helps avoid wide performance variations that can occur depending on the number of
concurrent queries.
When faced with a workload that requires more system resources than are available, Teradata tunes
itself to that workload. Thrashing, a common performance failure mode in computer systems,
occurs when the system has fewer resources than the current workload requires and begins using
more processing time to manage resources than to do the work. With most databases, a DBA would
tune the system to avoid thrashing. However, Teradata adjusts automatically to workload changes
by adjusting the amount of running work and internally pushing back incoming work. Each unit of
parallelism manages this flow control mechanism independently.
If all concurrent work shares resources evenly, how are different service levels
provided to different users?
The Priority Scheduler Facility (PSF) in Teradata manages service levels among different parts of
the workload. PSF allows granular control of system resources. The system administrator can define
up to five resource partitions; each partition contains four available priorities. Together, they
provide 20 allocation groups (AGs) to which portions of the workload are assigned by an attribute
of the logon ID for the user or application. The administrator assigns each AG a portion of the total
system resources and a scheduling policy.
For example, the administrator can assign short queries from the Web site a guaranteed 20 percent
of system resources and a high priority. In contrast, the administrator might assign medium priority
and 10 percent of system resources to more complex queries with lower response-time
requirements. Similarly, the administrator might assign data mining queries a low priority and five
percent of the total resources, effectively running them in the background. You can define policies
so that the resources adjust to the work in the system. For example, you could allow data mining
queries to take up all the resources in the system if nothing else is running.
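The proportional-allocation behavior in this example can be modeled in a few lines. This is a sketch of the idea only, not PSF syntax; the group names and weights are the hypothetical 20/10/5 percent figures from the paragraph above. Note how a group's share grows to 100 percent when it is the only active group.

```python
def resource_shares(allocation_groups, active):
    """Split resources among the active allocation groups in proportion to
    their assigned weights; weights of idle groups are redistributed, so a
    lone data-mining query can absorb the whole machine."""
    total = sum(w for g, w in allocation_groups.items() if g in active)
    return {g: w / total for g, w in allocation_groups.items() if g in active}
```

For instance, with groups weighted 20, 10, and 5, the data-mining group gets 5/35 of the system when everything is busy, but 100 percent when it runs alone.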
Unlike other scheduling utilities, PSF is fully integrated into the DBMS, not managed at the task or
thread level, which makes it easier to use for parallel database workloads. Because PSF is an
attribute of the session, it follows the work wherever it goes in the system. Whether that piece of
work is executed by a single thread in a single unit of parallelism or in 2,000 threads in 500 units of
parallelism, PSF manages it without system administrator involvement.
CPU scheduling is a primary component of PSF, using all the normal techniques (such as quantum
size, CPU queues by priority, and so on). However, PSF pervades the entire Teradata DBMS.
There are many queues inside a DBMS handling a large-volume mixed workload. All of those
queues are prioritized based on the priority of the work. Thus, a high priority query entered after
several lower priority requests that are awaiting their turn to run will go to the head of the queue
and will be executed first. I/O is managed by priority. Data warehouse workloads are heavy I/O
users, so a large query performing a lot of I/O could hold up a short, high-priority request. PSF moves
the high-priority request's I/Os to the head of the queue, helping to deliver response-time goals.
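The priority-ordered queues described above behave like a heap keyed on priority with a FIFO tie-breaker. The sketch below is an illustrative model of that behavior, not PSF internals; the class and method names are made up.

```python
import heapq
import itertools

class PriorityIOQueue:
    """Requests are dequeued by priority first (lower number = higher
    priority), FIFO within a priority level."""
    def __init__(self):
        self.heap = []
        self.counter = itertools.count()  # tie-breaker preserves FIFO order

    def submit(self, priority, request):
        heapq.heappush(self.heap, (priority, next(self.counter), request))

    def next_io(self):
        priority, _, request = heapq.heappop(self.heap)
        return request
```

A tactical request submitted after a batch of scan I/Os still comes off the queue first, which is exactly the "go to the head of the queue" behavior described above.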
Data warehouse databases often set the system environment to allow for fast scans.
Does Teradata performance suffer when the short work is mixed in?
Because Teradata was designed to handle a high volume of concurrent queries, it doesn't count on
sequential scans to produce high performance for queries. Although other DBMS products see a
large fall in request performance when they go from a single large query to multiple queries or
when a mixed workload is applied, Teradata sees no such performance change. Teradata never plans
on sequential access in the first place. In fact, Teradata doesn't even store the data for sequential
accesses. Therefore, random accesses from many concurrent requests are just business as usual.
Sync scan algorithms provide additional optimization. When multiple concurrent requests are
scanning or joining the same table, their I/O is piggybacked so that only a single I/O is performed to
the disk. Multiple concurrent queries can run without increasing the physical I/O load, leaving the
I/O bandwidth available for other parts of the workload.
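The sync scan idea can be sketched as follows: each block is read once and handed to every concurrent consumer, so the physical read count stays constant no matter how many scans piggyback. This is a simplified model, not the actual algorithm.

```python
def sync_scan(table_blocks, consumers):
    """Piggyback concurrent scans: each block is read from 'disk' once and
    handed to every consumer, instead of once per consumer."""
    physical_reads = 0
    for block in table_blocks:
        physical_reads += 1          # one physical I/O per block...
        for consume in consumers:
            consume(block)           # ...shared by every concurrent scan
    return physical_reads
```

With N concurrent scans of the same table, this model performs the same number of physical reads as a single scan, leaving the rest of the I/O bandwidth for other work.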
What if work demand exceeds Teradata's capabilities?
There are limits to how much work the engine can handle. A successful data warehouse will almost
certainly create a demand for service that is greater than the total processing power available on the
system. Teradata always puts into execution any work presented to the DBMS.
If the total demand is greater than the total resources, then controls must be in place before the work
enters the DBMS. When your warehouse reaches this stage, you can use Database Query Manager
(DBQM) to manage the flow of user requests into the warehouse. DBQM, inserted between the
users' ODBC applications and the DBMS, evaluates each request and then applies a set of rules
created by the system administrator. If the request violates any of the rules, DBQM notifies the user
that the request is denied or deferred to a later time for execution.
Rules can include, for example, system use levels, query cost parameters, time of day, objects
accessed, and authorized users. You can read more about DBQM in a recent Teradata Review article
("Field Report: DBQM," Summer 1999, available online at
www.teradatareview.com/summer99/truet.html).
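The DBQM admission model described above amounts to evaluating each incoming request against an administrator-defined rule list before it reaches the DBMS. The sketch below is purely illustrative; the rule names (`cost_limit`, `time_window`) and request fields are assumptions, not actual DBQM rule syntax.

```python
def admit(request, rules):
    """Evaluate a request against admin-defined rules before it reaches the
    DBMS; returns (admitted, reason)."""
    for rule in rules:
        ok, reason = rule(request)
        if not ok:
            return False, reason   # user is notified: denied or deferred
    return True, "admitted"

# Hypothetical rule constructors (illustrative, not DBQM syntax):
def cost_limit(max_cost):
    return lambda r: (r["estimated_cost"] <= max_cost,
                      "deferred: estimated cost too high")

def time_window(start_hour, end_hour):
    return lambda r: (start_hour <= r["hour"] < end_hour,
                      "denied: outside allowed time window")
```

The key point is that this gatekeeping happens outside the DBMS, between the users' ODBC applications and the warehouse, so overload never enters the engine in the first place.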
How do administrators and DBAs stay on top of complex mixed workloads?
The Teradata Manager utility provides a single operational system view for administrators and
DBAs. The tool provides real-time performance, logged past performance, users and queries
currently executing, management of the schema, and more.
STAYING ACTIVE
The active warehouse is a busy place. It must handle all decision making for the organization,
including strategic, long-range data mining queries, tactical decisions for daily operations, and
event-based decisions necessary for effective Web sites. Nevertheless, managing this diversity of
work does not require a staff of hundreds running a complex architecture with multiple data marts,
operational data stores, and a multitude of feeds. It simply requires a database management system
that can manage multiple workloads at varying service levels, scale with the business, and provide
24x7 availability year round with a minimum of operational staff.
2. Use COMPRESS on whichever columns possible. This helps reduce I/O and hence
improves performance, especially for columns with many NULL values or a small set of
known values.
3. COLLECT STATISTICS on a daily basis (after every load) to improve performance.
4. Drop secondary indices before every load and recreate them afterward. This helps improve
load performance (if load time is critical).
5. Regularly check for even data distribution across all AMPs using Teradata Manager or through
Queryman.
6. Check the combination of CPUs, AMPs, PEs, and nodes for performance optimization.
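A quick way to reason about even data distribution across AMPs: compare each AMP's row count to the average, since the busiest AMP finishes last and gates every parallel step. A minimal sketch (the skew formula here is one common convention, not Teradata Manager's exact metric):

```python
def skew_report(rows_per_amp):
    """Flag uneven distribution: a hot AMP gates the whole parallel step."""
    avg = sum(rows_per_amp) / len(rows_per_amp)
    worst = max(rows_per_amp)
    # Percentage by which the busiest AMP exceeds a perfectly even spread.
    skew_pct = 100.0 * (worst - avg) / worst if worst else 0.0
    return avg, worst, skew_pct
```

A skew of 0 means perfectly even distribution; a high value suggests a poorly chosen primary index.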
Each AMP can handle 80 tasks and each PE can handle 120 sessions.
MLOAD: customize the number of sessions for each MLOAD job depending on the
number of concurrent MLOAD jobs and the
number of PEs in the system.
For example:
SCENARIO 1
# of AMPs = 10
# of max load jobs handled by Teradata = 5 (parameter; can be set
from 5 to 15)
# of sessions per load job = 1 (parameter that can be set globally
or in each MLOAD script)
# of PEs = 1
So 10*5*1 = 50, plus 10 (2 per job overhead) = 60 max sessions on
the Teradata box.
This is LESS than 120, which is the max # of sessions a PE can handle.
SCENARIO 2
# of AMPs = 16
# of max load jobs handled by Teradata = 15
# of sessions per load job = 1
# of PEs = 1
So 16*15*1 = 240, plus 30 (2 per job overhead) = 270 max sessions on
the Teradata box.
This is MORE than 120, which is the max # of sessions a PE can handle.
Hence MLOAD jobs fail, in spite of the use of the SLEEP and TENACITY
features.
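The arithmetic in both scenarios can be captured in a small helper. The formula (AMPs x jobs x sessions per job, plus 2 overhead sessions per job, compared against 120 sessions per PE) comes straight from the examples above; the function name is made up for illustration.

```python
PE_SESSION_LIMIT = 120         # max sessions one PE can handle
OVERHEAD_SESSIONS_PER_JOB = 2  # control-session overhead per MLOAD job

def mload_session_check(amps, max_load_jobs, sessions_per_job, pes):
    """Total MLOAD sessions vs. PE capacity, as in the scenarios above.
    Returns (total_sessions, fits_within_pe_capacity)."""
    total = (amps * max_load_jobs * sessions_per_job
             + OVERHEAD_SESSIONS_PER_JOB * max_load_jobs)
    return total, total <= pes * PE_SESSION_LIMIT
```

Running the two scenarios: 10 AMPs with 5 jobs yields 60 sessions (fits in one PE), while 16 AMPs with 15 jobs yields 270 sessions (exceeds one PE's 120-session limit, so jobs fail).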
Use the SLEEP and TENACITY features of MLOAD for scheduling MLOAD jobs.
Check the TABLEWAIT parameter; if omitted, it can cause immediate load-job failure when you
submit two MLOAD jobs that try to update the same table.
JOIN INDEX - Check the limit on the number of fields for a join index (max 16 fields; it may
vary by version).
A join index is like building the joined table physically. Hence it has the advantage of better
performance, since the data is physically stored rather than calculated on the fly. The cons are
longer loading time (MLOAD requires join indices to be dropped before loading) and additional
space, since it is a physical table.
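The trade-off described for join indexes is the classic materialization trade-off, which can be sketched generically: precompute the join once, pay storage and rebuild cost, and make later reads cheap. The table shapes below are invented for illustration; this is not Teradata's join index machinery.

```python
def build_join_index(orders, customers):
    """Materialize a join once (analogous to a join index): later queries
    read precomputed rows instead of joining on the fly. The price is the
    extra storage and the rebuild needed after each load."""
    by_id = {c["id"]: c for c in customers}
    return [{**o, "cust_name": by_id[o["cust_id"]]["name"]} for o in orders]
```

After a load changes either base table, this materialized result must be rebuilt, which is exactly why MLOAD requires join indices to be dropped first.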