All applications run under UNIX, Windows NT, or Windows 2000, and all Teradata software
runs under PDE. All share the CPU and memory resources of the node.
AMPs and PEs are virtual processors running under control of the PDE. Their numbers
are software-configurable. In addition to user applications, gateway software and channel
driver support may also be running.
The Teradata RDBMS has a "shared-nothing" architecture, which means that the vprocs
(which are the PEs and AMPs) do not share common components. For example, each
AMP manages its own dedicated memory space (taken from the memory pool) and the
data on its own vdisk -- these are not shared with other AMPs. Each AMP uses system
resources independently of the other AMPs so they can all work in parallel for high system
performance overall.
Symmetric Multi-Processor (SMP): a single node is a Symmetric Multi-Processor (SMP) system.
Massively Parallel Processing (MPP): when multiple SMP nodes are connected to form
a larger configuration, we refer to this as a Massively Parallel Processing (MPP) system.
Session Control
The major functions performed by Session Control are logon and logoff. Logon takes a textual request for
session authorization, verifies it, and returns a yes or no answer. Logoff terminates any ongoing activity and
deletes the session's context.
Parser
The Parser interprets SQL statements, checks them for proper SQL syntax and evaluates them semantically.
The PE also consults the Data Dictionary to ensure that all objects and columns exist and that the user has
authority to access these objects.
Optimizer
The Optimizer is responsible for developing the least expensive plan to return the requested response set.
Processing alternatives are evaluated and the fastest alternative is chosen. This alternative is converted to
executable steps, to be performed by the AMPs, which are then passed to the dispatcher.
Dispatcher
The Dispatcher controls the sequence in which the steps are executed and passes the steps on to the
BYNET. It is composed of execution control and response-control tasks. Execution control receives the step
definitions from the Parser and transmits them to the appropriate AMP(s) for processing, receives status
reports from the AMPs as they process the steps, and passes the results on to response control once the
AMPs have
completed processing. Response control returns the results to the user. The Dispatcher sees that all AMPs
have finished a step before the next step is dispatched. Depending on the nature of the SQL request, a step
will be sent to one AMP, or broadcast to all AMPs.
The BYNET handles the internal communication of the Teradata RDBMS. All communication between PEs
and AMPs is done via the BYNET.
When the PE dispatches the steps for the AMPs to perform, they are dispatched onto the BYNET. The
messages are routed to the appropriate AMP(s) where results sets and status information are generated.
This response information is also routed back to the requesting PE via the BYNET.
Depending on the nature of the dispatch request, the communication may be a:
Broadcast: the message is routed to all nodes in the system.
Point-to-point: the message is routed to one specific node in the system.
Once the message is on a participating node, PDE handles the multicast (carrying the message to just the
AMPs that should get it). So, while a Teradata system does do multicast messaging, the BYNET hardware
alone cannot do it; the BYNET can only do point-to-point and broadcast between nodes.
FEATURES OF BYNET:
The BYNET has several unique features:
Fault tolerant: each network has multiple connection paths. If the BYNET detects an unusable path in either
network, it will automatically reconfigure that network so all messages avoid the unusable path. Additionally,
in the rare case that BYNET 0 cannot be reconfigured, hardware on BYNET 0 is disabled and messages are
re-routed to BYNET 1 (or equally distributed if there are more than two BYNETs present), and vice versa.
Load balanced: traffic is automatically and dynamically distributed between both BYNETs.
Scalable: as you add nodes to the system, overall network bandwidth scales linearly - meaning an increase
in system size without loss of performance.
High Performance: an MPP system typically has two or more BYNET networks. Because all networks are
active, the system benefits from the full aggregate bandwidth of all networks. Since the number of networks
can be scaled, performance can also be scaled to meet the needs of demanding applications. The
technology of the BYNET is what makes the Teradata parallelism possible.
The Access Module Processor (AMP)
The Access Module Processor (AMP) is a virtual processor. An AMP controls some portion of each
table on the system. AMPs do the physical work associated with generating an answer set, including sorting,
aggregating, formatting, and converting. An AMP can control up to 64 physical disks. The AMPs perform all
database management functions in the system. An AMP responds to Parser/Optimizer steps transmitted across the
BYNET by selecting data from or storing data to its disks. For some requests, the AMPs may redistribute a
copy of the data to other AMPs.
The Database Manager subsystem resides on each AMP. The Database Manager:
Receives the steps from the Dispatcher and processes the steps. It has the ability to:
Lock databases and tables.
Create, modify, or delete definitions of tables.
Insert, delete, or modify rows within the tables.
Retrieve information from definitions and tables.
Collects accounting statistics, recording accesses by session so
users can be billed appropriately.
Returns responses to the Dispatcher.
The Database Manager provides a bridge between the logical organization of the data (databases, tables,
views) and its physical organization on the disks. The Database Manager also performs a space-management
function that controls the use and allocation of space.
A disk array is a configuration of disk drives that utilizes specialized controllers to manage and distribute
data and parity across the disks while providing fast access and data integrity.
Each AMP vproc must have access to an array controller that in turn accesses the physical disks. AMP
vprocs are associated with one or more ranks of data. The total disk space associated with an AMP vproc is
called a vdisk. A vdisk may have up to three ranks.
Teradata supports several protection schemes:
RAID Level 5: data and parity protection striped across multiple disks.
RAID Level 1: each disk has a physical mirror replicating the data.
RAID Level S: data and parity protection similar to RAID 5, but used for EMC disk arrays.
The disk array controllers are referred to as dual active array controllers, which means that both
controllers are actively used in addition to serving as backup for each other.
3.How is Teradata parallel?
Teradata is Parallel for the following reasons:
Each PE can support up to 120 user sessions in parallel.
Each session may handle multiple requests concurrently. While only one request at a time may be
active on behalf of a session, the session itself can manage the activities of 16 requests and their
associated answer sets.
The Message Passing Layer (MPL) is implemented differently for different platforms; this means it will
always be well within the needed bandwidth for each particular platform's maximum throughput.
Each AMP can perform up to 80 tasks in parallel. This means that AMPs are not dedicated at any
moment in time to the servicing of only one request, but rather are multi-threading multiple requests
concurrently. Because AMPs are designed to operate on only one portion of the database, they must
operate in parallel to accomplish their intended results.
In addition to this, the optimizer may direct the AMPs to perform certain steps in parallel if there are
no contingencies between the steps. This means that an AMP might be concurrently performing
more than one step on behalf of the same request.
Query Parallelism:
Breaking the request into smaller components, all components being worked on at the same time,
with one single answer delivered. Parallel execution can incorporate all or part of the operations
within a query, and can significantly reduce the response time of an SQL statement, particularly if the
query reads and analyzes a large amount of data.
Query parallelism is enabled in Teradata by hash-partitioning the data across all the VPROCs
defined in the system. A VPROC provides all the database services on its allocation of data blocks. All
relational operations, such as table scans, index scans, projections, selections, joins, aggregations,
and sorts, execute in parallel across all the VPROCs simultaneously and unconditionally. Each
operation is performed on a VPROC's data independently of the data associated with the other
VPROCs.
4.Explain the mechanism of data distribution and data retrieval
Data Distribution:
Teradata uses hash partitioning and distribution to randomly and evenly distribute data across all AMPs.
The rows of every table are distributed among all AMPs - and ideally will be evenly distributed among all
AMPs.
The rows of all tables are distributed across the AMPs according to their Primary Index value. The
Primary Index value goes into the hashing algorithm, and the output is a 32-bit
Row Hash. The high-order 16 bits are referred to as the bucket number and are used to
identify a hash map entry; the hash bucket is also referred to as the DSW (Destination
Selection Word). This entry, in turn, identifies the AMP that will be targeted. The
remaining 16 bits are not used to locate the AMP. Each hash map is simply an array that associates DSW
values (bucket numbers) with specific AMPs.
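This hashing chain can be inspected directly in Teradata SQL, which exposes it through the HASHROW, HASHBUCKET, and HASHAMP functions. A sketch (the table and column names are illustrative):

```sql
-- For each customer row, show the 32-bit row hash, the hash bucket
-- (the DSW, taken from the high-order bits), and the AMP that the
-- current hash map assigns to that bucket.
SELECT cust_id
     , HASHROW(cust_id)                      AS row_hash
     , HASHBUCKET(HASHROW(cust_id))          AS hash_bucket
     , HASHAMP(HASHBUCKET(HASHROW(cust_id))) AS target_amp
FROM customer;
```

Grouping by target_amp and counting rows is a common way to check how evenly a candidate primary index would distribute a table.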
To locate a row, the AMP file system searches through a memory-resident structure called the Master Index.
An entry in the Master Index will indicate that if a row with this Table ID and row hash exists, then it must be
on a specific disk cylinder.
The file system will then search through the designated Cylinder Index. There it will find an
entry that indicates that if a row with this Table ID and row hash exists, it must be in one specific data block
on that cylinder.
The file system then searches the data block until it locates the row(s) or returns a No Rows Found condition
code.
Data retrieval:
Retrieving data from the Teradata RDBMS simply reverses the storage model process. A request
made for data is passed on to a Parsing Engine(PE). The PE optimizes the request for efficient processing
and creates tasks for the AMPs to perform, which results in the request being satisfied. Tasks are then
dispatched to the AMPs via the BYNET. Often, all AMPs must participate in creating the answer set, such as
returning all rows of a table to a client application. Other times, only one or a few AMPs need participate. The
PE will ensure that only the AMPs that need to will be assigned tasks. Once the AMPs have been given their
assignments, they retrieve the desired rows from their respective disks. The AMPs will sort, aggregate, or
format if needed. The rows are then returned to the requesting PE via the BYNET. The PE takes the returned
answer set and returns it to the requesting client application.
When a user writes an SQL query that has a SI in the WHERE clause, the Parsing Engine will hash the
Secondary Index Value. The output is the Row Hash of the SI. The PE creates a request containing the Row
Hash and gives the request to the Message Passing Layer (which includes the BYNET software and
network). The Message Passing Layer uses a portion of the Row Hash to point to a bucket in the Hash Map.
That bucket contains an AMP number to which the PE's request will be sent. The AMP gets the request and
accesses the Secondary Index Subtable pertaining to the requested SI information. The AMP will check to
see if the Row Hash exists in the subtable and double check the subtable row with the actual secondary
index value. Then, the AMP will create a request containing the Primary Index Row ID and send it back to the
Message Passing Layer. This request is directed to the AMP with the base table row, and the AMP easily
retrieves the data row.
Secondary indexes can be useful for:
Processing aggregates
Value comparisons
Joining tables
Updating the data
They also have minimal effect on locking out others: when
you use an access lock, virtually all requests are compatible with yours.
11.How to set default database?
Setting the default database:
The user name you log on with is your temporary (default) database.
For example, if you log on as
.logon abc;
password: xyz
then abc is normally the default database.
Queries that do not specify a database name are made against your default database.
Changing the default database:
The DATABASE command is used to change the default database.
For example:
DATABASE birla;
sets your default database to birla, and subsequent queries are made against the birla database.
12.What is a cluster?
A cluster is a group of AMPs that act as a single fallback unit. Clustering has no effect on primary row
distribution of the table, but the fallback row copy will always go to another AMP in the same cluster.
Should an AMP fail, the primary and fallback row copies stored on that AMP cannot be accessed.
However, their alternate copies are available through the other AMPs in the same cluster.
The loss of an AMP in one cluster has no effect upon other clusters. It is possible to lose one AMP in each
cluster and still have full access to all fallback-protected table data. If there are two AMP failures in the same
cluster, the entire Teradata system halts. While an AMP is down, the remaining AMPs in the cluster must do
their own work plus the work of the down AMP.
The example shows an 8-AMP system set up in two clusters of 4-AMPs each.
The client application is either written by a programmer or is one of Teradata's provided utility programs.
Many client applications are written as front ends for SQL submission, but they are also written for file
maintenance and report generation. Any client-supported language may be used, provided it can interface to
the Call Level Interface (CLI).
The Call Level Interface (CLI) is the lowest level interface to the Teradata RDBMS. It consists of system
calls which create sessions, allocate request and response buffers, create and de-block parcels of
information, and fetch response information to the requesting client.
The Teradata Director Program (TDP) is a Teradata-supplied program that must run on any client system
that will be channel-attached to the Teradata RDBMS. The TDP manages the session traffic between the
Call-Level Interface and the RDBMS. Its functions include session initiation and termination, logging,
verification, recovery, and restart, as well as physical input to and output from the PEs, (including session
balancing) and the maintenance of queues. The TDP may also handle system security.
The Host Channel Adapter is a mainframe hardware component that allows the mainframe to connect to an
ESCON or Bus/Tag channel.
The PBSA (PCI Bus ESCON Adapter) is a PCI adapter card that allows a WorldMark server to connect to an
ESCON channel.
The PBCA (PCI Bus Channel Adapter) is a PCI adapter card that allows a WorldMark server to connect to a
Bus/Tag channel.
14.What are the connections involved in Network attached system?
In network-attached systems, there are four major software components that play important roles in getting
the requests to and from the Teradata RDBMS.
The Call Level Interface (CLI) is a library of routines that resides on the client side. Client
application programs use these routines to perform operations such as logging on and off, submitting SQL
queries and receiving responses which contain the answer set. These routines are 98% the same in a
network-attached environment as they are in a channel attached.
The Teradata ODBC (Open Database Connectivity) driver uses an open, standards-based
ODBC interface to provide client applications access to Teradata across LAN-based
environments. NCR has ODBC drivers for both UNIX and Windows-based applications.
The Micro Teradata Director Program (MTDP) is a Teradata-supplied program that must be linked to any
application that will be network-attached to the Teradata RDBMS. The MTDP performs many of the functions
of the channel based TDP including session management. The MTDP does not control session balancing
across PEs. Connect and Assign Servers that run on the Teradata system handle this activity.
The Micro Operating System Interface (MOSI) is a library of routines providing operating system
independence for clients accessing the RDBMS. By using MOSI, we only need one version of the MTDP to
run on all network-attached platforms.
15.How do you replace a null value with a default value while loading?
Using COALESCE function
Syntax: COALESCE( COL, 'DEFAULT')
16.What is COMPRESS?
COMPRESS: by default, NULL values are compressed. To compress other values, the specific characters
or values to be compressed must be listed explicitly in the column definition.
17.How many values can we compress in Teradata?
Up to 255 distinct values can be compressed per column. Primary index columns cannot be compressed, and neither can columns of volatile tables.
18.Difference between volatile and global volatile table?
Global Temporary Tables (GTT):
1. When they are created, the definition goes into the Data Dictionary.
2. When materialized, the data goes into temp space.
3. That's why the data is active only until the session ends, while the definition remains until it is dropped
with a DROP TABLE statement. If dropped from some other session, it should be DROP TABLE ALL;
4. You can collect stats on a GTT.
Volatile Temporary Tables (VTT):
1. The table definition is stored in the system cache.
2. The data is stored in spool space.
3. That's why both the data and the table definition are active only until the session ends.
4. No collect stats for a VTT.
19.Difference between PK and PI?
Primary Key:
A relational concept used to determine relationships among entities and to define referential
constraints.
Primary Index:
Used to store rows on disk.
Defined by the CREATE TABLE statement.
Unique or non-unique.
It is used to distribute rows.
Values can be changed.
Can be null.
Related to the access path.
20.What is multiple statement processing?
Multiple statement processing increases performance when loading into large tables. All
statements are sent to the parser simultaneously, and all statements are executed in parallel.
21.What is TDPID?
The TDPID identifies the Teradata system a client connects to. In network-attached environments it is a name that resolves to the IP address of the Teradata server machine (conventionally via DNS or hosts-file entries such as <tdpid>cop1).
22.What is tenacity?
Specifies the number of hours that Teradata FastLoad continues trying to log on when the maximum number
of load jobs is already running on the Teradata database.
23.What is Sleep?
Specifies the number of minutes that Teradata FastLoad pauses before retrying a logon operation.
24.What is database skewing?
Skew occurs when the primary index column selected is not a good candidate. If the PI chosen for a
table has highly non-unique values, rows concentrate on a few AMPs and the skew factor rises. Ideally
the skew factor is close to zero; a skew factor greater than 25 is generally not a good sign.
25.What is soft Referential Integrity and Batch Referential Integrity?
Soft Referential Integrity:
It provides a mechanism to allow user-specified Referential Integrity (RI) constraints that are not
enforced by the database.
Enables optimization techniques such as Join Elimination.
Batch Referential Integrity:
Tests an entire insert, delete, or update batch operation for referential integrity. If insertion, deletion, or
update of any row in the batch violates referential integrity, then parsing engine software rolls back the entire
batch and returns an abort message.
26.Difference Between MLOAD & FLOAD
MLOAD:
MultiLoad allows non-unique secondary indexes and automatically rebuilds them after loading.
MultiLoad can load at most five tables at a time and can also update and delete data.
FastLoad:
FastLoad loads data in two phases and needs no work table, so it is faster. It follows these steps to load
data into the table:
Phase 1: it moves all the records to the AMPs in blocks, without any hashing.
Phase 2: after the END LOADING command, the AMPs hash each record and send it to the appropriate AMP.
FastLoad is used to load empty tables, is very fast, and can load one table at a time.
27. Advantages of PPI
PPI: Partitioned Primary Index.
When a table is partitioned on a column and a query constrains that partitioning column, whole
partitions can be eliminated from the scan. With more partitions, scanning for qualifying rows is
faster, especially when the PI value is supplied as well.
28. Disadvatages of PPI
If there is no partition declared into which a given row can be inserted, the insert is rejected, and the
value of declaring the partitioned primary index is lost. Also, a query that supplies only the PI value
and not the partitioning column must probe every partition, so in such cases a secondary index can
give better performance.
29.Teradata joins?
Join Processing
A join is the combination of two or more tables in the same FROM clause of a single SELECT statement. When
writing a join, the key is to locate a column in both tables that is from a common domain. Like the
correlated subquery, joins are normally based on an equal comparison between the join columns.
The following is the original join syntax for a two-table join:
SELECT
[<table-name>.]<column-name>
[,<table-name>.<column-name> ]
FROM <table-name1> [ AS <alias-name1> ]
,<table-name2> [ AS <alias-name2> ]
[ WHERE [<table-name1>.]<column-name>= [<table-name2>.]<column-name> ]
The JOIN keyword is used in an SQL statement to query data from two or more tables, based on a
relationship between certain columns in those tables.
Common Join Types in Teradata
1.Self Join
2.Inner Join
3.Outer Join
The three formats of an OUTER JOIN are:
Left_table LEFT OUTER JOIN Right_table -left table is outer table
Left_table RIGHT OUTER JOIN Right_table -right table is outer table
Left_table FULL OUTER JOIN Right_table -both are outer tables
Self Join
A Self Join is simply a join that uses the same table more than once in a single join operation. The
first requirement for this type of join is that the table must contain two different columns of the same
domain. This may involve de-normalized tables.
For instance, if the Employee table contained a column for the manager's employee number and
the manager is an employee, these two columns have the same domain. By joining on these two
columns in the Employee table, the managers can be joined to the employees.
Example:
SELECT Mgr.Last_name (Title 'Manager Name', format 'X(10)')
,Department_name (Title 'For Department ')
FROM Employee_table AS Emp
INNER JOIN Employee_table AS Mgr
ON Emp.Manager_Emp_ID = Mgr.Employee_Number
INNER JOIN Department_table AS Dept
ON Emp.Department_number = Dept.Department_number
ORDER BY 2 ;
INNER JOIN:
INNER JOIN keyword return rows when there is at least one match in both tables
INNER JOIN Syntax:
SELECT column_name(s)
FROM table_name1
INNER JOIN table_name2
ON table_name1.column_name=table_name2.column_name
LEFT OUTER JOIN
The LEFT OUTER JOIN keyword returns all rows from the left table (table_name1), even if there are
no matches in the right table(table_name2).
LEFT OUTER JOIN Syntax:
SELECT column_name(s)
FROM table_name1
LEFT OUTER JOIN table_name2
ON table_name1.column_name=table_name2.column_name
RIGHT OUTER JOIN:
The RIGHT OUTER JOIN keyword Return all rows from the right table (table_name2), even if there
are no matches in the left table (table_name1).
RIGHT OUTER JOIN Syntax:
SELECT column_name(s)
FROM table_name1
RIGHT OUTER JOIN table_name2
ON table_name1.column_name=table_name2.column_name
Product Join
It is very important to use an equality condition in the WHERE clause; otherwise you get a product join,
in which each row of one table is joined to every row of the other table. The name comes from the
mathematical product: the number of result rows is the multiplication of the two tables' row counts.
30. Difference between Primary index and secondary index?
1. A primary index cannot be created after table creation, whereas a secondary index can be created (and
dropped) dynamically.
2. Primary index access is a one-AMP operation; unique secondary index access is a two-AMP operation,
and non-unique secondary index access is an all-AMP operation.
31. what are Journals?
Journaling is a data protection mechanism in Teradata. Journals are generated to maintain pre-images
and post-images of rows changed by DML transactions, starting and ending at checkpoints. When a DML
transaction fails, the table is restored to the last available checkpoint using the journal images.
There are two types of journals: (1) the permanent journal and (2) the transient journal.
The purpose of the permanent journal is to provide selective or full database recovery to a
specified point in time. It permits recovery from unexpected hardware or software disasters. The
permanent journal also reduces the need for full table backups that can be costly in both time and
resources.
1. Permanent journals are explicitly created during database and/or table creation time. This
journaling can be implemented depending upon the need and available disk space.
PJ processing is a user-selectable option on a database that allows the user to select extra
journaling for changes made to a table. There are more options, and the data can be rolled
forward or backward (depending on whether you selected the matching options) at points of the
customer's choosing. They are permanent because the changes are kept until the customer deletes them or
unloads them to a backup tape. They are usually kept in conjunction with backups of the database
and allow partial rollback or roll-forward after corrupted data or an operational error, such as
someone deleting a month's worth of data because they messed up the WHERE clause.
2.Transient Journal
The transient journal permits the successful rollback of a failed transaction (TXN). Transactions
are not committed to the database until the AMPs have received an End Transaction request,
either implicitly or explicitly. There is always the possibility that the transaction may fail. If
so, the participating table(s) must be restored to their pre-transaction state.
The transient journal maintains a copy of before images of all rows affected by the transaction. In
the event of transaction failure, the before images are reapplied to the affected tables, then are
deleted from the journal, and a rollback operation is completed. In the event of transaction
success, the before images for the transaction are discarded from the journal at the point of
transaction commit.
Transient Journal activities are automatic and transparent to the user
32.Teradata fast export script?
.LOGTABLE RestartLog1_fxp;
.RUN FILE logon;
.BEGIN EXPORT SESSIONS 4;
.LAYOUT Record_Layout;
.FIELD in_City 1 CHAR(20);
.FIELD in_Zip * CHAR(5);
.IMPORT INFILE <input-file-name> LAYOUT Record_Layout;
.EXPORT OUTFILE cust_acct_outfile2;
SELECT A.Account_Number
     , C.Last_Name
     , C.First_Name
     , A.Balance_Current
FROM Accounts A
INNER JOIN Accounts_Customer AC
INNER JOIN Customer C
ON C.Customer_Number = AC.Customer_Number
ON A.Account_Number = AC.Account_Number
WHERE A.City = :in_City
AND A.Zip_Code = :in_Zip
ORDER BY 1;
.END EXPORT;
.LOGOFF;
33.Teradata statistics.
Statistics collection is essential for the optimal performance of the Teradata query optimizer. The query
optimizer relies on statistics to help it determine the best way to access data. Statistics also help the
optimizer ascertain how many rows exist in tables being queried and predict how many rows will qualify for
given conditions. Lack of statistics, or outdated statistics, might result in the optimizer choosing a
less-than-optimal method for accessing data tables.
Points:
1: Once a collect stats is done on the table(on index or column) where is this information stored so
that the optimizer can refer this?
Ans: Collected statistics are stored in DBC.TVFields or DBC.Indexes. However, you cannot query these two
tables.
2: How often collect stats has to be made for a table that is frequently updated?
Answer: You need to refresh stats when 5 to 10% of a table's rows have changed. Collecting stats can be
quite resource-consuming for large tables, so it is advisable to schedule the job at an off-peak period,
normally after approximately 10% of the data changes.
3: Once a collect stats has been done on the table how can i be sure that the optimizer is considering
this before execution ? i.e; until the next collect stats has been done will the optimizer refer this?
Ans: Yes, the optimizer will use stats data for the query execution plan if available. That's why stale stats can be worse than no stats at all.
CREATE SET TABLE employee
( emp       INTEGER
, dept      INTEGER
, lname     CHAR(20)
, fname     VARCHAR(20)
, salary    DECIMAL(10,2)
, hire_date DATE )
UNIQUE PRIMARY INDEX(emp);
Notice the UNIQUE PRIMARY INDEX on the column emp. Because this is a SET table it is much more
efficient to have at least one unique key so the duplicate row check is eliminated.
The following is an example of creating the same table as before, but this time as a MULTISET table:
CREATE MULTISET TABLE employee
( emp       INTEGER
, dept      INTEGER
, lname     CHAR(20)
, fname     VARCHAR(20)
, salary    DECIMAL(10,2)
, hire_date DATE )
PRIMARY INDEX(emp);
Notice also that the PI is now a NUPI because it does not use the word UNIQUE. This is important! As
mentioned previously, if a UPI is requested, no duplicate rows can be inserted, so the table acts more
like a SET table. This MULTISET example allows duplicate rows; since the mandatory duplicate row check
is eliminated, inserts run faster than they would into a SET table with a NUPI.
38. What is macro? Advatages of it.
Macros: a macro is a predefined, stored set of one or more SQL commands and report-formatting
commands. Macros are used to simplify the execution of frequently used SQL commands. Macros
do not require permanent space.
39.What are the functions of AMPs in Teradata?
Each AMP is designed to hold a portion of the rows of each table. An AMP is responsible for the storage,
maintenance and retrieval of the data under its control. Teradata uses hash partitioning to randomly and
evenly distribute data across all AMPs for balanced performance
40. How Does Teradata Store Rows?
Teradata uses hash partitioning and distribution to randomly and evenly distribute data across all AMPs.
The rows of every table are distributed among all AMPs - and ideally will be evenly distributed among all
AMPs. Each AMP is responsible for a subset of the rows of each table.
Evenly distributed tables result in evenly distributed workloads.
Fallback & Down Amp recovery journal!!!
Hi,
When a Fallback protected AMP goes down during a write operation, the update takes place
in the Fallback AMP in the same cluster to later update in the original AMP when it recovers.
When an AMP goes down the updates are also recorded in the Down AMP Recovery journal to later
update when AMP recovers.
My doubt is when an AMP goes down are the updates made in both Fallback AMP & Down AMP
recovery journal?
Because if Yes, it looks like a redundant recovery measure or
Is it like Down AMP Recovery journal is used for only Non Fallback protected AMPs or
for Fallback protected AMPs when both the AMPs in the cluster are down.
Regards,
Annal T
Hi Annal,
To my knowledge:
1. The Down-AMP Recovery Journal starts when an AMP goes down, to capture the changes needed to
restore the data for the down AMP.
2. Fallback means the cluster holds redundant data: if one AMP in the cluster goes down, your queries
are unaffected because they use the fallback rows instead.
As for your doubt: when an AMP is down and you run an update, the fallback rows are updated and the
change is recorded in the Down-AMP Recovery Journal. While the AMP remains down, queries run against
the updated fallback rows. When the down AMP becomes active again, the Down-AMP Recovery Journal is
applied to bring its data up to date.
FROM <table-name>
GROUP BY 1 ;
In the above syntax the <column-list> is a list of columns. It is written as a series of column names
separated by commas.
SELECT
COALESCE(NULL,0) AS Col1
,COALESCE(NULL,NULL,NULL) AS Col2
,COALESCE(3) AS Col3
,COALESCE('A',3) AS Col4 ;
45.Diff between role , privilege and profile?
A role can be assigned a collection of access rights in the same way a user can.
You then grant the role to a set of users, rather than grant each user the same rights.
This cuts down on maintenance, adds standardisation (hence reducing erroneous access to sensitive data)
and reduces the size of the dbc.allrights table, which is very important in reducing DBC blocking in a large
environment.
Profiles assign different characteristics on a User, such as spool space, permspace and account strings.
Again this helps with standardisation. Note that spool assigned to a profile will overrule spool assigned on a
create user statement. Check the online manuals for the full list of properties.
Data Control Language is used to restrict or permit a user's access. It can selectively limit a user's ability to
retrieve, add, or modify data. It is used to grant and revoke access privileges on tables and views.
46.Diff between database and user?
Both may own objects such as tables, views, macros, procedures, and functions. Both users and databases
may hold privileges. However, only users may log on, establish a session with the Teradata Database, and
submit requests.
A user performs actions whereas a database is passive. Users have passwords and startup strings;
databases do not. Users can log on to the Teradata Database, establish sessions, and submit SQL
statements; databases cannot.
Creator privileges are associated only with a user because only a user can log on and submit a CREATE
statement. Implicit privileges are associated with either a database or a user because each can hold an
object and an object is owned by the named space in which it resides.
47.How many mload scripts are required for the below scenario
First I want to load data from source to volatile table.
After that I want to load data from volatile table to Permanent table.
48.What are the types of CASE statements available in Teradata?
The CASE function provides an additional level of data testing after a row is accepted by the WHERE
clause. The additional test allows for multiple comparisons on multiple columns with multiple outcomes.
It also incorporates logic to handle a situation in which none of the values compares equal.
When using CASE, each row retrieved is evaluated once by every CASE function. Therefore, if two
CASE operations are in the same SQL statement, each row has a column checked twice, or two
different values each checked one time.
The basic syntax of the CASE follows:
CASE <column-name>
WHEN <value1> THEN <true-result1>
WHEN <value2> THEN <true-result2>
WHEN <valueN> THEN <true-resultN>
[ ELSE <false-result> ]
END
Types:
1. Flexible Comparisons within CASE
When it is necessary to compare more than just equal conditions within the CASE, the format is
modified slightly to handle the comparison. Many people prefer to use this format because it is
more flexible.
2. Horizontal Reporting with CASE
Normal (vertical) reporting works one output line below the previous. Horizontal reporting shows the
next value on the same line as the next column, instead of the next line.
Using the next SELECT statement, we achieve the same information in a horizontal reporting format by
making each value a column:
SELECT AVG(CASE Class_code
WHEN 'FR' THEN Grade_pt
ELSE NULL END) (format 'Z.ZZ') AS Freshman_GPA
,AVG(CASE Class_code
WHEN 'SO' THEN Grade_pt
ELSE NULL END) (format 'Z.ZZ') AS Sophomore_GPA
,AVG(CASE Class_code
WHEN 'JR' THEN Grade_pt
ELSE NULL END) (format 'Z.ZZ') AS Junior_GPA
,AVG(CASE Class_code
WHEN 'SR' THEN Grade_pt
ELSE NULL END) (format 'Z.ZZ') AS Senior_GPA
FROM Student_Table
WHERE Class_code IS NOT NULL ;
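The "flexible comparison" (searched) form of CASE mentioned under Types can be sketched as follows; the grading thresholds and the last_name column are illustrative assumptions:

```sql
SELECT last_name
      ,CASE
         WHEN Grade_pt >= 3.5 THEN 'Honors'
         WHEN Grade_pt >= 2.0 THEN 'Good Standing'
         ELSE 'Probation'
       END AS Standing
FROM Student_Table
WHERE Class_code IS NOT NULL;
```

Here each WHEN carries its own full comparison, so ranges and multiple columns can be tested, not just equality against a single column.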
Logical modeling
Physical modeling
If you are going to be working with databases, then it is important to understand the difference between
logical and physical modeling, and how they relate to one another. Logical and physical modeling are
described in more detail in the following subsections.
Logical Modeling
Logical modeling deals with gathering business requirements and converting those requirements into a
model. The logical model revolves around the needs of the business, not the database, although the needs
of the business are used to establish the needs of the database. Logical modeling involves gathering
information about business processes, business entities (categories of data), and organizational units. After
this information is gathered, diagrams and reports are produced including entity relationship diagrams,
business process diagrams, and eventually process flow diagrams. The diagrams produced should show the
processes and data that exists, as well as the relationships between business processes and data. Logical
modeling should accurately render a visual representation of the activities and data relevant to a particular
business.
The diagrams and documentation generated during logical modeling are used to determine whether the
requirements of the business have been completely gathered. Management, developers, and end users alike
review these diagrams and documentation to determine if more work is required before physical modeling
commences.
Typical deliverables of logical modeling include entity relationship diagrams, business process
diagrams, and process flow diagrams.
Physical Modeling
Physical modeling involves the actual design of a database according to the requirements that were
established during logical modeling. Logical modeling mainly involves gathering the requirements of the
business, with the latter part of logical modeling directed toward the goals and requirements of the database.
Physical modeling deals with the conversion of the logical, or business model, into a relational database
model. When physical modeling occurs, objects are being defined at the schema level. A schema is a group
of related objects in a database. A database design effort is normally associated with one schema.
During physical modeling, objects such as tables and columns are created based on entities and attributes
that were defined during logical modeling. Constraints are also defined, including primary keys, foreign keys,
other unique keys, and check constraints. Views can be created from database tables to summarize data or
to simply provide the user with another perspective of certain data. Other objects such as indexes and
snapshots can also be defined during physical modeling. Physical modeling is when all the pieces come
together to complete the process of defining a database for a business.
Physical modeling is database software specific, meaning that the objects defined during physical modeling
can vary depending on the relational database software being used. For example, most relational database
systems have variations with the way data types are represented and the way data is stored, although basic
data types are conceptually the same among different implementations. Additionally, some database
systems have objects that are not available in other database systems.
57. what is derived Table?
Derived tables are always local to a single SQL request. They are built dynamically using an additional
SELECT within the query. The rows of the derived table are stored in spool and discarded as soon as
the query finishes. The DD has no knowledge of derived tables. Therefore, no extra privileges are
necessary. Its space comes from the user's spool space.
Following is a simple example using a derived table named DT with a column alias called avgsal and its
data value is obtained using the AVG aggregation:
SELECT *
FROM (SELECT AVG(salary) FROM Employee_table) DT(avgsal) ;
58.what is the use of WITH CHECK OPTION in Teradata?
In Teradata, the additional key phrase WITH CHECK OPTION indicates that the WHERE clause
conditions should be applied during the execution of an UPDATE or DELETE against the view.
This is not a concern if views are not used for maintenance activity due to restricted privileges.
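A hedged sketch of a view defined with this phrase (the view name and department filter are illustrative):

```sql
REPLACE VIEW dept_401_emps AS
SELECT employee_number, department_number, salary_amount
FROM employee
WHERE department_number = 401
WITH CHECK OPTION;

-- With WITH CHECK OPTION, an UPDATE through the view that sets
-- department_number to anything other than 401 is rejected,
-- because the changed row would no longer satisfy the WHERE clause.
```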
59.what is soft referential integrity and batch referential integrity?
Soft RI is just an indication that there is a PK-FK relation between the columns and is not implemented at
TD side.
But having it would help in cases like Join processing etc.
Batch:
- Tests an entire insert, delete, or update batch operation for referential integrity.
- If insertion, deletion, or update of any row in the batch violates referential integrity, then parsing engine
software rolls back the entire batch and returns an abort message.
Let's say that I had a table called X with some number of rows and I wanted to insert these rows into table Y
(INSERT INTO Y SELECT * FROM X). However, some of the rows violated an RI constraint that table Y had. From
reading the manuals, it seemed to me that if using standard RI, all of the valid rows would be inserted but the
invalid ones would not. But with batch RI (which is "all or nothing") I would expect nothing to get inserted
since it would check for problem rows up front and return an error right away.
If in fact there is no difference except in how Teradata processes things internally (i.e. where it checks for
invalid rows) then why would you want to use one over the other? Wouldn't you always want to use batch
since it does the checking up front and saves processing time?
Points:
Let's suppose that we have 3 dimension tables and 1 fact table (as in the example above), and that a
join index (or AJI) is based on the 3 dimensions and the fact table (all tables inner joined).
1. With or without referential integrity:
If you submit a query which joins dim1, dim2, dim3 and the fact table, the index can be used.
2. With referential integrity:
If you submit a query which joins only dim1 and the fact table, the index can be used, because the
optimizer knows that fact rows reference rows from the other dimensions (so it knows that the inner
join will not throw away those records).
3. Without referential integrity:
If you submit a query which joins only dim1 and the fact table, the index cannot be used, because the
optimizer does not know whether rows from the fact table reference rows from the other dimensions, and
does not know whether the relationship is one-to-many, many-to-one, or anything else.
"Hard" referential integrity is the "normal" referential integrity that enforces any RI constraints
and ensures that any data loaded into the tables meets the RI rules. You should keep in mind that
neither MultiLoad nor FastLoad allows the target table to have foreign key references. TPump does
allow this.
"Soft" referential integrity is a feature that is more about accessing the data than about loading it.
Soft referential integrity does not enforce any RI constraints. However, when you
specify soft RI, you are telling the optimizer that the foreign key references do exist. Therefore, it
is your job to make sure that is true.
Soft Referential Integrity (Soft RI) is a mechanism by which you can tell the optimizer that
even though no formal RI constraints have been placed on the table(s), the data in the tables
conform to the requirements of RI enforced tables.
This means that the user has ensured the following:
The PK of the parent table has unique, not null values.
The FK of the child table contains only values which are contained in the PK column of
the parent table.
Soft RI
By allowing the optimizer to assume that RI constraints are implicitly in force, (even though no
formal RI is assigned to the table), you enable the optimizer to eliminate join steps in queries
such as the one seen previously.
Implementing Soft RI
Soft RI is implemented using slightly different syntax than standard RI. The
REFERENCES clause for the column definition will add the key words 'WITH NO CHECK
OPTION'.
Examples
Create the employee table with a soft RI reference to the department table.
CREATE TABLE employee
( employee_number INTEGER NOT NULL,
manager_employee_number INTEGER,
department_number INTEGER ,
job_code INTEGER,
last_name CHAR(20) NOT NULL,
first_name VARCHAR(30) NOT NULL,
hire_date DATE NOT NULL,
birthdate DATE NOT NULL,
salary_amount DECIMAL(10,2) NOT NULL
, FOREIGN KEY ( department_number ) REFERENCES WITH NO CHECK OPTION
department( department_number))
UNIQUE PRIMARY INDEX (employee_number);
The parent table must be created with a unique, not null referenced column. Either of the
examples below may be used.
CREATE TABLE department
( department_number INTEGER NOT NULL CONSTRAINT primary_1 PRIMARY KEY
,department_name CHAR(30) UPPERCASE NOT NULL UNIQUE
,budget_amount DECIMAL(10,2)
,manager_employee_number INTEGER);
CREATE TABLE department
( department_number INTEGER NOT NULL
,department_name CHAR(30) UPPERCASE NOT NULL UNIQUE
,budget_amount DECIMAL(10,2)
,manager_employee_number INTEGER)
UNIQUE PRIMARY INDEX (department_number);
Executing the same query as before, notice the join elimination step takes place just as it did
when standard RI was enforced.
Find all employees in valid departments.
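The query itself is not reproduced above; a hedged sketch, using the employee and department tables defined earlier, might look like:

```sql
SELECT e.employee_number
      ,e.last_name
      ,e.department_number
FROM employee e
INNER JOIN department d
   ON e.department_number = d.department_number;

-- With soft RI declared, the optimizer can eliminate the join to
-- department entirely: every employee row is assumed to reference
-- a valid department, so the join cannot remove any rows.
```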
Problems in a disk drive or disk array can corrupt data. This type of corruption cannot be found easily,
but queries against the corrupted data will return wrong answers. The corruption can be found by means
of the SCANDISK and CHECKTABLE utilities. These errors, called disk I/O errors, reduce the availability
of the data warehouse.
To guard against this, Teradata provides the Disk I/O Integrity Check feature. A checksum is used to
perform the check at the table level; it is a protection technique by which we can select various levels
of corruption checking. This feature detects and logs disk I/O errors.
Teradata provides predefined data integrity check levels: DEFAULT, NONE, LOW, MEDIUM, HIGH, etc.
The checksum can be enabled at the table level in the CREATE TABLE DDL; at the system level, use the
DBSControl utility to set the parameter.
For more hands-on diagnostics, use the SCANDISK and CHECKTABLE utilities. Run CHECKTABLE at level 3
so that it diagnoses all rows, byte by byte.
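A hedged sketch of enabling a table-level checksum in the DDL, following the option style used in the identity-column example below (table and column names are illustrative):

```sql
CREATE TABLE sales_history ,NO FALLBACK ,
     CHECKSUM = HIGH   -- request a higher level of disk I/O integrity checking
( sale_id INTEGER NOT NULL,
  sale_dt DATE,
  amount  DECIMAL(10,2))
UNIQUE PRIMARY INDEX (sale_id);
```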
60.what is identity column?
IN Teradata V2R5.1 with one, column (INTEGER data type) that is defined as an Identity column. Here's the
DDL:
CREATE SET TABLE test_table ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT
(
PRIM_REGION_ID INTEGER GENERATED ALWAYS AS IDENTITY
(START WITH 1
INCREMENT BY 1
MINVALUE -2147483647
MAXVALUE 2147483647
NO CYCLE),
PRIM_REGION_CD CHAR(6) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL)
PRIMARY INDEX ( PRIM_REGION_ID );
Teradata has a concept of identity columns on their tables beginning around V2R6.x. These columns
differ from Oracle's sequence concept in that the number assigned is not guaranteed to be
sequential. The identity column in Teradata is simply used to guarantee row-uniqueness.
Example:
CREATE MULTISET TABLE MyTable
(
ColA INTEGER GENERATED BY DEFAULT AS IDENTITY
(START WITH 1
INCREMENT BY 20),
ColB VARCHAR(20) NOT NULL
)
UNIQUE PRIMARY INDEX pidx (ColA);
Granted, ColA may not be the best primary index for data access or joins with other tables in the
data model. It just shows that you could use it as the PI on the table.
update T1
from T2 b
set last_dt = b.last_dt
where T1.msisdn = b.msisdn
else
insert into tp_tmp.sa_telenor_mshare_backup
values ( T2.msisdn, T2.oper_cd,
T2.outg, T2.incom,
T2.fst_dt, T2.last_dt,
T2.second_last_dt, T2.third_last_dt)
62.what is SAMPLEID in Teradata?
Since SAMPLEID is a column, it can be used as the sort key.
Multiple sample sets may be generated in a single query if desired. To identify the specific set,
a tag called the SAMPLEID is made available for association with each set. The SAMPLEID
may be selected, used for ordering, or used as a column in a new table.
Get three samples from the department table, one with 25% of the rows, another with 25%
and a third with 50%.
SELECT department_number
,sampleid
FROM department
SAMPLE .25, .25, .50
ORDER BY sampleid;
Result

department_number  SampleId
-----------------  --------
              301         1
              403         1
              402         2
              201         2
              100         3
              501         3
              302         3
              401         3
              600         3
63. What are the other options available with the SAMPLE function in Teradata?
The SAMPLE function is used to retrieve a random sample of data from a table.
example 1
Select * from emp sample 10
example 2
select * from tab sample
when prod_code = 'AS' then 10
when prod_code = 'CM' then 10
when prod_code = 'DQ' then 10
end
Sample Function
Hi,
I have an order table which has order details along with Product Code as "AS",
"BU", "CM", "DQ", "ER", "FN".
I want to select a random 10 records for each of the product codes "AS", "CM" and "DQ".
Can I use the "sample" Teradata feature to achieve the above results? If yes, how can that be done in a
single query, such that I get 30 records, 10 each for the above 3 product codes?
Is there a better way to get the above results?
Thanks,
Sam
dnoeth - 08 Jan 2007
Hi Sam,
select *
from tab
sample
when prod_code = 'AS' then 10
when prod_code = 'CM' then 10
when prod_code = 'DQ' then 10
end
Dieter
RANDOM Function
The RANDOM function may be used to generate a random number between a specified
range.
RANDOM (Lower limit, Upper limit) returns a random number between the lower and upper
limits inclusive. Both limits must be specified, otherwise a random number between 0 and
approximately 4 billion is generated.
Consider the department table, which consists of nine rows.
SELECT department_number FROM department;

department_number
-----------------
              501
              301
              201
              600
              100
              402
              403
              302
              401
Example
Assign a random number between 1 and 9 to each department.
SELECT department_number, RANDOM(1,9) FROM department;

department_number  Random(1,9)
-----------------  -----------
              501            2
              301            6
              201            3
              600            7
              100            3
              402            2
              403            1
              302            5
              401            1
Note: it is possible for random numbers to repeat. The RANDOM function is activated for
each row processed, thus duplicate random values are possible.
RAID 5 is a parity-checking technique. For every three blocks of data (spread over three disks), there is a
fourth block on a fourth disk that contains parity information. This allows any one of the four blocks to be
reconstructed by using the information on the other three. If two of the disks fail, the rank becomes
unavailable. The array controller does the recalculation of the information for the missing block.
Recalculation will have some impact on performance, but at a much lower cost in terms of disk space.
76. What is the difference between SAMPLE and TOP?
The Sampling function (SAMPLE) permits a SELECT to randomly return rows from a Teradata database
table. It allows the request to specify either an absolute number of rows or a percentage of rows to return.
Additionally, it provides an ability to return rows from multiple samples.
SELECT
TOP Clause
The TOP clause is used to specify the number of records to return.
The TOP clause can be very useful on large tables with thousands of records. Returning a large number of
records can impact performance.
Examples:
1. SELECT TOP 50 PERCENT * FROM EMP;
2. SELECT TOP 2 * FROM EMP;
There is a TOP function in V2R6, but if you want the same behavior in V2R5 you need to use an analytical
function:
Select *
From vinod_1
Qualify Row_number() OVER(Order by empno) <= 5
77.How to improve performance of the query
78.Explain Primary Index and how do we select that
The Primary Index determines which AMP stores an individual row of a table. The PI data is converted
into the Row Hash using a mathematical hashing formula. The result is used as an offset into the Hash
Map to determine the AMP number. Since the PI value determines how the data rows are distributed
among the AMPs, requesting a row using the PI value is always the most efficient retrieval mechanism
for Teradata.
Points:
It determines how data will be distributed and is also the most efficient access path.
79.What is difference between Role, Privilege and profile
A role can be assigned a collection of access rights in the same way a user can.
You then grant the role to a set of users, rather than grant each user the same rights.
This cuts down on maintenance, adds standardization (hence reducing erroneous access to sensitive data)
and reduces the size of the dbc.allrights table, which is very important in reducing DBC blocking in a large
environment.
Profiles assign different characteristics on a User, such as spool space, perm space and account strings.
Again this helps with standardization. Note that spool assigned to a profile will overrule spool assigned on a
create user statement. Check the online manuals for the full list of properties.
Data Control Language is used to restrict or permit a user's access. It can selectively limit a user's ability to
retrieve, add, or modify data. It is used to grant and revoke access privileges on tables and views.
80.What are different spaces in Teradata and difference ?
Perm Space
Temp Space
spool space
Perm Space :All databases have a defined upper limit of permanent space.
Permanent space is used for storing the data rows of tables. Perm space is not pre-allocated. It
represents a maximum limit.
Spool Space :
All databases also have an upper limit of spool space. If there is no limit defined for a particular
database or user, limits are inherited from parents. Theoretically, a user could use all unallocated
space in the system for their query. Spool space is temporary space used to hold intermediate
query results or formatted answer sets to queries. Once the query is complete, the spool space is
released.
Example: You have a database with total disk space of 100GB. You have
10GB of user data and an additional 10GB of overhead. What is the
maximum amount of spool space available for queries?
Answer: 80GB. All of the remaining space in the system is available for spool
Temp Space :
The third type of space is temporary space. Temp space is used for global temporary tables, and these
results remain available to the user until the session is terminated. Tables created in temp space will
survive a restart. (Volatile temporary tables are materialized in spool space, not temp space.)
81. If your skew factor is going up, what are the remedies?
A high skew factor occurs when the primary index column selected is not a good candidate; that is, if
the PI selected for a table has highly non-unique values, the skew factor will be high. Ideally it
should be close to zero; a skew factor greater than 25 is not a good sign. The remedy is to choose a
more unique column (or combination of columns) as the primary index, so that rows distribute evenly
across the AMPs.
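One way to check how a candidate PI would distribute rows, using Teradata's hash functions (table and column names are illustrative):

```sql
SELECT HASHAMP(HASHBUCKET(HASHROW(Class_code))) AS amp_no
      ,COUNT(*) AS row_cnt
FROM Student_Table
GROUP BY 1
ORDER BY 2 DESC;

-- A large spread between the highest and lowest row_cnt per AMP
-- indicates the candidate column would produce a skewed table.
```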
82.When, How and why we use Secondary Indexes.?
A secondary index is an alternate path to the data. Secondary indexes are used to improve performance
by allowing the user to avoid scanning the entire table during a query. A secondary index is like a primary
index in that it allows the user to locate rows. Unlike a primary index, it has no influence on the way rows
are distributed among AMPs. Secondary Indexes are optional and can be created and dropped dynamically.
Secondary Indexes require separate subtables which require extra I/O to maintain the indexes.
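A hedged sketch of creating and dropping secondary indexes dynamically, using the department table from earlier (column choices are illustrative):

```sql
-- Unique secondary index (USI): typically a two-AMP access path
CREATE UNIQUE INDEX (department_name) ON department;

-- Non-unique secondary index (NUSI): all-AMP index access
CREATE INDEX (budget_amount) ON department;

-- Secondary indexes can be dropped at any time when no longer useful
DROP INDEX (budget_amount) ON department;
```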
83.What is difference between Primary Key and Primary Index
84.What is difference between database and user in Teradata. what are the things you can do or can
not do in both.
Both may own objects such as tables, views, macros, procedures, and functions. Both users and databases
may hold privileges. However, only users may log on, establish a session with the Teradata Database, and
submit requests.
A user performs actions whereas a database is passive. Users have passwords and startup strings;
databases do not. Users can log on to the Teradata Database, establish sessions, and submit SQL
statements; databases cannot.
Creator privileges are associated only with a user because only a user can log on and submit a CREATE
statement. Implicit privileges are associated with either a database or a user because each can hold an
object and an object is owned by the named space in which it resides.
85.What is Checkpoint ?
86.When do you use BTEQ. What other softwares have you used or can we use rather than BTEQ.
When the query is performing operations on a smaller amount of data in a table, we go for BTEQ.
It supports any kind of SQL operation: SELECT, UPDATE, INSERT and DELETE.
It can be used for import, export and reporting purposes.
Macros and stored procedures can also be run using BTEQ.
The other utilities which we can use instead of BTEQ for loading purposes are FastLoad and MultiLoad,
and for exporting, FastExport. But these are used when accessing large amounts of data.
87.How many type of files have you loaded and their differences. (Fixed and Variable) ?
88. How do you execute your jobs in a Teradata environment?
In a channel environment, i.e. mainframes, the load utilities can be executed through JCL.
In a network environment, i.e. from a command prompt, the load scripts can be run through the following
command:
<utility name> <scriptname>
89.What was the environment of your latest project (Number of Amps, Nodes, Teradata Server
Number etc)
Number of Amps production and integration 24 development 12
Number of nodes - production and integration 4 development 2
90. What is the process to restart the MultiLoad if it fails?
If MLoad failed in the acquisition phase, just rerun the job.
If MLoad failed in the application phase:
a) Try to drop the error tables, work tables and log tables, release the MLoad lock if required, and
resubmit the job from .BEGIN IMPORT onwards.
b) If your table is fallback protected, you need to make sure it is taken out of fallback and use
RELEASE MLOAD ... IN APPLY SQL. Then resubmit the job.
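A hedged sketch of the release and cleanup statements mentioned above (database and table names are illustrative; the utility-table prefixes follow common MultiLoad defaults):

```sql
-- Release the MLoad lock after a failure in the acquisition phase
RELEASE MLOAD mydb.target_table;

-- After a failure in the application phase, the stronger form is needed
RELEASE MLOAD mydb.target_table IN APPLY;

-- Then drop the utility tables before rerunning the job
DROP TABLE mydb.WT_target_table;  -- work table
DROP TABLE mydb.ET_target_table;  -- error table
DROP TABLE mydb.UV_target_table;  -- uniqueness violation table
```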
93. What different functions do you use in BTEQ (ERRORCODE, ERRORLEVEL, etc.)?
ERRORLEVEL assigns a severity to errors:
you can assign an error level (severity) for each error code returned, and
decisions can be based on the error level.
94. What is the difference between ZEROIFNULL and NULLIFZERO?
The ZEROIFNULL function returns zero when the input is NULL.
The NULLIFZERO function returns NULL when the input is zero.
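A quick sketch of both functions on literal values:

```sql
SELECT ZEROIFNULL(CAST(NULL AS INTEGER)) AS z  -- NULL becomes 0
      ,NULLIFZERO(0)                     AS n; -- 0 becomes NULL
```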
95. What is RANGE_N?
RANGE_N is defined on a partitioned primary index to specify the ranges of values of a column that
should be assigned to partitions.
The number of partitions = the number of ranges specified + NO RANGE + UNKNOWN
NO RANGE - if the value does not belong to any range
UNKNOWN - for values like nulls
96. Explain PPI.
PPI: Partitioned Primary Indexes are created to divide a table into partitions based on ranges or
values, as required. The data is first hashed to the AMPs, then stored on each AMP ordered by
partition. When rows are retrieved for a single partition or multiple partitions, it is an all-AMP
operation but not a full-table scan. This is especially effective for larger tables partitioned on a
date. There is no extra overhead on the system (no special tables are created, etc.).
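A hedged sketch of a PPI defined with RANGE_N (table, columns and date range are illustrative):

```sql
CREATE TABLE sales
( sale_id INTEGER NOT NULL,
  sale_dt DATE NOT NULL,
  amount  DECIMAL(10,2))
PRIMARY INDEX (sale_id)
PARTITION BY RANGE_N (
   sale_dt BETWEEN DATE '2004-01-01' AND DATE '2004-12-31'
           EACH INTERVAL '1' MONTH,
   NO RANGE, UNKNOWN);  -- extra partitions for out-of-range and NULL dates
```

A query constrained on sale_dt can then touch only the relevant monthly partitions instead of scanning the whole table.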
97. What is casting in Teradata?
CAST converts data from one data type to another. The target type in a CAST is written the same way as
a type specification in DDL.
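A minimal sketch of CAST (the literal values are illustrative):

```sql
SELECT CAST('123' AS INTEGER)         AS to_int
      ,CAST(3.789 AS DECIMAL(5,1))    AS to_dec   -- reduced to one decimal place
      ,CAST(CURRENT_DATE AS CHAR(10)) AS to_char;
```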
Operator     Returns
UNION        All distinct rows selected by either query.
UNION ALL    All rows selected by either query, including duplicates.
INTERSECT    All distinct rows selected by both queries.
MINUS        All distinct rows selected by the first query but not the second.
UNION Example
The following statement combines the results with the UNION operator, which eliminates duplicate selected
rows. This statement shows that you must match datatype (using the TO_DATE and TO_NUMBER
functions) when columns do not exist in one or the other table:
SELECT part, partnum, to_date(null) date_in FROM orders_list1
UNION
SELECT part, to_number(null), date_in FROM orders_list2;
PART        PARTNUM  DATE_IN
----------  -------  --------
SPARKPLUG   3323165
SPARKPLUG            10/24/98
FUEL PUMP   3323162
FUEL PUMP            12/24/99
TAILPIPE    1332999
TAILPIPE             01/01/01
CRANKSHAFT  9394991
CRANKSHAFT           09/12/02
SELECT part
FROM orders_list1
UNION
SELECT part
FROM orders_list2;
PART
----------
SPARKPLUG
FUEL PUMP
TAILPIPE
CRANKSHAFT
MINUS Example
The following statement combines results with the MINUS operator, which returns only rows returned by the
first query but not by the second:
SELECT part
FROM orders_list1
MINUS
SELECT part
FROM orders_list2;
PART
----------
SPARKPLUG
FUEL PUMP
The table got loaded with wrong data using FastLoad and it failed. The error message shown was:
RDBMS error 2652: Operation not allowed: _db_._table_ is being Loaded. How to release the lock on
this table?
Even when the data got loaded completely and the table is still locked, submit another FastLoad script
with BEGIN LOADING and END LOADING statements alone.
I need to create a delimited file using fastexport. As fast export do not support delimited format, so I
have written the following select to get the delimited output:
select
trim(col1) || '|' ||
trim(col2) || '|' ||
trim(col3) || '|' || ...........
...............................
trim(col50)
from table
but the above script prefixes each line with 2 junk characters.
How can I get the data without the junk characters? (The two bytes are the record-length indicator
that FastExport writes in RECORD mode.)
When the FastLoad checkpoint value is <= 60 or > 60, how does that matter?
When the checkpoint interval is <= 60, it indicates a time interval in minutes. If the value
is more than 60, it is treated as a number of records, not a time.
123. I am loading a delimited flat file with a time format as the following:
HH:MM PM/AM
Examples would be :
9:45 AM
10:25 PM
And there is no zero if the hours is a single integer value.
Is there any way that I would get the mload acquisition phase count in the mload script? MLOAD
support environment provides different variables (total ins, upd, del etc.) at the application phase,
but not at the acquisition phase.
Is there any way other than scan the log file?
There are various system variables available for the same:
SYSAPLYCNT
SYSNOAPLYCNT
SYSRCDCNT
SYSRJCTCNT
124. I have this requirement: when an error table gets generated during the MLoad, I want to send an
email. How can I achieve this?
After the MLoad, use a BTEQ script to query the error table; if rows are present, quit with some value,
say '99', and use your OS to send mail when the return code is 99.
I am using the following syntax to logon to Teradata Demo thru BTEQ/BTEQWin:
.logon demotdat/dbc,dbc;
and having the following error:
*** Error: Invalid logon!
*** Total elapsed time was 1 second.
Teradata BTEQ 08.02.00.00 for WIN32. Enter your logon or BTEQ command:
The hosts file shows the following:
127.0.0.1 localhost DemoTDAT DemoTDATcop1
TROUBLESHOOTING
Solution : When ever you want to open a fresh batch id, first of all you should
close the existing batch id and open a fresh batch id.
2) The source is a flat file and I am staging this flat file in Teradata.
I found that the initial zeros are truncated in Teradata. What could be the
reason?
Solution: The reason is that in Teradata you have defined the column datatype
as INTEGER; that is why the initial zeros are truncated. So, change the target table
data type to VARCHAR; the VARCHAR datatype won't truncate the initial zeros.
3) Can't determine current batch ID for Data Source 47.
Solution : For any fresh stage load you should open a batch id for the current
data source id.
4) Unique Primary key violation CFDW_ECTL_CURRENT_BATCH table.
Solution : In CFDW_ECTL_CURRENT_BATCH table unique primary key defined
on ECTL_DATA_SRCE_ID,
ECTL_DATA_SRCE_INST_ID columns. At any point of time you should have only
one record for ECTL_DATA_SRCE_ID,
ECTL_DATA_SRCE_INST_ID columns.
5) Can't insert a NULL value in a NOT NULL column.
Solution: First find all the NOT NULL columns in the target table and cross-verify
them with the corresponding source columns to identify which source column supplies the NULLs.
other column (other primary key columns in spec) as the primary key and I will
check for the duplicate records. If I didn't get any duplicates, I will ask the
modeller to add this column as the primary key.
13) In Teradata the error is mentioned as: no more room in database.
Solution: I spoke with the DBA to add space for that database.
14) Though the column is available in the target table, when I am trying to load
using MLoad, it shows that the column is not available in the table. Why?
Solution: The loading process was happening through a view, and the view
had not been refreshed to add the new column; hence the error message. So,
refresh the view definition to add the new column.
15) When deleting from the target table, though I wanted to delete only some data,
by mistake all the data got deleted from the development table.
Solution: Add ECTL_DATA_SRCE_ID and PGM_ID in the WHERE clause of the
query.
16) While updating the target table, it shows an error message saying
multiple rows are trying to update a single row.
Solution : There are duplicates in the table matching the WHERE
condition of the update query. These duplicate records need to be eliminated.
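To locate the duplicates driving the error, a query along these lines can help (the source table and key column are hypothetical):

```sql
-- List source rows whose join key is not unique; each such key makes
-- the UPDATE try to change a single target row more than once.
SELECT src_key
     , COUNT(*) AS dup_cnt
FROM   src_updates
GROUP BY src_key
HAVING COUNT(*) > 1;
```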
17) I have a file with a header, data records, and a trailer. The data records are
delimited with commas, while the header and trailer are fixed-width and
start with 'HDR' and 'TRA' respectively.
I need to skip the header and trailer while loading the file with MultiLoad.
Please help me in this case.
Solution : Code the MLOAD utility to consider only the data records, excluding the
header and trailer records:
APPLY label WHERE REC_TD_IN NOT IN ('HDR','TRA')
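In context, the APPLY clause sits in the .IMPORT section of the MultiLoad script; a sketch (the file, layout, and label names other than REC_TD_IN are hypothetical):

```
.IMPORT INFILE sales_file
    LAYOUT file_layout
    APPLY ins_label WHERE REC_TD_IN NOT IN ('HDR','TRA');
```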
*Teradata decides by itself whether to use an index or not - if you are not careful you spend time in
table updates keeping up an index which is not used at all (one cannot give the query optimizer hints
to use some index - though collecting statistics may affect the optimizer's strategy).
*In the MP-RAS environment, look at the script "/etc/gsc/bin/perflook.sh". This will provide a
system-wide snapshot in a series of files. The GSC uses this data for incident analysis.
* When using an index, one must make sure that the index condition is met in the subqueries (using
IN, nested queries, or derived tables).
* An indication of proper index use is the explain-log entry "a ROW HASH MATCH SCAN
across ALL-AMPS".
* If the index is not used, the result of the analysis is a 'FULL TABLE SCAN', where the
elapsed time grows as the size of the history table grows.
* Keeping up index information is a time/space-consuming issue. Sometimes Teradata does much
better when you "manually" imitate the index by just building it from scratch.
* Keeping up a join index might help, but you cannot MultiLoad to a table which is part of a join
index - loading with TPump or pure SQL is OK but does not perform as well. Dropping and recreating a join index on a big table takes time and space.
* Watch out when your Teradata "explain" gives 25 steps for your query (even without the update of the
results) and the actual query is a join of six or more tables.
Case e.g.
We had already given up updating the secondary indexes - because we have not had much use for
them.
After some trial and error we settled on a strategy where the actual "purchase frequency
analysis" is never made "directly" against the history table.
Instead:
1) There is a "one-shot" run to build the initial "customer's previous purchase" from the "purchase
history" - it takes time, but that time is saved later
2) The purchase frequency is calculated by joining the "latest purchase" with the "customer's
previous purchase".
3) When the "latest purchase" rows are inserted to the "purchase history" the "customer's previous
purchase" table is dropped and recreated by merging the "customer's previous purchase" with the
"latest purchase"
4) Following these steps, the performance is still not very fast (about 25 minutes in our two-node
system) for a batch of almost 1,000,000 latest receipts - but it is tolerable now.
(We also tested adding both the previous and latest purchases to the same table, but because its
size was on average much bigger than the pure "latest purchase", the self-join was slower in
that case.)
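The recurring steps above can be sketched in SQL (all table and column names are hypothetical stand-ins for the tables described in the text):

```sql
-- Step 2: purchase frequency, joining the latest batch with the
-- "customer's previous purchase" table.
SELECT l.cust_id
     , COUNT(*) AS purchase_cnt
FROM   latest_purchase l
JOIN   prev_purchase   p
ON     l.cust_id = p.cust_id
GROUP BY l.cust_id;

-- Step 3: append the latest purchases to the history ...
INSERT INTO purchase_history
SELECT * FROM latest_purchase;

-- ... then rebuild "previous purchase" by merging it with the latest batch.
CREATE TABLE prev_purchase_new AS
( SELECT * FROM prev_purchase
  UNION ALL
  SELECT * FROM latest_purchase
) WITH DATA;
```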
*********
And, of course, the entire workload must scale upward linearly as the demand increases, ideally
with a minimum of effort required from users and systems staff. Here's a look at some of the most
frequent questions I receive on the subject of mixed workloads and concurrency requirements.
How do I balance the work coming in across all nodes of my Teradata
configuration?
You don't. Teradata automatically balances sessions across all nodes to evenly distribute work
across the entire parallel configuration. Users connect to the system as a whole rather than a specific
node, and the system uses a balancing algorithm to assign their sessions to a node. Balancing
requires no effort from users or system administrators.
Does Teradata balance the work queries cause?
The even distribution of data is the key to parallelism and scalability in Teradata. Each query
request is sent to all units of parallelism, each of which has an even portion of the data to process,
resulting in even work distribution across the entire system.
For short queries and update flow typical of Web interactions, the optimizer recognizes that only a
single unit of parallelism is needed. A query coordinator routes the work to the unit of parallelism
needed to process the request. The hashing algorithm does not cluster related data, but spreads it out
across the entire system. For example, this month's data and even today's data is evenly distributed
across all units of parallelism, which means the work to update or look at that data is evenly
distributed.
Will many concurrent requests cause bottlenecks in query coordination?
Query coordination is carried out by a fully parallel parsing engine (PE) component. Usually, one or
more PEs are present on each node. Each PE handles the requests for a set of sessions, and sessions
are spread evenly across all configured PEs. Each PE is multithreaded, so it can handle many
requests concurrently. And each PE is independent of the others, with no cross-coordination required. The number of users logged on and requests in flight are limited only by the number
of PEs in the configuration.
How do you avoid bottlenecks when the query coordinator must retrieve
information from the data dictionary?
In Teradata, the DBMS itself manages the data dictionary. Each dictionary table is simply a
relational table, parallelized across all nodes. The same query engine that manages user workloads
also manages the dictionary access, using all nodes for processing dictionary information to spread
the load and avoid bottlenecks. The PE even caches recently used dictionary information in
memory. Because each PE has its own cache, there is no coordination overhead. The cache for each
PE learns the dictionary information most likely to be needed by the sessions assigned to it.
With a large volume of work, how can all requests execute at once?
As in any computer system, the total number of items that can execute at the same time is always
limited to the number of CPUs available. Teradata uses the scheduling services Unix and NT
provide to handle all the threads of execution running concurrently. Some requests might also exist
on other queues inside the system, waiting for I/O from the disk or a message from the BYNET, for
example. Each work item runs in a thread; each thread gets a turn at the CPU until it needs to wait
for some external event or until it completes the current work. Teradata configures several units of
parallelism in each SMP node. Each unit of parallelism contains many threads of execution that
aren't restricted to a particular CPU; therefore, every thread gets to compete equally for the CPUs in
and will be executed first. I/O is managed by priority. Data warehouse workloads are heavy I/O
users, so a large query performing a lot of I/O could hold up a short, high-priority request. PSF puts
the high-priority request I/Os to the head of the queue, helping to deliver response time goals.
Data warehouse databases often set the system environment to allow for fast scans.
Does Teradata performance suffer when the short work is mixed in?
Because Teradata was designed to handle a high volume of concurrent queries, it doesn't count on
sequential scans to produce high performance for queries. Although other DBMS products see a
large fall in request performance when they go from a single large query to multiple queries or
when a mixed workload is applied, Teradata sees no such performance change. Teradata never plans
on sequential access in the first place. In fact, Teradata doesn't even store the data for sequential
accesses. Therefore, random accesses from many concurrent requests are just business as usual.
Sync scan algorithms provide additional optimization. When multiple concurrent requests are
scanning or joining the same table, their I/O is piggybacked so that only a single I/O is performed to
the disk. Multiple concurrent queries can run without increasing the physical I/O load, leaving the
I/O bandwidth available for other parts of the workload.
What if work demand exceeds Teradata's capabilities?
There are limits to how much work the engine can handle. A successful data warehouse will almost
certainly create a demand for service that is greater than the total processing power available on the
system. Teradata always puts into execution any work presented to the DBMS.
If the total demand is greater than the total resources, then controls must be in place before the work
enters the DBMS. When your warehouse reaches this stage, you can use Database Query Manager
(DBQM) to manage the flow of user requests into the warehouse. DBQM, inserted between the
users' ODBC applications and the DBMS, evaluates each request and then applies a set of rules
created by the system administrator. If the request violates any of the rules, DBQM notifies the user
that the request is denied or deferred to a later time for execution.
Rules can include, for example, system use levels, query cost parameters, time of day, objects
accessed, and authorized users. You can read more about DBQM in a recent Teradata Review article
("Field Report: DBQM," Summer 1999, available online at
www.teradatareview.com/summer99/truet.html).
How do administrators and DBAs stay on top of complex mixed workloads?
The Teradata Manager utility provides a single operational system view for administrators and
DBAs. The tool provides real-time performance, logged past performance, users and queries
currently executing, management of the schema, and more.
STAYING ACTIVE
The active warehouse is a busy place. It must handle all decision making for the organization,
including strategic, long-range data mining queries, tactical decisions for daily operations, and
event-based decisions necessary for effective Web sites. Nevertheless, managing this diversity of
work does not require a staff of hundreds running a complex architecture with multiple data marts,
operational data stores, and a multitude of feeds. It simply requires a database management system
that can manage multiple workloads at varying service levels, scale with the business, and provide
24x7 availability year round with a minimum of operational staff.
2. Use COMPRESS on whichever columns possible. This helps in reducing I/O and hence
improves performance, especially for columns having lots of NULL values or a few well-known
values.
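A sketch of multi-value compression (the table, columns, and compressed values are hypothetical):

```sql
-- Compress the most frequent values so they occupy no row space;
-- NULLs are also compressed when COMPRESS is specified.
CREATE TABLE customer
( cust_id   INTEGER
, region    VARCHAR(10) COMPRESS ('EAST','WEST','NORTH','SOUTH')
, status    CHAR(1)     COMPRESS ('A','I')
)
PRIMARY INDEX (cust_id);
```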
3. COLLECT STATISTICS on a daily basis (after every load) in order to improve performance.
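For example (the table, column, and index names are hypothetical):

```sql
-- Refresh statistics after the daily load so the optimizer
-- sees current demographics.
COLLECT STATISTICS ON tgt_sales COLUMN (sale_dt);
COLLECT STATISTICS ON tgt_sales INDEX (sale_id);
```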
4. Drop and recreate secondary indexes before and after every load. This helps in improving load
performance (if critical).
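Sketched (the index and table names are hypothetical):

```sql
-- Before the load: drop the secondary index so rows insert faster.
DROP INDEX idx_sale_dt ON tgt_sales;

-- After the load: recreate it for query performance.
CREATE INDEX idx_sale_dt (sale_dt) ON tgt_sales;
```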
5. Regularly check for EVEN data distribution across all AMPs using Teradata Manager or through
Queryman.
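A common SQL check for skew uses the hash functions (the table and primary index column are hypothetical):

```sql
-- Count rows per AMP for the table's primary index column;
-- roughly equal counts per AMP mean even distribution.
SELECT HASHAMP(HASHBUCKET(HASHROW(cust_id))) AS amp_no
     , COUNT(*)                              AS row_cnt
FROM   customer
GROUP BY 1
ORDER BY 2 DESC;
```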
6. Check the combination of CPUs, AMPs, PEs, and nodes for performance optimization.
Each AMP can handle 80 tasks and each PE can handle 120 sessions.
MLOAD - Customize the number of sessions for each MLOAD job depending on the
number of concurrent MLOAD jobs and the
number of PEs in the system.
e.g.
SCENARIO 1
# of AMPs = 10
# of max load jobs handled by Teradata = 5 (parameter which can be
set, values 5 to 15)
# of sessions per load job = 1 (parameter that can be set globally
or at each MLOAD script level)
# of PEs = 1
So 10*5*1 = 50, + 10 (2 per job overhead) = 60 max sessions
on the Teradata box.
This is LESS than 120, which is the max # of sessions a PE can handle.
SCENARIO 2
# of AMPs = 16
# of max load jobs handled by Teradata = 15
# of sessions per load job = 1
# of PEs = 1
So 16*15*1 = 240, + 30 (2 per job overhead) = 270 (max sessions on
the Teradata box).
This is MORE than 120, which is the max # of sessions a PE can handle.
Hence MLOAD jobs fail, in spite of the use of the SLEEP & TENACITY
features.
Use the SLEEP and TENACITY features of MLOAD for scheduling MLOAD jobs.
Check the TABLEWAIT parameter. If omitted, it can cause immediate load job failure if you
submit two MLOAD jobs that try to update the same table.
JOIN INDEX - Check the limit on the number of fields for a join index (max 16 fields). It may
vary by version.
A join index is like building the table physically. Hence it has advantages like BETTER
performance, since data is physically stored and not calculated ON THE FLY. The cons are
loading time (MLOAD needs join indexes to be dropped before loading) and additional
space, since it is a physical table.
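A minimal join index sketch (the tables and columns are hypothetical):

```sql
-- Pre-join orders with customers so queries on the pair avoid the
-- join at run time; the result set is physically stored.
CREATE JOIN INDEX ji_cust_order AS
SELECT c.cust_id
     , c.cust_name
     , o.order_id
     , o.order_amt
FROM   customer c
JOIN   orders   o
ON     c.cust_id = o.cust_id
PRIMARY INDEX (cust_id);
```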