
UAT vs SIT

Objective of UAT: to test the application from the end user's perspective and to verify the business rules.
Objective of SIT: to test the application for the correctness of how it has been built and how its interfaces work.
Reference document for UAT: BRD.
Reference document for SIT: SRS or FS.
Environment for UAT: simulated live environment (production site).
Environment for SIT: test environment (developer's site).
Data used for UAT: simulated live data.
Data used for SIT: dummy data.

ROWNUM
Conditions testing for ROWNUM values greater than a positive integer are always false. For example, this query returns no rows:
SELECT * FROM employees WHERE ROWNUM > 1;

The first row fetched is assigned a ROWNUM of 1 and makes the condition false. The second row to be fetched is now the first row and is also assigned a ROWNUM of 1 and makes the condition false. All rows subsequently fail to satisfy the condition, so no rows are returned.
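
ROWNUM can still limit results as long as the condition can be satisfied by the first row fetched, and top-N queries work by ordering inside a subquery first. A sketch, assuming the same employees table (the salary column is an assumption):

```sql
-- Works: the first row fetched satisfies ROWNUM <= 5
SELECT * FROM employees WHERE ROWNUM <= 5;

-- Top-N: order in a subquery, then filter on ROWNUM outside it
SELECT *
FROM   (SELECT * FROM employees ORDER BY salary DESC)
WHERE  ROWNUM <= 5;
```

Filtering on ROWNUM in the same query block as the ORDER BY would assign row numbers before the sort, which is why the subquery form is used.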

Oracle Stored Procedures - Debug a procedure


Let's introduce a compilation error into your procedure declaration. Open your skeleton.sql file in Notepad. Replace the DBMS_OUTPUT.PUT_LINE procedure call with a NULLL statement (notice the three "l"s!), an invalid PL/SQL statement. Your program should look like this:

CREATE OR REPLACE PROCEDURE skeleton
IS
BEGIN
  NULLL;
END;

Save your file as skeleton.sql. From SQL*Plus, open your skeleton.sql file. SQL*Plus loads the contents of your skeleton.sql file into the SQL*Plus buffer (a memory area) and presents the SQL*Plus command prompt:

SQL>
  1  CREATE OR REPLACE PROCEDURE skeleton
  2  IS
  3  BEGIN
  4    NULLL;
  5* END;
SQL>

Execute the contents of the SQL*Plus buffer: type a forward slash and press <enter>, like this:

SQL> /

Your procedure is compiled and saved in the database. However, SQL*Plus warns us of compilation errors:

Warning: Procedure created with compilation errors.

Let's see the compilation errors. First, we need to run two SET commands to ensure the SQL*Plus buffer does not overflow. At the SQL*Plus command prompt, type:

SQL> SET ARRAYSIZE 1
SQL> SET MAXDATA 60000
SQL>

Again, SQL*Plus remains secretive about the result. Let's see the errors. At the SQL*Plus command prompt, type:

SQL> SHOW ERRORS PROCEDURE skeleton

You should see the compilation error:

LINE/COL  ERROR
--------  ----------------------------------------------
4/3       PLS-00201: identifier 'NULLL' must be declared
4/3       PL/SQL: Statement ignored

Oracle doesn't recognize the NULLL statement with the three "l"s. But Oracle won't hold it against you. Change your procedure declaration in Notepad by inserting the proper NULL statement, and follow the steps to create your procedure again on the Oracle database.

What if you want to completely remove a procedure from your database? That's what we'll cover next.
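
The removal itself is a single statement; as a sketch, using the skeleton procedure from above:

```sql
-- Remove the skeleton procedure from the database
DROP PROCEDURE skeleton;
```

Once dropped, the procedure's source is gone from the data dictionary; you would need your skeleton.sql file to recreate it.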

Unix commands
1 Log In Session

1.1 Log In

Enter your username at the login: prompt. Be careful - Unix is case sensitive. Enter your password at the password: prompt.
1.2 Change Password
passwd

1.3 Log Out


logout

or exit

2 File System
2.1 Create a File
cat > file    enter text and end with ctrl-D
vi file       edit file using the vi editor

2.2 Make a Directory


mkdir directory-name

2.3 Display File Contents


cat file     display contents of file
more file    display contents of file one screenful at a time
view file    a read-only version of vi
less file    similar to, but more powerful than, more. See the man page for more information on less.

2.4 Comparing Files


diff file1 file2    line by line comparison
cmp file1 file2     byte by byte comparison

2.5 Changing Access Modes


chmod mode file1 file2 ...

chmod -R mode dir    (changes all files in dir)

Mode settings:
u    user (owner)
g    group
o    other
+    add permission
-    remove permission
r    read
w    write
x    execute

Example: chmod go+rwx public.html adds read, write, and execute permissions for group and other on public.html.
2.6 List Files and Directories
ls       list contents of directory
ls -a    include files with "." (dot files)
ls -l    list contents in long format (show modes)

2.7 Move (or Rename) Files and Directories


mv src-file dest-file    rename src-file to dest-file
mv src-file dest-dir     move a file into a directory
mv src-dir dest-dir      rename src-dir, or move it into dest-dir
mv -i src dest           move & prompt before overwriting

2.8 Copy Files


cp src-file dest-file    copy src-file to dest-file
cp src-file dest-dir     copy a file into a directory
cp -R src-dir dest-dir   copy one directory into another
cp -i src dest           copy & prompt before overwriting

2.9 Remove File


rm file       remove (delete) a file
rmdir dir     remove an empty directory
rm -r dir     remove a directory and its contents
rm -i file    remove file, but prompt before deleting

2.10 Compressing files


compress file        encode file, replacing it with file.Z
zcat file.Z          display compressed file
uncompress file.Z    decode file.Z, replacing it with file

2.11 Find Name of Current Directory


pwd    display absolute path of working directory

2.12 Pathnames

simple: one filename or directory name, for accessing a local file or directory. Example: foo.c
absolute: list of directory names from the root directory to the desired file or directory, each separated by /. Example: /src/shared
relative: list of directory names from the working directory to the desired file or directory, each separated by /. Example: Mail/inbox/23
2.13 Directory Abbreviations
~            your home (login) directory
~username    another user's home directory
.            working (current) directory
..           parent of working directory
../..        parent of parent directory

2.14 Change Working Directory


cd /                              go to the root directory
cd                                go to your login (home) directory
cd ~username                      go to username's login (home) directory (not allowed in the Bourne shell)
cd ~username/directory            go to username's indicated directory
cd ..                             go up one directory level from here
cd ../..                          go up two directory levels from here
cd /full/path/name/from/root      change directory to the absolute path named (note the leading slash)
cd path/from/current/directory    change directory to the path relative to here (note there is no leading slash)

3.0 Commands
3.1 Date
date    display date and time

3.2 Wild Cards


?    single character wild card
*    arbitrary number of characters

3.3 Printing
lpr file              print file on default printer
lpr -Pprinter file    print file on printer
lpr -c# file          print # copies of file
lpr -d file           interpret file as a dvi file
lpq                   show print queue (-Pprinter also valid)
lprm -#               remove print request # (listed with lpq)

3.4 Redirection
command > file     direct output of command to file instead of to standard output (screen), replacing the current contents of file
command >> file    as above, except output is appended to the current contents of file
command < file     command receives input from file instead of from standard input (keyboard)
cmd1 | cmd2        "pipe" output of cmd1 to input of cmd2
script file        log everything displayed on the terminal to file; end with exit

4 Search Files
grep string filelist       show lines containing string in any file in filelist
grep -v string filelist    show lines not containing string
grep -i string filelist    show lines containing string, ignoring case

5 Information on Users
finger user (or finger user@machine)    get information on a user
finger @machine                         list users on machine
who                                     list current users

6 Timesavers
6.1 Aliases
alias string command    abbreviate command to string

6.2 History: Command Repetition

Commands may be recalled:

history    show command history
!num       repeat command with history number num
!str       repeat last command beginning with string str
!!         repeat entire last command line
!$         repeat last word of last command line

7.0 Process and Job Control

7.1 Important Terms


pid       Process IDentification number. See section 7.2.
job-id    Job identification number. See section 7.2.

7.2 Display Process and/or Job IDs


ps       report processes and pid numbers
ps gx    as above, but include "hidden" processes
jobs     report current jobs and job id numbers

7.3 Stop (Suspend) a Job


ctrl-Z    suspend the current (foreground) job. NOTE: the process still exists!

7.4 Run a Job in the Background


To start a job in background add & to the end of the command. Example: xv foo.gif &

To force a running job into the background:


ctrl-Z    stop the job
bg        "push" the job into the background

7.5 Bring a Job to the Foreground


fg            bring a job to the foreground
fg %job-id    foreground by job-id (see 7.2)

7.6 Kill a Process or Job


ctrl-C                 kill the foreground process
kill -KILL pid#        kill the process with pid# (see 7.2 for displaying pids)
kill -KILL %job-id#    kill the job with job-id# (see 7.2 for displaying job-ids)

8.0 Mail Handler (MH)


MH commands are issued directly to the terminal.
inc          incorporate new mail
scan         show list of mail messages
show         show current message
next         show next message
prev         show previous message
repl         reply to current message
forw         forward current message
comp         compose a mail message
rmm          remove current mail message
cmd -help    print help on MH command cmd

The file .mh_profile is used to enable/disable MH features. See man mh-profile for more information.

9.0 On-line Assistance

On-line Documentation
man command-name    display on-line manual pages
man -k string       list one-line summaries of manual pages containing string

DATA WAREHOUSE TESTING IS DIFFERENT

Almost all the work in populating a data warehouse happens through batch runs. Therefore the testing is different from what is done in transaction systems. Data warehouse testing differs from typical transaction-system testing on the following counts:
User-Triggered vs. System triggered

Most production/source-system testing covers the processing of individual transactions, which are driven by some input from the users (an application form, a servicing request, etc.). Very few test cycles cover system-triggered scenarios (like billing or valuation). In a data warehouse, most of the testing is system triggered: the extraction, transformation and loading (ETL) scripts, the view refresh scripts, and so on.

Therefore data warehouse testing is typically divided into two parts: 'back-end' testing, where the source systems' data is compared to the end-result data in the loaded area, and 'front-end' testing, where users check the data by comparing their MIS with the data displayed by end-user tools such as OLAP.
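
A minimal back-end check of this kind can be sketched in SQL; the staging and fact table names and the sale_amount column below are hypothetical, not from the original:

```sql
-- Compare row counts and a key aggregate between the source staging
-- area and the loaded warehouse fact table (names are hypothetical).
SELECT 'source' AS side, COUNT(*) AS row_cnt, SUM(sale_amount) AS total_sales
FROM   stg_sales
UNION ALL
SELECT 'target', COUNT(*), SUM(sale_amount)
FROM   dw_fact_sales;
```

If the two rows disagree, rows or amounts were lost or altered somewhere in the ETL chain and need to be traced.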
Batch vs. online gratification

This is something which makes it a challenge to retain users' interest. A transaction system provides instant, or at least overnight, gratification: when users enter a transaction, it is processed either online or at most via an overnight batch. In the case of a data warehouse, most of the action happens in the back-end, and users have to trace individual transactions through to the MIS and the views produced by the OLAP tools. This is the same challenge you face when you ask users to test the mammoth month-end reports and financial statements churned out by transaction systems.
Volume of Test Data

The test data in a transaction system is a very small sample of the overall production data. Typically, to keep matters simple, we include only as many test cases as are needed to comprehensively cover all possible test scenarios in a limited set of test data. A data warehouse typically has large test data, as one tries to fill up the maximum possible combinations and permutations of dimensions and facts. For example, if you are testing the location dimension, you would like the location-wise sales revenue report to have some revenue figures for most of the 100 cities and the 44 states. This means you have to have thousands of sales transactions at the sales-office level (assuming that sales office is the lowest level of granularity for the location dimension).
Possible scenarios/ Test Cases

If a transaction system has, say, a hundred different scenarios, the valid and possible combinations of those scenarios are not unlimited. However, in the case of a data warehouse, the permutations and combinations one can possibly test are virtually unlimited, because the core objective of a data warehouse is to allow all possible views of the data. In other words, 'you can never fully test a data warehouse'. Therefore one has to be creative in designing the test scenarios to gain a high level of confidence.
Test Data Preparation

This is linked to the points on possible test scenarios and volume of data. Given that a data warehouse needs lots of both, the effort required to prepare the test data is much greater.
Programming for testing challenge

In the case of transaction systems, users and business analysts typically test the output of the system. However, in the case of a data warehouse, as most of the action happens at the back-end, most of the data-quality and ETL ('extraction, transformation and loading') testing is done by running separate stand-alone scripts. These scripts compare pre-transformation to post-transformation data (say, a comparison of aggregates) and throw out the discrepancies. Users' role comes into play when their help is needed to analyze these (if the designers or business analysts are not able to figure them out).
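
One such stand-alone reconciliation script might, for example, look for rows lost between the pre- and post-transformation tables; the table and key names below are hypothetical:

```sql
-- Keys present before transformation but missing afterwards
-- (stg_orders, dw_orders and order_id are hypothetical names).
SELECT order_id FROM stg_orders
MINUS
SELECT order_id FROM dw_orders;
```

Any rows this query returns are candidates for analysis by the designers, business analysts, or ultimately the users.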

Oracle Database Utilities


The Oracle Database 11g Release 2 utilities, comprising Oracle Data Pump and Oracle SQL*Loader, are a set of tools that allow fast and easy data transfer, maintenance, and administration of Oracle databases.

Oracle Data Pump


Oracle Data Pump is a feature of Oracle Database 11g Release 2 that enables very fast bulk data and metadata movement between Oracle databases. Oracle Data Pump provides new high-speed, parallel Export and Import utilities (expdp and impdp) as well as a Web-based Oracle Enterprise Manager interface. The Data Pump Export and Import utilities are typically much faster than the original Export and Import utilities: a single thread of Data Pump Export is about twice as fast as original Export, while Data Pump Import is 15-45 times faster than original Import. Data Pump jobs can be restarted without loss of data, whether the stoppage was voluntary or involuntary. Data Pump jobs support fine-grained object selection: virtually any type of object can be included in or excluded from a Data Pump job. Data Pump also supports the ability to load one instance directly from another (network import) and to unload a remote instance (network export).

SQL*Loader
SQL*Loader is a high-speed data loading utility that loads data from external files into tables in an Oracle database. SQL*Loader accepts input data in a variety of formats, can perform filtering, and can load data into multiple Oracle database tables during the same load session. SQL*Loader provides three methods for loading data: Conventional Path Load, Direct Path Load, and External Table Load.

External Tables
A feature has been added to external tables that allows users to preprocess input data before it is sent to the access driver. The ability to manipulate input data with a preprocessor program results in additional loadable data formats, which greatly enhances the flexibility and processing power of external tables.

The types of preprocessor programs that can be used are versatile, ranging from system commands and user-generated binaries (for example, a C program) to user-supplied shell scripts. Because the user supplies the program that preprocesses the data, it can be tailored to the user's specific needs. This means that the number of loadable formats is restricted only by the ability to manipulate the original data set.
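
A minimal sketch of the PREPROCESSOR clause; the directory objects, file name, and column layout below are assumptions for illustration, not from the original:

```sql
-- External table whose compressed input file is decompressed on the fly
-- by zcat before the access driver reads it. The directory objects
-- data_dir and exec_dir, the file sales.csv.gz, and the columns are
-- hypothetical and must exist/be granted in a real database.
CREATE TABLE sales_ext (
  sale_id  NUMBER,
  amount   NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    PREPROCESSOR exec_dir:'zcat'
    FIELDS TERMINATED BY ','
  )
  LOCATION ('sales.csv.gz')
);
```

Querying sales_ext then runs the preprocessor transparently, so the compressed file never needs to be unpacked on disk first.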
