
Original Export and Import Versus Data Pump Export and Import

If you have worked with a pre-10g database, you are probably familiar with the original exp/imp utilities of Oracle Database.
Oracle 10g introduces a new feature called Data Pump Export and Import. Data Pump Export/Import differs from
original Export/Import in the following ways:
1)Expdp/Impdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as
BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.
3)Expdp/Impdp use parallel execution rather than a single stream of execution, for improved performance.
4)In Data Pump, running expdp FULL=y followed by impdp SCHEMAS=prod gives the same result as running
expdp SCHEMAS=prod followed by impdp FULL=y; original Export/Import does not always exhibit this behavior.
5)Expdp/Impdp access files on the server rather than on the client, so a directory object must exist on the
server (see the setup sketch after this list).
6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.
7)Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas with original
Export/Import we could compress the dump directly by writing it to a pipe.
8)The Data Pump method for moving data between different database versions differs from the method used
by original Export/Import.
9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an
active constraint, the load is discontinued and no data is loaded. This differs from original Import, which logs
any rows that are in violation and continues with the load.
10)Expdp/Impdp consume more undo tablespace than original Export and Import.
11)If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas
the original Import utility loaded data in such a way that even if a table had compression enabled, the data was
not compressed upon import.
12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions
that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in
which character set conversions are not supported under Data Pump is when using transportable tablespaces.
13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the
COMPRESS parameter. Instead, extents are reallocated according to the storage parameters of the target table.
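Because Data Pump reads and writes dump files on the database server (point 5), a directory object must exist
before any of the examples below will run. A minimal setup sketch, assuming a DBA session and a hypothetical
server path /u01/app/oracle/dpdump; the object name dpump_dir1 matches the examples that follow:
SQL> CREATE OR REPLACE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpdump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
The path must already exist on the database server and be writable by the Oracle software owner; Data Pump
will not create it.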

Data Pump Export Handy Examples: An Analytical Backup of Your Data

Here I want to show some handy examples of how to run Data Pump Export.

When we take a conventional backup of our database, whether with RMAN or with hot or cold backup methods,
the output (backup) file contains a block-for-block copy of the source files. For these files to be usable for
restoring the data, two conditions must be met:

The platform must not change.

The backup process must be able to identify the Oracle blocks in the backup files.

Interestingly, a dump export can serve as an analytical backup: it can be imported regardless of the source
platform and Oracle database.
Let us look at some examples of how to perform an export from the database:

Performing a Table-Mode Export


Issue the following Data Pump export command to perform a table export of the tables employees and jobs from
the human resources (hr) schema:
expdp hr/hr TABLES=employees,jobs DUMPFILE=dpump_dir1:table.dmp NOLOGFILE=y
Because user hr is exporting tables in his own schema, it is not necessary to specify
the schema name for the tables. The NOLOGFILE=y parameter indicates that an Export log
file of the operation will not be generated.
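As a counterpart, a minimal import sketch for the same dump file; TABLE_EXISTS_ACTION=APPEND is an
assumption about how existing tables should be handled (the default is SKIP):
> impdp hr/hr TABLES=employees,jobs DIRECTORY=dpump_dir1 DUMPFILE=table.dmp NOLOGFILE=y TABLE_EXISTS_ACTION=APPEND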

Data-Only Unload of Selected Tables and Rows


The following is the contents of a parameter file (exp.par) that you could use to perform a data-only unload of
all tables in the human resources (hr) schema except for the tables countries and regions. Only rows of the
employees table that have a department_id other than 50 are unloaded, ordered by employee_id.
DIRECTORY=dpump_dir1
DUMPFILE=dataonly.dmp
CONTENT=DATA_ONLY
EXCLUDE=TABLE:"IN ('COUNTRIES', 'REGIONS')"
QUERY=employees:"WHERE department_id !=50 ORDER BY employee_id"
You can issue the following command to execute the exp.par parameter file:
> expdp hr/hr PARFILE=exp.par
A schema-mode export (the default mode) is performed, but the CONTENT parameter effectively limits the export
to an unload of just the tables' data. The DBA previously created the directory object dpump_dir1, which points
to the directory on the server where user hr is authorized to read and write export dump files. The dump
file dataonly.dmp is created in dpump_dir1.
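Because a data-only dump contains no metadata, the target tables must already exist at import time. A minimal
import sketch for this dump file; TABLE_EXISTS_ACTION=TRUNCATE is an assumption that existing rows should be
replaced rather than appended to:
> impdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=dataonly.dmp TABLE_EXISTS_ACTION=TRUNCATE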

Estimating Disk Space Needed in a Table-Mode Export


This example shows the use of the ESTIMATE_ONLY parameter to estimate the space that would be consumed in a
table-mode export, without actually performing the export operation. Issue the following command to use the
BLOCKS method to estimate the number of bytes required to export the data in the following three tables located
in the human resources (hr) schema: employees, departments, and locations.
> expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y TABLES=employees,departments,locations LOGFILE=estimate.log
The estimate is printed in the log file and displayed on the client's standard output device. The estimate is for
table row data only; it does not include metadata.
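If current optimizer statistics exist for the tables, the estimate can be based on them instead of on block
counts; a variant sketch using the ESTIMATE parameter (BLOCKS is the default; how accurate STATISTICS is
depends on how fresh the statistics are):
> expdp hr/hr DIRECTORY=dpump_dir1 ESTIMATE_ONLY=y ESTIMATE=STATISTICS TABLES=employees,departments,locations LOGFILE=estimate.log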

Performing a Schema-Mode Export


The following command performs a schema-mode export, which is the default mode when no other mode parameter
is given. All data and metadata of the hr schema are unloaded, and the dump file expschema.dmp and the log
file expschema.log are written to the directory pointed to by dpump_dir1.

> expdp hr/hr DUMPFILE=dpump_dir1:expschema.dmp LOGFILE=dpump_dir1:expschema.log
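A privileged user can export schemas other than their own by naming them with the SCHEMAS parameter; a sketch
assuming an account holding the EXP_FULL_DATABASE role and the standard hr and oe sample schemas:
> expdp system/password SCHEMAS=hr,oe DUMPFILE=dpump_dir1:expschemas.dmp LOGFILE=dpump_dir1:expschemas.log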

Performing a Parallel Full Database Export


> expdp hr/hr FULL=y DUMPFILE=dpump_dir1:full1%U.dmp,dpump_dir2:full2%U.dmp FILESIZE=2G PARALLEL=3 LOGFILE=dpump_dir1:expfull.log JOB_NAME=expfull
Because this is a full database export, all data and metadata in the database will be exported. Dump
files full101.dmp, full201.dmp, full102.dmp, and so on will be created in a round-robin fashion in the
directories pointed to by the dpump_dir1 and dpump_dir2 directory objects. For best performance, these should
be on separate I/O channels. Each file will be up to 2 gigabytes in size, as necessary. Initially, up to three files will
be created. More files will be created, if needed. The job and master table will have a name of expfull. The log
file will be written to expfull.log in the dpump_dir1 directory.
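A matching parallel import sketch, assuming the same directory objects exist on the target database and the
importing account holds the IMP_FULL_DATABASE role:
> impdp system/password FULL=y DUMPFILE=dpump_dir1:full1%U.dmp,dpump_dir2:full2%U.dmp PARALLEL=3 LOGFILE=dpump_dir1:impfull.log JOB_NAME=impfull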

Using Interactive Mode to Stop and Reattach to a Job


While the export is running, press Ctrl+C. This starts the interactive-command interface of Data Pump Export. In
the interactive interface, logging to the terminal stops and the Export prompt is displayed.
At the Export prompt, issue the following command to stop the job:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state, and the client exits.
Enter the following command to reattach to the job you just stopped:
> expdp hr/hr ATTACH=EXPFULL
After the job status is displayed, you can issue the CONTINUE_CLIENT command to resume logging mode and
restart the expfull job.
Export> CONTINUE_CLIENT
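Other interactive commands are available at the same prompt; a sketch of a possible session (output omitted):
Export> STATUS
Export> PARALLEL=4
Export> KILL_JOB
STATUS shows the progress of the running job, PARALLEL changes the degree of parallelism on the fly, and
KILL_JOB aborts the job and deletes its master table; unlike STOP_JOB, a killed job cannot be restarted.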
