Import
If you have worked with a pre-10g Oracle database, you are probably familiar with the original exp/imp utilities. Oracle 10g introduces a new feature called Data Pump Export and Import (expdp/impdp). Data Pump export/import differs from
the original export/import. The differences are listed below.
1) Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as
BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.
2) Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.
3) Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.
4) In Data Pump, expdp full=y followed by impdp schemas=prod is equivalent to expdp schemas=prod followed by impdp
full=y, whereas original export/import does not always exhibit this behavior.
5)Expdp/Impdp access files on the server rather than on the client.
6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.
7) Sequential media, such as tapes and pipes, are not supported by Oracle Data Pump, whereas with original export/import
we could compress the dump directly by writing it to a pipe.
8) The Data Pump method for moving data between different database versions is different from the method used
by original Export/Import.
9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an
active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs
any rows that are in violation and continues with the load.
10)Expdp/Impdp consume more undo tablespace than original Export and Import.
11) If a table has compression enabled, Data Pump Import attempts to compress the data being loaded, whereas
the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not
compressed upon import.
12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions
that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in
which character set conversions are not supported under the Data Pump is when using transportable tablespaces.
13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the
COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
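The parallel execution described in point 3 can be sketched as follows; the schema name, directory object, and file names below are placeholders, not part of the original article:

```shell
# Sketch of a parallel Data Pump export, assuming a directory object
# dpump_dir1 already exists and "prod" is a placeholder schema name.
# %U in DUMPFILE is expanded to a unique two-digit number for each file
# in the dump file set, so the parallel workers can write concurrently.
expdp system SCHEMAS=prod \
      DIRECTORY=dpump_dir1 \
      DUMPFILE=prod_%U.dmp \
      PARALLEL=4 \
      LOGFILE=prod_exp.log
```

Without the %U substitution (or enough listed dump files), the PARALLEL setting cannot be fully used, since each worker needs its own file to write to.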
When we take a conventional backup of our database, whether with RMAN or with hot or cold backup methods,
the output (backup) file contains a block-by-block copy of the source files. For such files to be used to restore
data, the restore process must be able to identify the Oracle blocks inside the backup files.
Interestingly, a Data Pump export can instead serve as a logical backup: it can be imported
irrespective of platform and Oracle database version.
Let us look at some examples of how to perform an export from the database:
A schema-mode export (the default mode) is performed, but the CONTENT parameter effectively limits the export
to an unload of just the table's data. The DBA previously created the directory object dpump_dir1 which points to
the directory on the server where user hr is authorized to read and write export dump files. The dump
file dataonly.dmp is created in dpump_dir1.
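The scenario above could be carried out with commands along these lines; the directory path is a placeholder and the listing is a sketch, not the article's original example:

```shell
# As the DBA, create the directory object and let hr use it
# (the path is a placeholder for a real server-side directory):
#   SQL> CREATE DIRECTORY dpump_dir1 AS '/u01/app/oracle/dpump';
#   SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;

# Schema-mode export (the default mode for expdp), with CONTENT
# limiting the operation to an unload of table data only:
expdp hr DIRECTORY=dpump_dir1 DUMPFILE=dataonly.dmp CONTENT=DATA_ONLY
```

CONTENT accepts ALL (the default), DATA_ONLY, and METADATA_ONLY; here DATA_ONLY suppresses the metadata so only row data lands in dataonly.dmp.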