General Information |
Data Pump comes in two forms: the executables EXPDP and IMPDP in the $ORACLE_HOME/bin directory, covered on this page, and the built-in package DBMS_DATAPUMP, linked at the bottom of the page. |
Data Dictionary Objects |
DATAPUMP_DIR_OBJS | DATAPUMP_PATHMAP | DBMS_STREAMS_DATAPUMP_UTIL |
DATAPUMP_OBJECT_CONNECT | DATAPUMP_PATHS | KU$_DATAPUMP_MASTER_10_1 |
DATAPUMP_PATHMAP | DATAPUMP_PATHS_VERSION | KU$_DATAPUMP_MASTER_11_1 |
DATAPUMP_PATHS | DATAPUMP_REMAP_OBJECTS | KU$_DATAPUMP_MASTER_11_1_0_7 |
DATAPUMP_PATHS_VERSION | DATAPUMP_TABLE_DATA | KU$_DATAPUMP_MASTER_11_2 |
DATAPUMP_REMAP_OBJECTS | DATA_PUMP_DIR | KUPC$DATAPUMP_QUETAB |
DATAPUMP_TABLE_DATA | DBA_DATAPUMP_JOBS | KUPC$DATAPUMP_QUETAB_1 |
DBA_DATAPUMP_JOBS | DBA_DATAPUMP_SESSIONS | ORACLE_DATAPUMP |
DBA_DATAPUMP_SESSIONS | DBMS_DATAPUMP | USER_DATAPUMP_JOBS |
DATAPUMP_DDL_TRANSFORM_PARAMS | DBMS_DATAPUMP_UTL | V$DATAPUMP_JOB |
DATAPUMP_DIR_OBJS | DBMS_STREAMS_DATAPUMP | V$DATAPUMP_SESSION |
DATAPUMP_OBJECT_CONNECT | | |
|
Export Modes |
Mode | Description |
Full | Use the FULL parameter: The entire database is unloaded. The EXP_FULL_DATABASE role is required. |
Schema | Use the SCHEMAS parameter: The default export mode. If you have the EXP_FULL_DATABASE role, then you can specify a list of schemas and optionally include the schema definitions themselves, as well as system privilege grants to those schemas. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema. |
Table | Use the TABLES parameter. A specified set of tables, partitions, and their dependent objects are unloaded. You must have the EXP_FULL_DATABASE role to specify tables that are not in your own schema. All specified tables must reside in a single schema. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time. |
Tablespace | Use the TABLESPACES parameter. Only tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its data, metadata, and dependent objects are also unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Nonprivileged users get only the tables in their own schemas. |
Transportable Tablespace | Use the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for tables, and dependent objects, within a specified set of tablespaces are unloaded. EXP_FULL_DATABASE role required. Degree of parallelism must = 1. |
Legacy | Data Pump enters legacy mode once it determines a parameter unique to original Export is present, either on the command line or in a script. As Data Pump processes the parameter, the analogous Data Pump Export parameter is displayed. |
|
Import Modes |
Mode | Description |
Full | A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports. You must have the DATAPUMP_IMP_FULL_DATABASE role if the source is another database.
Cross-schema references are not imported for non-privileged users. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported. |
Schema | Schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the specified schemas are loaded. The source can be a full, table, tablespace, or schema-mode export dump file set or another database. If you have the DATAPUMP_IMP_FULL_DATABASE role, then a list of schemas can be specified and the schemas themselves (including system privilege grants) are created in the database in addition to the objects contained within those schemas.
Cross-schema references are not imported for non-privileged users unless the other schema is remapped to the current schema. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported. |
Table | Table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. You must have the DATAPUMP_IMP_FULL_DATABASE role to specify tables that are not in your own schema.
You can use the transportable option during a table-mode import by specifying the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter. This requires use of the NETWORK_LINK parameter, as well. |
Tablespace | Tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded, along with the dependent objects. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. For unprivileged users, objects not remapped to the current schema will not be processed. |
Transportable Tablespace | Transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The datafiles, specified by the TRANSPORT_DATAFILES parameter, must be made available from the source system for use in the target database, typically by copying them over to the target system. Encrypted columns are not supported in transportable tablespace mode. This mode requires the DATAPUMP_IMP_FULL_DATABASE role, and the degree of parallelism must be 1. |
Legacy | Data Pump enters legacy mode once it determines a parameter unique to original Import is present, either on the command line or in a script. As Data Pump processes the parameter, the analogous Data Pump Import parameter is displayed. |
|
|
Exporting Schemas |
Demo Setup | conn / as sysdba
desc dba_directories
col owner format a10
col directory_path format a70
SELECT * FROM dba_directories;
CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp'; -- default is $ORACLE_BASE/admin/<SID>/dpdump
GRANT exp_full_database TO uwclass;
Note: Verify that the environment variables ORACLE_HOME and ORACLE_SID are set properly in your shell. If they are not set then you must set them at the command line to proceed. |
| |
|
Basic Export Types |
Note: After each export, run SELECT table_name FROM user_tables in the schema that performed the export (in the following examples, uwclass or abc). You will find tables with names such as SYS_EXPORT_FULL_01 and SYS_EXPORT_SCHEMA_01. Examine their contents. |
Full Export | FULL=<N | Y> |
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo01.dmp FULL=y |
Schema Export | SCHEMAS=<schema_name [, ...]> |
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo02.dmp SCHEMAS=uwclass,scott |
Table Export | TABLES=<[schema_name.]table_name[:partition_name] [, ...]> |
expdp uwclass/uwclass DUMPFILE=data_pump_dir:demo03a.dmp TABLES=servers, serv_inst
expdp sh/sh DUMPFILE=data_pump_dir:demo03b.dmp TABLES=sales:sales_q3_2003 |
Tablespace Export | TABLESPACES=<tablespace_name [, ...]> |
expdp uwclass DUMPFILE=data_pump_dir:demo04.dmp TABLESPACES=uwclass,users TRANSPORT_FULL_CHECK=y |
Transportable Tablespace Export | TRANSPORT_TABLESPACES=<tablespace_name [, ...]>
The default tablespace of the user performing the export must not be set to one of the tablespaces being transported. |
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo05.dmp TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo5.log
conn / as sysdba
ALTER TABLESPACE users READ ONLY; ALTER TABLESPACE example READ ONLY;
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo05.dmp TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo5.log
ALTER TABLESPACE users READ WRITE; ALTER TABLESPACE example READ WRITE; |
Legacy Mode Mappings |
Legacy Export Cmd | How Handled |
BUFFER | This parameter is ignored because Data Pump does not make use of conventional mode. |
COMPRESS | In original Export, the COMPRESS parameter affected how the initial extent was managed. Setting COMPRESS=n caused original Export to use current storage parameters for the initial and next extent. The Data Pump COMPRESSION parameter is used to specify how data is compressed in the dump file, and is not related to the original Export COMPRESS parameter. |
CONSISTENT | Data Pump Export determines the current time and uses FLASHBACK_TIME. Always use this option. |
CONSTRAINTS | If original Export used CONSTRAINTS=n, then Data Pump Export uses EXCLUDE=CONSTRAINTS. The default behavior is to include constraints as part of the export. |
DIRECT | This parameter is ignored. Data Pump automatically chooses the best export method. |
FEEDBACK | The Data Pump STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the export job, as well as the rows being processed. In original Export, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump, the status is given every so many seconds, as specified by STATUS. |
FILE | Data Pump attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access. |
GRANTS | If original Export used GRANTS=n, then Data Pump uses EXCLUDE=GRANT. If original Export used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump default behavior. |
INDEXES | If original Export used INDEXES=n, then Data Pump uses the EXCLUDE=INDEX parameter. If original Export used INDEXES=y, then Data Pump uses the INCLUDE=INDEX parameter. |
LOG | Data Pump attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access. |
OBJECT_CONSISTENT | This parameter is ignored because Data Pump processing ensures that each object is in a consistent state when being exported. |
OWNER | The Data Pump SCHEMAS parameter is used. |
RECORDLENGTH | This parameter is ignored because Data Pump automatically takes care of buffer sizing. |
RESUMABLE | This parameter is ignored because Data Pump automatically provides this functionality. |
RESUMABLE_NAME | This parameter is ignored because Data Pump automatically provides this functionality. |
RESUMABLE_TIMEOUT | This parameter is ignored because Data Pump automatically provides this functionality. |
ROWS | If original Export used ROWS=y, then Data Pump Export uses the CONTENT=ALL parameter.
If original Export used ROWS=n, then Data Pump Export uses the CONTENT=METADATA_ONLY parameter. |
STATISTICS | This parameter is ignored because statistics are always saved for tables as part of a Data Pump operation. |
TABLESPACES | If original Export also specified TRANSPORT_TABLESPACE=n, then Data Pump ignores the TABLESPACES parameter. If original Export also specified TRANSPORT_TABLESPACE=y, then Data Pump takes the names listed for the TABLESPACES parameter and uses them. |
TRANSPORT_TABLESPACE | If original Export used TRANSPORT_TABLESPACE=n (the default), then Data Pump uses the TABLESPACES parameter. If original Export used TRANSPORT_TABLESPACE=y, then Data Pump uses the TRANSPORT_TABLESPACES parameter and only the metadata is exported. |
TRIGGERS | If original Export used TRIGGERS=n, then Data Pump Export uses the EXCLUDE=TRIGGER parameter. |
TTS_FULL_CHECK | If original Export used TTS_FULL_CHECK=y, then Data Pump uses the TRANSPORT_FULL_CHECK parameter. If original Export used TTS_FULL_CHECK=n, then the parameter is ignored. |
VOLSIZE | When the VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Export dump file format does not support tape devices. Therefore, this operation terminates with an error. |
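Taken together, the table above is just a lookup from legacy Export parameter to Data Pump behavior. A minimal sketch in Python, covering a few representative rows only (the dictionary name and structure are illustrative, not part of any Oracle API):

```python
# Representative rows from the legacy-mapping table above.
# A value of None means the legacy parameter is simply ignored.
LEGACY_EXPORT_MAP = {
    "BUFFER": None,                  # ignored: no conventional-path buffering
    "DIRECT": None,                  # Data Pump picks the access method itself
    "CONSISTENT": "FLASHBACK_TIME",  # current time is used
    "FEEDBACK": "STATUS",            # rows-based feedback becomes seconds-based
    "OWNER": "SCHEMAS",
    "GRANTS=N": "EXCLUDE=GRANT",
    "INDEXES=N": "EXCLUDE=INDEX",
    "ROWS=Y": "CONTENT=ALL",
    "ROWS=N": "CONTENT=METADATA_ONLY",
}

def map_legacy(param: str):
    """Return the Data Pump equivalent, or None when the parameter is ignored."""
    return LEGACY_EXPORT_MAP.get(param.upper())
```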
|
|
Additional Export Parameters |
Attach | ATTACH=<[schema_name.]job_name> |
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo06.dmp ATTACH=uw_job NOLOGFILE=y |
Cluster | CLUSTER=<Y | N> |
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo07.dmp CLUSTER=n |
Compression | COMPRESSION=<ALL | DATA_ONLY | METADATA_ONLY | NONE> |
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo08.dmp COMPRESSION=none |
Content | CONTENT=<ALL | DATA_ONLY | METADATA_ONLY | NONE> |
expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo09.dmp CONTENT=metadata_only |
Data Options | DATA_OPTIONS=<XML_CLOBS> |
expdp uwclass/uwclass DATA_OPTIONS=XML_CLOBS DIRECTORY=data_pump_dir DUMPFILE=demo10.dmp |
Directory | DIRECTORY=<directory_object> |
See FULL Demo Below |
Dumpfile | DUMPFILE=<expdat.dmp | file_name.dmp> -- can be used more than 1X at once |
See FULL Demo Below |
Encryption | Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION parameter is specified, then the default mode is TRANSPARENT.
If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is open, then the default is DUAL. If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is closed, then the default is PASSWORD.
ENCRYPTION=<ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE>
ENCRYPTION_ALGORITHM=<AES128 | AES192 | AES256>
ENCRYPTION_MODE=<DUAL | PASSWORD | TRANSPARENT>
ENCRYPTION_PASSWORD=<password> |
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo11.dmp ENCRYPTION=all ENCRYPTION_ALGORITHM=aes256 ENCRYPTION_MODE=dual ENCRYPTION_PASSWORD=a1pha |
Estimate | ESTIMATE=<BLOCKS | STATISTICS> |
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo12.dmp ESTIMATE=blocks
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo13.dmp ESTIMATE=statistics |
Estimate Only | ESTIMATE_ONLY=<N | Y> |
expdp uwclass/uwclass SCHEMAS=uwclass ESTIMATE_ONLY=y
open export.log with an editor |
Exclude | EXCLUDE=<object_type[:name_clause] [, ...]> |
-- exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo14.dmp EXCLUDE=constraint
-- exclude referential integrity (foreign key) constraints
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo15.dmp EXCLUDE=ref_constraint
-- exclude object grants on all object types and system priv grants
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo16.dmp EXCLUDE=grant
-- exclude the definitions of users
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo17.dmp EXCLUDE=user
-- exclude views, packages, and functions
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo18.dmp EXCLUDE=view,package,function
-- exclude a specific user and all objects of that user (here the HR schema)
expdp uwclass/uwclass FULL=y DIRECTORY=data_pump_dir DUMPFILE=demo19.dmp EXCLUDE=SCHEMA:\"='HR'\" |
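Long EXCLUDE filters such as the last example are easier to manage in a parameter file, where the name_clause needs no shell escaping. A sketch of such a parfile (file name and contents are illustrative, following the demos above):

```
# exclude.par -- quotation marks need no escaping inside a parfile
SCHEMAS=uwclass
DIRECTORY=data_pump_dir
DUMPFILE=demo_exclude.dmp
EXCLUDE=view,package,function
EXCLUDE=INDEX:"LIKE 'PK%'"
```

It would then be run as expdp uwclass/uwclass PARFILE=exclude.par, in the same way as the PARFILE demo further down this page.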
Filesize | FILESIZE=<0 | integer[B | K | M | G]>
The default, zero, means unlimited |
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo%U.dmp COMPRESSION=none FILESIZE=500M |
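FILESIZE takes an integer with an optional B, K, M, or G suffix. The arithmetic can be sketched in Python (assuming binary multiples, i.e. K = 1024; the function name is illustrative):

```python
def filesize_bytes(spec: str) -> int:
    """Convert a FILESIZE-style value such as '500M' into bytes.

    Assumes binary multiples (K = 1024). '0' means unlimited and
    is returned unchanged as 0.
    """
    units = {"B": 1, "K": 1024, "M": 1024 ** 2, "G": 1024 ** 3}
    spec = spec.strip().upper()
    if spec and spec[-1] in units:
        return int(spec[:-1]) * units[spec[-1]]
    return int(spec)
```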
Flashback SCN | FLASHBACK_SCN=<scn_number> |
conn / as sysdba
SELECT dbms_flashback.get_system_change_number FROM dual;
exit
expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo20.dmp FLASHBACK_SCN=36477000 |
Flashback Time | FLASHBACK_TIME=<timestamp> |
conn / as sysdba
SELECT dbms_flashback.get_system_change_number FROM dual;
SELECT SCN_TO_TIMESTAMP(dbms_flashback.get_system_change_number) FROM dual;
exit
expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo21.dmp FLASHBACK_TIME =\"TO_TIMESTAMP('01-SEP-2012 08:08:08', 'DD-MON-YYYY HH24:MI:SS')\" |
Full | See BASIC EXPORT TYPES: Above |
Help | HELP=<N | Y> |
expdp uwclass HELP=y |
Include | INCLUDE=<object_type[:name_clause] [, ...]> |
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo22.dmp INCLUDE=table
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo23.dmp INCLUDE=TABLE:\"IN ('SERVERS', 'SERV_INST')\"
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo24.dmp INCLUDE=procedure
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo25.dmp INCLUDE=INDEX:\"LIKE 'PK%'\" |
Job Name | JOB_NAME=<job_name> |
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo26.dmp JOB_NAME=uwjob |
Logfile | LOGFILE=<export.log | directory_object:file_name> |
expdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo27.dmp LOGFILE=data_pump_dir:demo27.log |
Metrics | METRICS=<N | Y> |
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo28.dmp METRICS=y |
Network Link | NETWORK_LINK=<database_link> |
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo28.dmp NETWORK_LINK=fixed_user |
No Logfile | NOLOGFILE=<N | Y> |
expdp uwclass TABLES=servers, airplanes DUMPFILE=data_pump_dir:demo29.dmp NOLOGFILE=y |
Parallel | PARALLEL=<1 | parallel_degree> |
expdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo30.dmp PARALLEL=2 |
Parfile | PARFILE=<[directory_object.]file_name> |
-- create this as a text file in the data_pump_dir directory
TABLES=servers DUMPFILE=data_pump_dir:demo31.dmp LOGFILE=data_pump_dir:demo31.log PARALLEL=2 |
expdp uwclass PARFILE=data_pump_dir:parfile.par |
Query | QUERY=<[schema.][table_name:]query_where_clause> |
expdp uwclass TABLES=airplanes DUMPFILE=data_pump_dir:demo32.dmp QUERY=airplanes:\"WHERE program_id = ''737''\" |
Remap Data | REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function |
conn uwclass/uwclass
CREATE OR REPLACE PACKAGE remap IS
  FUNCTION timestwo (inval NUMBER) RETURN NUMBER;
END remap;
/
CREATE OR REPLACE PACKAGE BODY remap IS
  FUNCTION timestwo (inval NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN inval*2;
  END timestwo;
END remap;
/
expdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo33.dmp REMAP_DATA=uwclass.servers.srvr_id:uwclass.remap.timestwo |
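During the export above, Data Pump calls remap.timestwo on every srvr_id value, so the dump file contains the doubled values. Mirrored in Python for illustration (a remap function must be deterministic, or export and a later re-export would disagree):

```python
def timestwo(inval: int) -> int:
    """Python mirror of uwclass.remap.timestwo: double the input value."""
    return inval * 2
```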
Reuse Dump Files | REUSE_DUMPFILES=<N | Y> |
expdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo34.dmp REUSE_DUMPFILES=y |
Sample | SAMPLE=<[[schema_name.]table_name:]sample_percent> |
expdp uwclass/uwclass TABLES=airplanes DUMPFILE=data_pump_dir:demo35.dmp SAMPLE=\"UWCLASS.AIRPLANES:10\" |
Schema | See BASIC EXPORT TYPES: Above |
Service Name | SERVICE_NAME=<service_name> |
expdp uwclass/uwclass TABLES=airplanes DUMPFILE=ctemp:demo36.dmp SERVICE_NAME=oradata |
Source Edition | SOURCE_EDITION=<edition_name> |
expdp uwclass/uwclass TABLES=airplanes DUMPFILE=ctemp:demo37.dmp SOURCE_EDITION=ORA$ASE |
Status | STATUS=<0 | seconds> -- how often job status is displayed |
expdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo38.dmp STATUS=5 |
Tables | See BASIC EXPORT TYPES: Above |
Tablespaces | See BASIC EXPORT TYPES: Above |
Transport Full Check | TRANSPORT_FULL_CHECK=<N | Y> |
See TRANSPORT_TABLESPACES Demo Below |
Version | VERSION=<COMPATIBLE | LATEST | version_string> |
expdp version
expdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo39.dmp VERSION=latest |
|
Importing Schemas |
Demo Setup | conn / as sysdba
desc dba_directories
col owner format a10
col directory_path format a70
SELECT * FROM dba_directories;
CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp'; -- default is $ORACLE_BASE/admin/<SID>/dpdump
GRANT imp_full_database TO uwclass; |
Query a list of database dependent objects | SQL> SELECT object_path
  2  FROM database_export_objects
  3  WHERE object_path LIKE 'TABLE%'
  4  ORDER BY 1;

OBJECT_PATH
----------------------------------------------------
TABLE
TABLE/AUDIT_OBJ
TABLE/COMMENT
TABLE/CONSTRAINT
TABLE/CONSTRAINT/REF_CONSTRAINT
TABLE/FGA_POLICY
TABLE/GRANT
TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
TABLE/INDEX
TABLE/INDEX/STATISTICS
TABLE/INSTANCE_CALLOUT
TABLE/MATERIALIZED_VIEW_LOG
TABLE/POST_INSTANCE/GRANT/PROCDEPOBJ_GRANT
TABLE/POST_INSTANCE/PROCDEPOBJ
TABLE/POST_INSTANCE/PROCDEPOBJ_AUDIT
TABLE/POST_TABLE_ACTION
TABLE/PRE_TABLE_ACTION
TABLE/PROCACT_INSTANCE
TABLE/RLS_CONTEXT
TABLE/RLS_GROUP
TABLE/RLS_POLICY
TABLE/TRIGGER
TABLESPACE
TABLESPACE_QUOTA |
|
Basic Import Types |
Full Import | FULL=<N | Y> |
impdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo01.dmp FULL=y |
Schema Import | SCHEMAS=<schema_name [, ...]> |
SQL> conn / as sysdba
SQL> CREATE USER abc IDENTIFIED BY abc DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA unlimited ON users;
SQL> GRANT create session, create table TO abc; SQL> GRANT read on directory data_pump_dir TO abc; SQL> GRANT write on directory data_pump_dir TO abc;
SQL> conn abc/abc
SQL> CREATE TABLE zzyzx AS SELECT * FROM all_tables;
expdp uwclass/uwclass DUMPFILE=data_pump_dir:demo02.dmp SCHEMAS=abc
SQL> DROP TABLE zzyzx;
impdp uwclass/uwclass DUMPFILE=data_pump_dir:demo02.dmp SCHEMAS=abc
impdp abc DIRECTORY=data_pump_dir DUMPFILE=demo02.dmp SCHEMAS=abc
SQL> SELECT owner, object_type, created FROM dba_objects_ae WHERE object_name = 'ZZYZX'; |
Table Import | TABLES=<[schema_name.]table_name[:partition_name] [, ...]> |
impdp uwclass/uwclass DUMPFILE=data_pump_dir:demo04.dmp TABLES=servers, serv_inst |
Tablespace Import | TABLESPACES=<tablespace_name [, ...]> |
impdp uwclass DUMPFILE=data_pump_dir:demo05.dmp TABLESPACES=uwclass,users TRANSPORT_FULL_CHECK=y |
Transportable Tablespace Import | TRANSPORT_TABLESPACES=<tablespace_name [, ...]>
The default tablespace of the user performing the export must not be set to one of the tablespaces being transported |
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo06.dmp TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo6.log
conn / as sysdba
ALTER TABLESPACE users READ ONLY; ALTER TABLESPACE example READ ONLY;
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo07.dmp TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo7.log
ALTER TABLESPACE users READ WRITE; ALTER TABLESPACE example READ WRITE; |
Legacy Mode Mappings |
Legacy Export Cmd | How Handled |
BUFFER | This parameter is ignored because Data Pump does not make use of conventional mode. |
CHARSET | This parameter is desupported and will cause the Data Pump Import operation to abort. |
COMMIT | This parameter is ignored. Data Pump Import automatically performs a commit after each table is processed. |
COMPILE | This parameter is ignored. Data Pump Import compiles procedures after they are created. |
CONSTRAINTS | If original Import used CONSTRAINTS=n, then Data Pump Import uses the EXCLUDE=CONSTRAINT parameter. If original Import used CONSTRAINTS=y, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior. |
DATAFILES | The Data Pump Import TRANSPORT_DATAFILES parameter is used. |
DESTROY | If original Import used DESTROY=y, then Data Pump Import uses the REUSE_DATAFILES=y parameter. If original Import used DESTROY=n, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior. |
FEEDBACK | The Data Pump Import STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the import job, as well as the rows being processed. In original Import, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Import, the status is given every so many seconds, as specified by STATUS. |
FILE | Data Pump attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access. |
FILESIZE | This parameter is ignored because the information is already contained in the Data Pump dump file set. |
FROMUSER | The Data Pump SCHEMAS parameter is used. If FROMUSER was used without TOUSER also being used, then import schemas that have the IMP_FULL_DATABASE role cause Data Pump Import to attempt to create the schema and then import that schema's objects. Import schemas that do not have the IMP_FULL_DATABASE role can only import their own schema from the dump file set. |
GRANTS | If original Import used GRANTS=n, then Data Pump uses the EXCLUDE=OBJECT_GRANT parameter. If original Import used GRANTS=y, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior. |
IGNORE | If original Import used IGNORE=y, then Data Pump Import uses the TABLE_EXISTS_ACTION=APPEND parameter. This causes the processing of table data to continue. If original Import used IGNORE=n, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior. |
INDEXES | If original Import used INDEXES=n, then Data Pump uses the EXCLUDE=INDEX parameter. If original Import used INDEXES=y, the parameter is ignored and does not need to be remapped because that is the Data Pump default behavior. |
INDEXFILE | The Data Pump Import SQLFILE=[directory_object:]file_name and INCLUDE=INDEX parameters are used. The same method and attempts made when looking for a directory object described for the FILE parameter also take place for the INDEXFILE parameter. If no directory object was specified on the original Import, then Data Pump uses the directory object specified with the DIRECTORY parameter. |
LOG | Data Pump attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access. |
RECORDLENGTH | This parameter is ignored because Data Pump automatically takes care of buffer sizing. |
RESUMABLE | This parameter is ignored because Data Pump automatically provides this functionality. |
RESUMABLE_NAME | This parameter is ignored because Data Pump automatically provides this functionality. |
RESUMABLE_TIMEOUT | This parameter is ignored because Data Pump automatically provides this functionality. |
ROWS | If original Import used ROWS=n, then Data Pump uses the CONTENT=METADATA_ONLY parameter. If original Import used ROWS=y, then Data Pump uses the CONTENT=ALL parameter. |
SHOW | If SHOW=y is specified, the Data Pump Import SQLFILE=[directory_object:]file_name parameter is used to write the DDL for the import operation to a file. Only the DDL (not the entire contents of the dump file) is written to the specified file. (Note that the output is not shown on the screen as it was in original Import.) |
STATISTICS | This parameter is ignored because statistics are always saved for tables as part of a Data Pump operation. |
STREAMS_CONFIGURATION | This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified. |
STREAMS_INSTANTIATION | This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified. |
TABLESPACES | If original Import also specified TRANSPORT_TABLESPACE=n (the default), then Data Pump ignores the TABLESPACES parameter. If original Import also specified TRANSPORT_TABLESPACE=y, then Data Pump takes the names supplied for this TABLESPACES parameter and applies them to the Data Pump TRANSPORT_TABLESPACES parameter. |
TOID_NOVALIDATE | This parameter is ignored. OIDs are no longer used for type validation. |
TO_USER | The REMAP_SCHEMA parameter is used. There may be more objects imported than with original Import. Data Pump may create the target schema if it does not already exist. |
TRANSPORT_TABLESPACE | If original Import used TRANSPORT_TABLESPACE=n, then Data Pump Import ignores this parameter. |
TTS_OWNERS | This parameter is ignored because this information is automatically stored in the Data Pump dump file set. |
VOLSIZE | When the VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump dump file format does not support tape devices. The operation terminates with an error. |
|
|
Additional Import Parameters |
Attach | ATTACH[=[schema_name.]job_name] |
impdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo08.dmp ATTACH=uw_job NOLOGFILE=y |
Cluster | CLUSTER=<Y | N> |
impdp uwclass CLUSTER=N |
Content | CONTENT=<ALL | DATA_ONLY | METADATA_ONLY> |
impdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo09.dmp CONTENT=metadata_only |
Data Options | DATA_OPTIONS=<DISABLE_APPEND_HINT | SKIP_CONSTRAINT_ERRORS> |
impdp uwclass DATA_OPTIONS=disable_append_hint |
Directory | DIRECTORY=<DATA_PUMP_DIR | directory_object> |
impdp uwclass DIRECTORY=CTEMP |
Dumpfile | DUMPFILE=<expdat.dmp | file_name.dmp> -- can be used more than 1X at once |
impdp uwclass DUMPFILE=ctemp:demo10.dmp |
Encryption | Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION parameter is specified, then the default mode is TRANSPARENT.
If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is open, then the default is DUAL. If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is closed, then the default is PASSWORD.
ENCRYPTION_PASSWORD=<password> |
impdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo11.dmp ENCRYPTION_PASSWORD=a1pha |
Estimate | ESTIMATE=<BLOCKS | STATISTICS> |
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo10.dmp ESTIMATE=blocks
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo11.dmp ESTIMATE=statistics |
Exclude | EXCLUDE=<object_type[:name_clause] [, ...]> |
-- exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo12.dmp EXCLUDE=constraint
-- exclude referential integrity (foreign key) constraints
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo13.dmp EXCLUDE=ref_constraint
-- exclude object grants on all object types and system priv grants
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo14.dmp EXCLUDE=grant
-- exclude the definitions of users
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo15.dmp EXCLUDE=user
-- exclude views, packages, and functions
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo16.dmp EXCLUDE=view,package,function
-- exclude a specific user and all objects of that user (here the HR schema)
impdp uwclass/uwclass FULL=y DIRECTORY=data_pump_dir DUMPFILE=demo17.dmp EXCLUDE=SCHEMA:\"='HR'\" |
Flashback SCN | FLASHBACK_SCN=<scn_number> |
conn / as sysdba
SELECT dbms_flashback.get_system_change_number FROM dual;
exit
impdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo18.dmp FLASHBACK_SCN=36477000 |
Flashback Time | FLASHBACK_TIME=<timestamp> |
conn / as sysdba
SELECT dbms_flashback.get_system_change_number FROM dual;
SELECT SCN_TO_TIMESTAMP(dbms_flashback.get_system_change_number) FROM dual;
exit
impdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo19.dmp FLASHBACK_TIME=\"TO_TIMESTAMP('01-SEP-2012 08:08:08', 'DD-MON-YYYY HH24:MI:SS')\" |
Help | HELP=<N | Y> |
impdp uwclass HELP=y |
Include | INCLUDE=<object_type[:name_clause] [, ...]> |
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo20.dmp INCLUDE=table
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo21.dmp INCLUDE=TABLE:\"IN ('SERVERS', 'SERV_INST')\"
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo22.dmp INCLUDE=procedure
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo23.dmp INCLUDE=INDEX:\"LIKE 'PK%'\" |
Job Name | JOB_NAME=<job_name> |
impdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo24.dmp JOB_NAME=uwjob |
Logfile | LOGFILE=<import.log | directory_object:file_name> |
impdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo25.dmp LOGFILE=data_pump_dir:demo25.log |
Network Link | NETWORK_LINK=<database_link> |
impdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo26.dmp NETWORK_LINK=fixed_user |
No Logfile | NOLOGFILE=<N | Y> |
impdp uwclass TABLES=servers, airplanes DUMPFILE=data_pump_dir:demo27.dmp NOLOGFILE=y |
Parallel | PARALLEL=<1 | parallel_degree> |
impdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo28.dmp PARALLEL=2 |
Parfile | PARFILE=<[directory_object:]file_name> |
-- create this as a text file in the data_pump_dir directory
TABLES=servers
DUMPFILE=data_pump_dir:demo29.dmp
LOGFILE=data_pump_dir:demo29.log
PARALLEL=2 |
impdp uwclass PARFILE=data_pump_dir:parfile.par |
Partition Options | PARTITION_OPTIONS=<none | departition | merge> |
impdp uwclass PARTITION_OPTIONS=departition |
Query | QUERY=<[schema.][table_name:]query_where_clause> |
impdp uwclass TABLES=airplanes DUMPFILE=data_pump_dir:demo30.dmp QUERY=airplanes:\"WHERE program_id = ''737''\" |
Remap Data | REMAP_DATA=[schema.]tablename.column_name:[schema.]pkg.function |
CREATE TABLE holder (
holder_name VARCHAR2(30) NOT NULL,
card_number NUMBER(16)   NOT NULL);

CREATE TABLE activity (
vendor      VARCHAR2(30)  NOT NULL,
card_number NUMBER(16)    NOT NULL,
sales_total NUMBER(10,2)  NOT NULL);

INSERT INTO holder VALUES ('Morgan', 4124555566661234);
INSERT INTO holder VALUES ('Kyte', 3776555566665678);
INSERT INTO holder VALUES ('Norgaard', 5058555566669012);

INSERT INTO activity VALUES ('Amazon.com', 4124555566661234, 100.00);
INSERT INTO activity VALUES ('eBay.com', 4124555566661234, 59.50);
INSERT INTO activity VALUES ('Oracle', 3776555566665678, 50.50);
INSERT INTO activity VALUES ('Miracle', 5058555566669012, 42.42);

CREATE OR REPLACE PACKAGE hidedata AUTHID CURRENT_USER IS
  FUNCTION newcc(oldno IN NUMBER) RETURN NUMBER;
END hidedata;
/

CREATE OR REPLACE PACKAGE BODY hidedata IS
  TYPE cc_list IS TABLE OF NUMBER INDEX BY VARCHAR2(16);
  cc_remaps cc_list;
  cc_seed   NUMBER(15) := 000000000000010;

  FUNCTION newcc(oldno IN NUMBER) RETURN NUMBER IS
  BEGIN
    IF NOT cc_remaps.EXISTS(oldno) THEN
      cc_seed := cc_seed + 10;
      cc_remaps(oldno) := ROUND(oldno, -15) + cc_seed + MOD(oldno, 10);
    END IF;
    RETURN cc_remaps(oldno);
  END newcc;
END hidedata;
/

col card_number format 9999999999999999

SELECT vendor, hidedata.newcc(card_number) CARD_NUMBER, sales_total
FROM activity;
expdp uwclass/uwclass tables=uwclass.holder,uwclass.activity remap_data=uwclass.holder.card_number:hidedata.newcc remap_data=uwclass.activity.card_number:hidedata.newcc directory=ctemp dumpfile=hremp31.dmp |
Remap Datafile | REMAP_DATAFILE= |
TBD |
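No demo is supplied above, so the following is an illustrative sketch only: REMAP_DATAFILE rewrites the source datafile name to the target name in all DDL statements (CREATE TABLESPACE, CREATE DIRECTORY, and so on) during a full import, which is useful when moving between platforms or file systems. The dump file and both datafile paths below are hypothetical.

```shell
impdp system/oracle1 FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp REMAP_DATAFILE=\"'/u01/oradata/orabase/users01.dbf':'/u02/oradata/newbase/users01.dbf'\"
```

Note the nested quoting: the full old:new pair must be quoted because the file names themselves are enclosed in single quotes.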
Remap Schema | REMAP_SCHEMA= |
expdp system/oracle1 DIRECTORY=data_pump_dir DUMPFILE=streamsdemo32.dmp SCHEMAS=scott
impdp system/oracle1 DUMPFILE=data_pump_dir:streamsdemo32.dmp SCHEMAS=scott REMAP_SCHEMA=scott:scottrep |
Remap Table | REMAP_TABLE=<[schema.]old_tablename[.partition]:new_tablename> |
TBD |
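No demo is supplied above, so the following is an illustrative sketch only: REMAP_TABLE renames a table as it is imported; combined with PARTITION_OPTIONS=departition it can name the standalone tables created from individual partitions. The dump file and table names below are hypothetical.

```shell
impdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo.dmp REMAP_TABLE=uwclass.servers:servers_copy
```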
Remap Tablespace | REMAP_TABLESPACE= |
impdp uwclass/uwclass DIRECTORY=data_pump_dir dumpfile=demo33.dmp schemas=abc job_name=export_cw4ora REMAP_SCHEMA=abc:def REMAP_TABLESPACE=users:uw_data REMAP_TABLESPACE=example:uwdata exclude=GRANT |
Reuse Datafiles | REUSE_DATAFILES=<N | Y> |
impdp uwclass/uwclass REUSE_DATAFILES=Y |
Schemas | SCHEMAS= |
impdp hr/hr SCHEMAS=uwclass LOGFILE=uwdp.log DUMPFILE=dump34.dmp |
Service Name | SERVICE_NAME= |
impdp hr/hr SERVICE_NAME=orabase |
Skip Unusable Indexes | SKIP_UNUSABLE_INDEXES=<value_of_the_corresponding_init_parameter | Y | N> |
impdp uwclass SKIP_UNUSABLE_INDEXES=Y |
Source Edition | SOURCE_EDITION=<default_database_edition | edition_name> |
impdp hr DIRECTORY=dpump_dir1 SOURCE_EDITION=exp_edition |
SQL File | SQLFILE=<[directory_object:]file_name> |
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=demo35.dmp SQLFILE=dpump_dir2:expfull.sql |
Status | STATUS=<0 | freq_in_seconds> -- how often job status is displayed |
impdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo36.dmp STATUS=5 |
Streams Configuration | STREAMS_CONFIGURATION=<Y | N> |
impdp hr/hr DIRECTORY=dpump_dir1 STREAMS_CONFIGURATION=N |
Table Exists Action | TABLE_EXISTS_ACTION=<SKIP | APPEND | TRUNCATE | REPLACE> |
impdp hr/hr DIRECTORY=dpump_dir1 TABLE_EXISTS_ACTION=APPEND |
Target Editions | TARGET_EDITIONS= |
impdp uwclass/uwclass TARGET_EDITIONS=edition2,edition3 |
Transform | TRANSFORM= |
TBD |
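No demo is supplied above, so the following is an illustrative sketch only: TRANSFORM alters object-creation DDL at import time. SEGMENT_ATTRIBUTES:N strips physical attributes, storage clauses, tablespace assignments, and logging attributes (optionally limited to one object type, as in the first command), while STORAGE:N strips only the storage clauses. The dump file name below is hypothetical.

```shell
impdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo.dmp TRANSFORM=SEGMENT_ATTRIBUTES:N:table
impdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo.dmp TRANSFORM=STORAGE:N
```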
Transport Data Files | TRANSPORT_DATAFILES= |
impdp uwclass/uwclass DIRECTORY=dpump_dir1 DUMPFILE=tdf.dmp TRANSPORTABLE=always TRANSPORT_DATAFILES='/app/oracle/oradata/orabase/uwclass01.dbf' TRANSPORT_FULL_CHECK=N |
Transportable | TRANSPORTABLE=<NEVER | ALWAYS> |
See TRANSPORT_DATAFILES Demo Above |
Version | VERSION=<COMPATIBLE | LATEST | version_string> |
impdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo37.dmp VERSION=latest |