Oracle DataPump Utility 11.2.0.3


General Information
DataPump comes in two forms: the command-line executables expdp and impdp, found in the $ORACLE_HOME/bin directory and covered on this page, and the built-in package DBMS_DATAPUMP, linked at the bottom of the page.
Data Dictionary Objects
DATAPUMP_DDL_TRANSFORM_PARAMS
DATAPUMP_DIR_OBJS
DATAPUMP_OBJECT_CONNECT
DATAPUMP_PATHMAP
DATAPUMP_PATHS
DATAPUMP_PATHS_VERSION
DATAPUMP_REMAP_OBJECTS
DATAPUMP_TABLE_DATA
DATA_PUMP_DIR
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
DBMS_DATAPUMP
DBMS_DATAPUMP_UTL
DBMS_STREAMS_DATAPUMP
DBMS_STREAMS_DATAPUMP_UTIL
KU$_DATAPUMP_MASTER_10_1
KU$_DATAPUMP_MASTER_11_1
KU$_DATAPUMP_MASTER_11_1_0_7
KU$_DATAPUMP_MASTER_11_2
KUPC$DATAPUMP_QUETAB
KUPC$DATAPUMP_QUETAB_1
ORACLE_DATAPUMP
USER_DATAPUMP_JOBS
V$DATAPUMP_JOB
V$DATAPUMP_SESSION
Export Modes
Mode: Description

Full: Use the FULL parameter. The entire database is unloaded. EXP_FULL_DATABASE role required.

Schema: Use the SCHEMAS parameter. The default export mode. If you have the EXP_FULL_DATABASE role, then you can specify a list of schemas and optionally include the schema definitions themselves, as well as system privilege grants to those schemas. If you do not have the EXP_FULL_DATABASE role, you can export only your own schema.

Table: Use the TABLES parameter. A specified set of tables, partitions, and their dependent objects are unloaded. You must have the EXP_FULL_DATABASE role to specify tables that are not in your own schema. All specified tables must reside in a single schema. Note that type definitions for columns are not exported in table mode. It is expected that the type definitions already exist in the target instance at import time.

Tablespace: Use the TABLESPACES parameter. Only tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its data, metadata, and dependent objects are also unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables. Nonprivileged users get only the tables in their own schemas.

Transportable Tablespace: Use the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for tables, and their dependent objects, within a specified set of tablespaces is unloaded. EXP_FULL_DATABASE role required. Degree of parallelism must be 1.

Legacy: Data Pump enters legacy mode once it determines a parameter unique to original Export is present, either on the command line or in a script. As Data Pump processes the parameter, the analogous Data Pump Export parameter is displayed.
Import Modes
Mode: Description

Full: A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports. You must have the DATAPUMP_IMP_FULL_DATABASE role if the source is another database.

Cross-schema references are not imported for non-privileged users. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported.

Schema: Schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the specified schemas are loaded. The source can be a full, table, tablespace, or schema-mode export dump file set or another database. If you have the DATAPUMP_IMP_FULL_DATABASE role, then a list of schemas can be specified and the schemas themselves (including system privilege grants) are created in the database in addition to the objects contained within those schemas.

Cross-schema references are not imported for non-privileged users unless the other schema is remapped to the current schema. For example, a trigger defined on a table within the importing user's schema, but residing in another user's schema, is not imported.

Table: Table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. You must have the DATAPUMP_IMP_FULL_DATABASE role to specify tables that are not in your own schema.

You can use the transportable option during a table-mode import by specifying the TRANSPORTABLE=ALWAYS parameter with the TABLES parameter. This requires use of the NETWORK_LINK parameter, as well.

Tablespace: Tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded, along with the dependent objects. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. For unprivileged users, objects not remapped to the current schema will not be processed.

Transportable Tablespace: Transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The datafiles, specified by the TRANSPORT_DATAFILES parameter, must be made available from the source system for use in the target database, typically by copying them over to the target system. Encrypted columns are not supported in transportable tablespace mode. This mode requires the DATAPUMP_IMP_FULL_DATABASE role. Degree of parallelism must be 1.

Legacy: Data Pump enters legacy mode once it determines a parameter unique to original Import is present, either on the command line or in a script. As Data Pump processes the parameter, the analogous Data Pump Import parameter is displayed.
 
Exporting Schemas
Demo Setup

conn / as sysdba

desc dba_directories

col owner format a10
col directory_path format a70

SELECT * FROM dba_directories;

CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp';
-- default is $ORACLE_BASE/admin//dpdump

GRANT exp_full_database TO uwclass;

Note: Verify that the environment variables ORACLE_HOME and ORACLE_SID are set properly in your shell. If they are not set then you must set them at the command line to proceed.
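On Linux, the variables can be set like this before invoking expdp (the ORACLE_HOME path and SID below are assumed examples; substitute your own values):

```shell
# Assumed ORACLE_HOME path and SID -- substitute values for your installation.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
```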
  
 
Basic Export Types
Note: After each export, run SELECT table_name FROM user_tables in the schema that ran the export (in the following examples, uwclass or abc). You will find tables with names such as SYS_EXPORT_FULL_01 and SYS_EXPORT_SCHEMA_01: these are the Data Pump job master tables. Examine their contents.
Full Export
FULL=<N | Y>
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo01.dmp FULL=y
Schema Export
SCHEMAS=<schema_name [, ...]>
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo02.dmp SCHEMAS=uwclass,scott
Table Export
TABLES=<[schema_name.]table_name[:partition_name] [, ...]>
expdp uwclass/uwclass DUMPFILE=data_pump_dir:demo03a.dmp TABLES=servers, serv_inst

expdp sh/sh DUMPFILE=data_pump_dir:demo03b.dmp TABLES=sales:sales_q3_2003
Tablespace Export
TABLESPACES=<tablespace_name [, ...]>
expdp uwclass DUMPFILE=data_pump_dir:demo04.dmp TABLESPACES=uwclass,users TRANSPORT_FULL_CHECK=y
Transportable Tablespace Export
TRANSPORT_TABLESPACES=<tablespace_name [, ...]>

The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo05.dmp
TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo5.log

conn / as sysdba

ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE example READ ONLY;

expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo05.dmp
TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo5.log

ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;
Legacy Mode Mappings
Legacy Export Cmd: How Handled

BUFFER: This parameter is ignored because Data Pump does not make use of conventional mode.

COMPRESS: In original Export, the COMPRESS parameter affected how the initial extent was managed. Setting COMPRESS=n caused original Export to use current storage parameters for the initial and next extent. The Data Pump COMPRESSION parameter is used to specify how data is compressed in the dump file, and is not related to the original Export COMPRESS parameter.

CONSISTENT: If original Export used CONSISTENT=y, Data Pump Export determines the current time and uses it for FLASHBACK_TIME.

CONSTRAINTS: If original Export used CONSTRAINTS=n, then Data Pump Export uses EXCLUDE=CONSTRAINTS. The default behavior is to include constraints as part of the export.

DIRECT: This parameter is ignored. Data Pump automatically chooses the best export method.

FEEDBACK: The Data Pump STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the export job, as well as the rows being processed. In original Export, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump, the status is given every so many seconds, as specified by STATUS.

FILE: Data Pump attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access.

GRANTS: If original Export used GRANTS=n, then Data Pump uses EXCLUDE=GRANT. If original Export used GRANTS=y, then the parameter is ignored and does not need to be remapped because that is the Data Pump default behavior.

INDEXES: If original Export used INDEXES=n, then Data Pump uses the EXCLUDE=INDEX parameter. If original Export used INDEXES=y, then Data Pump uses the INCLUDE=INDEX parameter.

LOG: Data Pump attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access.

OBJECT_CONSISTENT: This parameter is ignored because Data Pump processing ensures that each object is in a consistent state when being exported.

OWNER: The Data Pump SCHEMAS parameter is used.

RECORDLENGTH: This parameter is ignored because Data Pump automatically takes care of buffer sizing.

RESUMABLE: This parameter is ignored because Data Pump automatically provides this functionality.

RESUMABLE_NAME: This parameter is ignored because Data Pump automatically provides this functionality.

RESUMABLE_TIMEOUT: This parameter is ignored because Data Pump automatically provides this functionality.

ROWS: If original Export used ROWS=y, then Data Pump Export uses the CONTENT=ALL parameter. If original Export used ROWS=n, then Data Pump Export uses the CONTENT=METADATA_ONLY parameter.

STATISTICS: This parameter is ignored because statistics are always saved for tables as part of a Data Pump operation.

TABLESPACES: If original Export also specified TRANSPORT_TABLESPACE=n, then Data Pump ignores the TABLESPACES parameter. If original Export also specified TRANSPORT_TABLESPACE=y, then Data Pump takes the names listed for the TABLESPACES parameter and applies them to the Data Pump TRANSPORT_TABLESPACES parameter.

TRANSPORT_TABLESPACE: If original Export used TRANSPORT_TABLESPACE=n (the default), then Data Pump uses the TABLESPACES parameter. If original Export used TRANSPORT_TABLESPACE=y, then Data Pump uses the TRANSPORT_TABLESPACES parameter and only the metadata is exported.

TRIGGERS: If original Export used TRIGGERS=n, then Data Pump Export uses the EXCLUDE=TRIGGER parameter.

TTS_FULL_CHECK: If original Export used TTS_FULL_CHECK=y, then Data Pump uses the TRANSPORT_FULL_CHECK parameter. If original Export used TTS_FULL_CHECK=n, then the parameter is ignored.

VOLSIZE: When the VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump Export dump file format does not support tape devices, so the operation terminates with an error.
 
Additional Export Parameters
Attach
ATTACH=<[schema_name.]job_name>
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo06.dmp ATTACH=uw_job NOLOGFILE=y
Cluster
CLUSTER=<Y | N>
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo07.dmp CLUSTER=n
Compression
COMPRESSION=<ALL | DATA_ONLY | METADATA_ONLY | NONE>
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo08.dmp COMPRESSION=none

Content
CONTENT=<ALL | DATA_ONLY | METADATA_ONLY>
expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo09.dmp CONTENT=metadata_only

Data Options
DATA_OPTIONS=<XML_CLOBS>
expdp uwclass/uwclass DATA_OPTIONS=XML_CLOBS DIRECTORY=data_pump_dir DUMPFILE=demo10.dmp

Directory
DIRECTORY=<directory_object>
See FULL Demo Below

Dumpfile
DUMPFILE=<expdat.dmp | file_name.dmp> -- can be used more than 1X at once
See FULL Demo Below

Encryption
Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION parameter is specified, then the default mode is TRANSPARENT.

If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is open, then the default is DUAL. If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is closed, then the default is PASSWORD.
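The default-mode rules just described can be sketched as a small decision function (a paraphrase of the text above for illustration, not Oracle code):

```shell
# Returns the default ENCRYPTION_MODE given whether ENCRYPTION_PASSWORD
# was supplied (y/n) and whether the Oracle Encryption Wallet is open (y/n).
default_encryption_mode() {
  local password_given=$1 wallet_open=$2
  if [ "$password_given" = y ]; then
    if [ "$wallet_open" = y ]; then echo DUAL; else echo PASSWORD; fi
  else
    echo TRANSPARENT
  fi
}

default_encryption_mode n n   # only ENCRYPTION specified -> TRANSPARENT
default_encryption_mode y y   # password + open wallet    -> DUAL
default_encryption_mode y n   # password + closed wallet  -> PASSWORD
```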


ENCRYPTION=<ALL | DATA_ONLY | ENCRYPTED_COLUMNS_ONLY | METADATA_ONLY | NONE>
ENCRYPTION_ALGORITHM=<AES128 | AES192 | AES256>
ENCRYPTION_MODE=<DUAL | PASSWORD | TRANSPARENT>
ENCRYPTION_PASSWORD=<password>
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo11.dmp ENCRYPTION=all ENCRYPTION_ALGORITHM=aes256 ENCRYPTION_MODE=dual ENCRYPTION_PASSWORD=a1pha

Estimate
ESTIMATE=<BLOCKS | STATISTICS>
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo12.dmp ESTIMATE=blocks

expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo13.dmp ESTIMATE=statistics
Estimate Only
ESTIMATE_ONLY=<N | Y>
expdp uwclass/uwclass SCHEMAS=uwclass ESTIMATE_ONLY=y

open export.log with an editor
Exclude
EXCLUDE=<object_type[:name_clause] [, ...]>

-- exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo14.dmp EXCLUDE=constraint

-- exclude referential integrity (foreign key) constraints
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo15.dmp EXCLUDE=ref_constraint

-- exclude object grants on all object types and system priv grants
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo16.dmp EXCLUDE=grant

-- excludes the definitions of users
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo17.dmp EXCLUDE=user

-- excludes views
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo18.dmp EXCLUDE=view,package,function

-- to exclude a specific user and all objects of that user, specify a filter such as the following (where hr is the schema name of the user you want to exclude):
expdp uwclass/uwclass FULL=y DIRECTORY=data_pump_dir DUMPFILE=demo19.dmp EXCLUDE=SCHEMA:\"='HR'\"
Filesize
FILESIZE=<0 | integer[B | K | M | G]>

The default, zero, means unlimited.
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo%U.dmp COMPRESSION=none FILESIZE=500M

Flashback SCN
FLASHBACK_SCN=<scn>
conn / as sysdba
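With FILESIZE=500M and a %U file template, Data Pump rolls over to a new dump file each time the limit is reached, so the number of pieces is the estimated dump size divided by FILESIZE, rounded up. A quick sketch of that arithmetic (the 2 GB estimate is an assumed example figure):

```shell
filesize=$((500 * 1024 * 1024))    # FILESIZE=500M in bytes
estimate=2147483648                # assumed total dump size: 2 GB
# ceiling division: round up to the next whole dump file piece
pieces=$(( (estimate + filesize - 1) / filesize ))
echo "$pieces dump file pieces generated via the %U template"
```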

SELECT dbms_flashback.get_system_change_number
FROM dual;

exit

expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo20.dmp FLASHBACK_SCN=36477000
Flashback Time
FLASHBACK_TIME=<timestamp>
conn / as sysdba

SELECT dbms_flashback.get_system_change_number
FROM dual;

SELECT SCN_TO_TIMESTAMP(dbms_flashback.get_system_change_number)
FROM dual;

exit

expdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo21.dmp FLASHBACK_TIME =\"TO_TIMESTAMP('01-SEP-2012 08:08:08', 'DD-MON-YYYY HH24:MI:SS')\"
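The backslash-escaped quoting above is shell-dependent; placing FLASHBACK_TIME in a parameter file sidesteps the escaping entirely. A sketch (the file name flashback.par is an assumption):

```shell
# Write the parameters, including the unescaped FLASHBACK_TIME value,
# to a parameter file instead of the command line.
cat > flashback.par <<'EOF'
SCHEMAS=uwclass
DIRECTORY=data_pump_dir
DUMPFILE=demo21.dmp
FLASHBACK_TIME="TO_TIMESTAMP('01-SEP-2012 08:08:08', 'DD-MON-YYYY HH24:MI:SS')"
EOF
# then: expdp uwclass PARFILE=flashback.par
```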
Full
See BASIC EXPORT TYPES: Above

Help
HELP=<N | Y>
expdp uwclass HELP=y

Include
INCLUDE=<object_type[:name_clause] [, ...]>
expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo22.dmp INCLUDE=table

expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo23.dmp INCLUDE=TABLE:\"IN ('SERVERS', 'SERV_INST')\"

expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo24.dmp INCLUDE=procedure

expdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo25.dmp INCLUDE=INDEX:\"LIKE 'PK%'\"
Job Name
JOB_NAME=<job_name>
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo26.dmp JOB_NAME=uwjob

Logfile
LOGFILE=<export.log | directory_object:file_name>
expdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo27.dmp LOGFILE=data_pump_dir:demo27.log

Metrics
METRICS=<N | Y>
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo28.dmp METRICS=y

Network Link
NETWORK_LINK=<database_link>
expdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo28.dmp NETWORK_LINK=fixed_user

No Logfile
NOLOGFILE=<N | Y>
expdp uwclass TABLES=servers, airplanes DUMPFILE=data_pump_dir:demo29.dmp NOLOGFILE=y

Parallel
PARALLEL=<1 | parallel_degree>
expdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo30.dmp PARALLEL=2

Parfile
PARFILE=<[directory_object.]file_name>
-- create this as a text file in the data_pump_dir directory
TABLES=servers DUMPFILE=data_pump_dir:demo31.dmp
LOGFILE=data_pump_dir:demo31.log
PARALLEL=2

expdp uwclass PARFILE=data_pump_dir:parfile.par
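One way to create that parameter file from the shell: the heredoc below simply writes the four lines shown above to the current directory; move the file to the data_pump_dir location (c:\temp in this demo) before running expdp.

```shell
# Write the Data Pump parameter file used by the demo above.
cat > parfile.par <<'EOF'
TABLES=servers
DUMPFILE=data_pump_dir:demo31.dmp
LOGFILE=data_pump_dir:demo31.log
PARALLEL=2
EOF
# then: expdp uwclass PARFILE=data_pump_dir:parfile.par
```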
Query
QUERY=<[schema.][table_name:]query_where_clause>
expdp uwclass TABLES=airplanes DUMPFILE=data_pump_dir:demo32.dmp QUERY=airplanes:\"WHERE program_id = ''737''\"

Remap Data
REMAP_DATA=<[schema.]tablename.column_name:[schema.]pkg.function>
conn uwclass/uwclass

CREATE OR REPLACE PACKAGE remap IS
  FUNCTION timestwo (inval NUMBER) RETURN NUMBER;
END remap;
/

CREATE OR REPLACE PACKAGE BODY remap IS
  FUNCTION timestwo (inval NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN inval*2;
  END timestwo;
END remap;
/

expdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo33.dmp REMAP_DATA=uwclass.servers.srvr_id:uwclass.remap.timestwo
Reuse Dump Files
REUSE_DUMPFILES=<N | Y>
expdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo34.dmp REUSE_DUMPFILES=y

Sample
SAMPLE=<[[schema_name.]table_name:]sample_percent>
expdp uwclass/uwclass TABLES=airplanes DUMPFILE=data_pump_dir:demo35.dmp SAMPLE=\"UWCLASS.AIRPLANES:10\"

Schema
See BASIC EXPORT TYPES: Above

Service Name
SERVICE_NAME=<service_name>
expdp uwclass/uwclass TABLES=airplanes DUMPFILE=ctemp:demo36.dmp SERVICE_NAME=oradata

Source Edition
SOURCE_EDITION=<edition_name>
expdp uwclass/uwclass TABLES=airplanes DUMPFILE=ctemp:demo37.dmp SOURCE_EDITION=ORA$ASE

Status
STATUS=<0 | seconds> -- how often job status is displayed
expdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo38.dmp STATUS=5

Tables
See BASIC EXPORT TYPES: Above

Tablespaces
See BASIC EXPORT TYPES: Above

Transport Full Check
TRANSPORT_FULL_CHECK=<N | Y>
See TRANSPORT_TABLESPACES Demo Below

Version
VERSION=<COMPATIBLE | LATEST | version_string>

expdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo39.dmp VERSION=latest
Importing Schemas

Demo Setup

conn / as sysdba

desc dba_directories

col owner format a10
col directory_path format a70

SELECT * FROM dba_directories;

CREATE OR REPLACE DIRECTORY data_pump_dir AS 'c:\temp';
-- default is $ORACLE_BASE/admin//dpdump

GRANT imp_full_database TO uwclass;
Query a list of database dependent objects

SQL> SELECT object_path
     FROM database_export_objects
     WHERE object_path LIKE 'TABLE%'
     ORDER BY 1;

OBJECT_PATH
----------------------------------------------------
TABLE
TABLE/AUDIT_OBJ
TABLE/COMMENT
TABLE/CONSTRAINT
TABLE/CONSTRAINT/REF_CONSTRAINT
TABLE/FGA_POLICY
TABLE/GRANT
TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
TABLE/INDEX
TABLE/INDEX/STATISTICS
TABLE/INSTANCE_CALLOUT
TABLE/MATERIALIZED_VIEW_LOG
TABLE/POST_INSTANCE/GRANT/PROCDEPOBJ_GRANT
TABLE/POST_INSTANCE/PROCDEPOBJ
TABLE/POST_INSTANCE/PROCDEPOBJ_AUDIT
TABLE/POST_TABLE_ACTION
TABLE/PRE_TABLE_ACTION
TABLE/PROCACT_INSTANCE
TABLE/RLS_CONTEXT
TABLE/RLS_GROUP
TABLE/RLS_POLICY
TABLE/TRIGGER
TABLESPACE
TABLESPACE_QUOTA
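These object_path values are what the EXCLUDE and INCLUDE filters match against. For example, a parameter file that skips only foreign-key constraints can name the full path (the file name ref.par is an assumption):

```shell
# Parameter file using a full object path from database_export_objects.
cat > ref.par <<'EOF'
SCHEMAS=uwclass
DIRECTORY=data_pump_dir
DUMPFILE=norefs.dmp
EXCLUDE=TABLE/CONSTRAINT/REF_CONSTRAINT
EOF
# then: expdp uwclass PARFILE=ref.par
```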
Basic Import Types

Full Import
FULL=<N | Y>
impdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo01.dmp FULL=y

Schema Import
SCHEMAS=<schema_name [, ...]>
SQL> conn / as sysdba

SQL> CREATE USER abc
     IDENTIFIED BY abc
     DEFAULT TABLESPACE users
     TEMPORARY TABLESPACE temp
     QUOTA unlimited ON users;

SQL> GRANT create session, create table TO abc;
SQL> GRANT read on directory data_pump_dir TO abc;
SQL> GRANT write on directory data_pump_dir TO abc;

SQL> conn abc/abc

SQL> CREATE TABLE zzyzx AS
     SELECT * FROM all_tables;

expdp uwclass/uwclass DUMPFILE=data_pump_dir:demo02.dmp SCHEMAS=abc

SQL> DROP TABLE zzyzx;

impdp uwclass/uwclass DUMPFILE=data_pump_dir:demo02.dmp SCHEMAS=abc

impdp abc DIRECTORY=data_pump_dir DUMPFILE=demo03.dmp SCHEMAS=abc

SQL> SELECT owner, object_type, created
     FROM dba_objects_ae
     WHERE object_name = 'ZZYZX';
Table Import
TABLES=<[schema_name.]table_name[:partition_name] [, ...]>
impdp uwclass/uwclass DUMPFILE=data_pump_dir:demo04.dmp TABLES=servers, serv_inst

Tablespace Import
TABLESPACES=<tablespace_name [, ...]>
impdp uwclass DUMPFILE=data_pump_dir:demo05.dmp TABLESPACES=uwclass,users TRANSPORT_FULL_CHECK=y

Transportable Tablespace Import
TRANSPORT_TABLESPACES=<tablespace_name [, ...]>

The default tablespace of the user performing the export must not be set to one of the tablespaces being transported.
expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo06.dmp
TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo6.log

conn / as sysdba

ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE example READ ONLY;

expdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo07.dmp
TRANSPORT_TABLESPACES=users,example TRANSPORT_FULL_CHECK=y LOGFILE=demo7.log

ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE example READ WRITE;
Legacy Mode Mappings
Legacy Import Cmd: How Handled
BUFFER: This parameter is ignored because Data Pump does not make use of conventional mode.

CHARSET: This parameter is desupported and will cause the Data Pump Import operation to abort.

COMMIT: This parameter is ignored. Data Pump Import automatically performs a commit after each table is processed.

COMPILE: This parameter is ignored. Data Pump Import compiles procedures after they are created.

CONSTRAINTS: If original Import used CONSTRAINTS=n, then Data Pump Import uses the EXCLUDE=CONSTRAINT parameter. If original Import used CONSTRAINTS=y, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

DATAFILES: The Data Pump Import TRANSPORT_DATAFILES parameter is used.

DESTROY: If original Import used DESTROY=y, then Data Pump Import uses the REUSE_DATAFILES=y parameter. If original Import used DESTROY=n, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

FEEDBACK: The Data Pump Import STATUS=30 command is used. Note that this is not a direct mapping because the STATUS command returns the status of the import job, as well as the rows being processed. In original Import, feedback was given after a certain number of rows, as specified with the FEEDBACK command. In Data Pump Import, the status is given every so many seconds, as specified by STATUS.

FILE: Data Pump attempts to determine the path that was specified or defaulted to for the FILE parameter, and also to determine whether a directory object exists to which the schema has read and write access.

FILESIZE: This parameter is ignored because the information is already contained in the Data Pump dump file set.

FROMUSER: The Data Pump SCHEMAS parameter is used. If FROMUSER was used without TOUSER also being used, then import schemas that have the IMP_FULL_DATABASE role cause Data Pump Import to attempt to create the schema and then import that schema's objects. Import schemas that do not have the IMP_FULL_DATABASE role can only import their own schema from the dump file set.

GRANTS: If original Import used GRANTS=n, then Data Pump uses the EXCLUDE=OBJECT_GRANT parameter. If original Import used GRANTS=y, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

IGNORE: If original Import used IGNORE=y, then Data Pump Import uses the TABLE_EXISTS_ACTION=APPEND parameter. This causes the processing of table data to continue. If original Import used IGNORE=n, the parameter is ignored and does not need to be remapped because that is the Data Pump Import default behavior.

INDEXES: If original Import used INDEXES=n, then Data Pump uses the EXCLUDE=INDEX parameter. If original Import used INDEXES=y, the parameter is ignored and does not need to be remapped because that is the Data Pump default behavior.

INDEXFILE: The Data Pump Import SQLFILE=[directory_object:]file_name and INCLUDE=INDEX parameters are used. The same method and attempts made when looking for a directory object described for the FILE parameter also take place for the INDEXFILE parameter. If no directory object was specified on the original Import, then Data Pump uses the directory object specified with the DIRECTORY parameter.

LOG: Data Pump attempts to determine the path that was specified or defaulted to for the LOG parameter, and also to determine whether a directory object exists to which the schema has read and write access.

RECORDLENGTH: This parameter is ignored because Data Pump automatically takes care of buffer sizing.

RESUMABLE: This parameter is ignored because Data Pump automatically provides this functionality.

RESUMABLE_NAME: This parameter is ignored because Data Pump automatically provides this functionality.

RESUMABLE_TIMEOUT: This parameter is ignored because Data Pump automatically provides this functionality.

ROWS: If original Import used ROWS=n, then Data Pump uses the CONTENT=METADATA_ONLY parameter. If original Import used ROWS=y, then Data Pump uses the CONTENT=ALL parameter.

SHOW: If SHOW=y is specified, the Data Pump Import SQLFILE=[directory_object:]file_name parameter is used to write the DDL for the import operation to a file. Only the DDL (not the entire contents of the dump file) is written to the specified file. (Note that the output is not shown on the screen as it was in original Import.)

STATISTICS: This parameter is ignored because statistics are always saved for tables as part of a Data Pump operation.

STREAMS_CONFIGURATION: This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.

STREAMS_INSTANTIATION: This parameter is ignored because Data Pump Import automatically determines it; it does not need to be specified.

TABLESPACES: If original Import also specified TRANSPORT_TABLESPACE=n (the default), then Data Pump ignores the TABLESPACES parameter. If original Import also specified TRANSPORT_TABLESPACE=y, then Data Pump takes the names supplied for this TABLESPACES parameter and applies them to the Data Pump TRANSPORT_TABLESPACES parameter.

TOID_NOVALIDATE: This parameter is ignored. OIDs are no longer used for type validation.

TOUSER: The REMAP_SCHEMA parameter is used. There may be more objects imported than with original Import. Data Pump may create the target schema if it does not already exist.

TRANSPORT_TABLESPACE: If original Import used TRANSPORT_TABLESPACE=n, then Data Pump Import ignores this parameter.

TTS_OWNERS: This parameter is ignored because this information is automatically stored in the Data Pump dump file set.

VOLSIZE: When the VOLSIZE parameter is used, it means the location specified for the dump file is a tape device. The Data Pump dump file format does not support tape devices. The operation terminates with an error.
Additional Import Parameters

Attach
ATTACH=<[schema_name.]job_name>
impdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo08.dmp ATTACH=uw_job NOLOGFILE=y

Cluster
CLUSTER=<Y | N>
impdp uwclass CLUSTER=N

Content
CONTENT=<ALL | DATA_ONLY | METADATA_ONLY>
impdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo09.dmp CONTENT=metadata_only

Data Options
DATA_OPTIONS=<SKIP_CONSTRAINT_ERRORS | DISABLE_APPEND_HINT>
impdp uwclass DATA_OPTIONS=disable_append_hint

Directory
DIRECTORY=<DATA_PUMP_DIR | directory_object>
impdp uwclass DIRECTORY=CTEMP

Dumpfile
DUMPFILE=<file_name [, ...]> -- can be used more than 1X at once
impdp uwclass DUMPFILE=CTEMP:demo10.dmp

Encryption
Default: The default mode depends on which other encryption-related parameters are used. If only the ENCRYPTION parameter is specified, then the default mode is TRANSPARENT.

If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is open, then the default is DUAL. If the ENCRYPTION_PASSWORD parameter is specified and the Oracle Encryption Wallet is closed, then the default is PASSWORD.


ENCRYPTION_PASSWORD=<password>
impdp uwclass/uwclass DIRECTORY=data_pump_dir DUMPFILE=demo11.dmp ENCRYPTION_PASSWORD=a1pha

Estimate
ESTIMATE=<BLOCKS | STATISTICS>
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo10.dmp ESTIMATE=blocks

impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo11.dmp ESTIMATE=statistics
Exclude
EXCLUDE=<object_type[:name_clause] [, ...]>

-- exclude all (nonreferential) constraints, except for NOT NULL constraints and any constraints needed for successful table creation and loading
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo12.dmp EXCLUDE=constraint

-- exclude referential integrity (foreign key) constraints
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo13.dmp EXCLUDE=ref_constraint

-- exclude object grants on all object types and system priv grants
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo14.dmp EXCLUDE=grant

-- excludes the definitions of users
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo15.dmp EXCLUDE=user

-- excludes views
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo16.dmp EXCLUDE=view,package,function

-- to exclude a specific user and all objects of that user, specify a filter such as the following (where hr is the schema name of the user you want to exclude):
impdp uwclass/uwclass FULL=y DIRECTORY=data_pump_dir DUMPFILE=demo17.dmp EXCLUDE=SCHEMA:\"='HR'\"
Flashback SCN
FLASHBACK_SCN=<scn>
conn / as sysdba

SELECT dbms_flashback.get_system_change_number FROM dual;

exit

impdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo18.dmp FLASHBACK_SCN=36477000
Flashback Time
FLASHBACK_TIME=<timestamp>
conn / as sysdba

SELECT dbms_flashback.get_system_change_number
FROM dual;

SELECT SCN_TO_TIMESTAMP(dbms_flashback.get_system_change_number)
FROM dual;

exit

impdp uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo19.dmp
FLASHBACK_TIME=\"TO_TIMESTAMP('01-SEP-2012 08:08:08', 'DD-MON-YYYY HH24:MI:SS')\"
Help
HELP=<N | Y>
impdp uwclass HELP=y

Include
INCLUDE=<object_type[:name_clause] [, ...]>
impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo20.dmp INCLUDE=table

impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo21.dmp INCLUDE=TABLE:\"IN ('SERVERS', 'SERV_INST')\"

impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo22.dmp INCLUDE=procedure

impdp uwclass/uwclass SCHEMAS=uwclass DIRECTORY=data_pump_dir DUMPFILE=demo23.dmp INCLUDE=INDEX:\"LIKE 'PK%'\"
Job Name
JOB_NAME=<job_name>
impdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo24.dmp JOB_NAME=uwjob

Logfile
LOGFILE=<import.log | directory_object:file_name>
impdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo25.dmp LOGFILE=data_pump_dir:demo25.log

Network Link
NETWORK_LINK=<database_link>
impdp uwclass DIRECTORY=data_pump_dir DUMPFILE=demo26.dmp NETWORK_LINK=fixed_user

No Logfile
NOLOGFILE=<N | Y>
impdp uwclass TABLES=servers, airplanes DUMPFILE=data_pump_dir:demo27.dmp NOLOGFILE=y

Parallel
PARALLEL=<1 | parallel_degree>
impdp uwclass TABLES=servers DUMPFILE=data_pump_dir:demo28.dmp PARALLEL=2

Parfile
PARFILE=<[directory_object.]file_name>
-- create this as a text file in the data_pump_dir directory
TABLES=servers DUMPFILE=data_pump_dir:demo29.dmp
LOGFILE=data_pump_dir:demo29.log
PARALLEL=2

impdp uwclass PARFILE=data_pump_dir:parfile.par
Partition Options
PARTITION_OPTIONS=<NONE | DEPARTITION | MERGE>
impdp uwclass PARTITION_OPTIONS=departition

Query
QUERY=<[schema.][table_name:]query_where_clause>
impdp uwclass TABLES=airplanes DUMPFILE=data_pump_dir:demo30.dmp QUERY=airplanes:\"WHERE program_id = ''737''\"

Remap Data
REMAP_DATA=<[schema.]tablename.column_name:[schema.]pkg.function>
CREATE TABLE holder (
holder_name VARCHAR2(30) NOT NULL,
card_number NUMBER(16)   NOT NULL);

CREATE TABLE activity (
vendor      VARCHAR2(30) NOT NULL,
card_number NUMBER(16)   NOT NULL,
sales_total NUMBER(10,2) NOT NULL);

INSERT INTO holder VALUES ('Morgan', 4124555566661234);
INSERT INTO holder VALUES ('Kyte', 3776555566665678);
INSERT INTO holder VALUES ('Norgaard', 5058555566669012);

INSERT INTO activity VALUES ('Amazon.com', 4124555566661234, 100.00);
INSERT INTO activity VALUES ('eBay.com', 4124555566661234, 59.50);
INSERT INTO activity VALUES ('Oracle', 3776555566665678, 50.50);
INSERT INTO activity VALUES ('Miracle', 5058555566669012, 42.42);

CREATE OR REPLACE PACKAGE hidedata AUTHID CURRENT_USER IS
 FUNCTION newcc(oldno IN NUMBER) RETURN NUMBER;
END hidedata;
/

CREATE OR REPLACE PACKAGE BODY hidedata IS
 TYPE cc_list IS TABLE OF NUMBER INDEX BY VARCHAR2(16);
 cc_remaps cc_list;
 cc_seed   NUMBER(15) := 000000000000010;

 FUNCTION newcc(oldno IN NUMBER) RETURN NUMBER IS
 BEGIN
   IF NOT cc_remaps.EXISTS(oldno) THEN
     cc_seed := cc_seed+10;
     cc_remaps(oldno) := ROUND(oldno, -15)+cc_seed+MOD(oldno,10);
   END IF;
   RETURN cc_remaps(oldno);
 END newcc;
END hidedata;
/

col card_number format 9999999999999999

SELECT vendor, hidedata.newcc(card_number) CARD_NUMBER, sales_total
FROM activity;

expdp uwclass/uwclass tables=uwclass.holder,uwclass.activity remap_data=uwclass.holder.card_number:hidedata.newcc
remap_data=uwclass.activity.card_number:hidedata.newcc directory=ctemp dumpfile=hremp31.dmp
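For a quick sanity check outside the database, the newcc arithmetic can be mirrored in bash (illustrative only; bash 4+ required, with an associative array standing in for the PL/SQL index-by table):

```shell
declare -A cc_remaps   # cache: the same card number always maps to the same output
cc_seed=10

newcc() {
  local old=$1
  if [ -z "${cc_remaps[$old]:-}" ]; then
    cc_seed=$((cc_seed + 10))
    # ROUND(old, -15): round to the nearest multiple of 10^15
    local rounded=$(( (old + 500000000000000) / 1000000000000000 * 1000000000000000 ))
    cc_remaps[$old]=$(( rounded + cc_seed + old % 10 ))
  fi
  echo "${cc_remaps[$old]}"
}

newcc 4124555566661234   # masked value, last digit preserved
newcc 4124555566661234   # cached: identical result
```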
Remap Datafile
REMAP_DATAFILE=<source_datafile:target_datafile>
TBD

Remap Schema
REMAP_SCHEMA=<source_schema:target_schema>
expdp system/oracle1 DIRECTORY=data_pump_dir DUMPFILE=streamsdemo32.dmp SCHEMAS=scott

impdp system/oracle1 DUMPFILE=data_pump_dir:streamsdemo32.dmp SCHEMAS=scott REMAP_SCHEMA=scott:scottrep
Remap Table
REMAP_TABLE=<[schema.]old_tablename[.partition]:new_tablename>
TBD

Remap Tablespace
REMAP_TABLESPACE=<source_tablespace:target_tablespace>
impdp uwclass/uwclass DIRECTORY=data_pump_dir dumpfile=demo33.dmp schemas=abc job_name=export_cw4ora
REMAP_SCHEMA=abc:def REMAP_TABLESPACE=users:uw_data REMAP_TABLESPACE=example:uwdata exclude=GRANT
Reuse Datafiles
REUSE_DATAFILES=<N | Y>
impdp uwclass/uwclass REUSE_DATAFILES=Y

Schemas
SCHEMAS=<schema_name [, ...]>
impdp hr/hr SCHEMAS=uwclass LOGFILE=uwdp.log DUMPFILE=dump34.dmp

Service Name
SERVICE_NAME=<service_name>
impdp hr/hr SERVICE_NAME=orabase

Skip Unusable Indexes
SKIP_UNUSABLE_INDEXES=<value_of_the_corresponding_init_parameter | Y | N>
impdp uwclass SKIP_UNUSABLE_INDEXES=Y

Source Edition
SOURCE_EDITION=<default_database_edition | edition_name>
impdp hr DIRECTORY=dpump_dir1 SOURCE_EDITION=exp_edition

SQL File
SQLFILE=<[directory_object:]file_name>
impdp hr DIRECTORY=dpump_dir1 DUMPFILE=demo35.dmp SQLFILE=dpump_dir2:expfull.sql

Status Demo
STATUS=<0 | freq_in_seconds> -- how often job status is displayed
impdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo36.dmp STATUS=5

Streams Configuration
STREAMS_CONFIGURATION=<Y | N>
impdp hr/hr DIRECTORY=dpump_dir1 STREAMS_CONFIGURATION=N

Table Exists Action
TABLE_EXISTS_ACTION=<SKIP | APPEND | TRUNCATE | REPLACE>
impdp hr/hr DIRECTORY=dpump_dir1 TABLE_EXISTS_ACTION=APPEND

Target Editions
TARGET_EDITIONS=<edition_name [, ...]>
impdp uwclass/uwclass TARGET_EDITIONS=edition2,edition3

Transform
TRANSFORM=<transform_name:value[:object_type]>
TBD

Transport Data Files
TRANSPORT_DATAFILES=<datafile_name [, ...]>
impdp uwclass/uwclass DIRECTORY=dpump_dir1 DUMPFILE=tdf.dmp TRANSPORTABLE=always TRANSPORT_DATAFILES='/app/oracle/oradata/orabase/uwclass01.dbf' TRANSPORT_FULL_CHECK=N

Transportable
TRANSPORTABLE=<ALWAYS | NEVER>
See TRANSPORT_DATAFILES Demo Above

Version Demo
VERSION=<COMPATIBLE | LATEST | version_string>
impdp uwclass/uwclass SCHEMAS=uwclass DUMPFILE=data_pump_dir:demo37.dmp VERSION=latest
