Steps for Schema Refresh Using Datapump Expdp & Impdp Utility | Oracle Database


Schema/object refresh is one of the routine tasks in a DBA's life: moving objects from one schema to another, either in the same database or in a different one. Generally we use expdp/impdp or the traditional export/import method for a refresh, but here I explain the Data Pump utility because, compared to traditional export/import, Data Pump is much faster and offers several extra features, such as the ability to stop and resume a job in the middle and parallelism to improve performance.
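As a quick illustration of the stop/resume feature, a running Data Pump job can be paused and restarted from its interactive console. The job name below is only an assumption (SYS_EXPORT_SCHEMA_01 is the default for a first schema-mode export); query DBA_DATAPUMP_JOBS for the actual name.

```shell
# Attach to the running job from another terminal (job name is an assumption):
]$ expdp \'/ AS SYSDBA\' attach=SYS_EXPORT_SCHEMA_01

# At the Export> prompt, pause the job:
Export> STOP_JOB=IMMEDIATE

# Later, attach again and resume where it left off:
]$ expdp \'/ AS SYSDBA\' attach=SYS_EXPORT_SCHEMA_01
Export> START_JOB
```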

Below I have explained the complete procedure of a schema refresh, step by step.

Oracle Data Pump was introduced in Oracle 10g; it enables very high-speed movement of data and metadata from one database to another. Before performing the schema refresh, check whether the user performing the export has the EXP_FULL_DATABASE role.
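A quick way to check who holds that role (run this as a privileged user):

```sql
SQL> select grantee, granted_role
     from dba_role_privs
     where granted_role = 'EXP_FULL_DATABASE';
```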


Follow below steps in source and target database sides:

Source Database Side:

Step-1:

Check whether the schema exists or not; the schema must reside in the source database.

SQL> select username, account_status, created from dba_users where username='ORACLE';

Step-2:

Check the schema size

SQL> select owner, sum(bytes)/1024/1024 "MB" from dba_segments where owner='ORACLE' group by owner;

Step-3:

Take a count of the schema objects; this will be useful after the refresh completes, to compare the target schema against the source.

SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;

Step-4:

Before taking the export, first check the size of the mount point where you are going to store the export dumpfile; if the mount point does not have sufficient space, the export job will fail.

example:

]$  cd /d01/exports

exports] $ df -h  .  (in Linux) df -g . (AIX and Sun Solaris)

Step-5:

Create a Data Pump directory at the database level; the default Data Pump directory location is "/app/oracle/product/10.2.0/rdbms/log/".

First, create the directory at the OS level:

]$ mkdir -p  /d01/exports

Next, create the directory at the database level:

SQL> create or replace directory DATAPUMP as '/d01/exports';

Step-6:

Now take the export of schema.

]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME.log schemas=ORACLE
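For a large schema, the same export can be sped up with Data Pump parallelism; the degree 4 here is only an example, and the %U substitution lets Data Pump create one dumpfile per worker:

```shell
]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME_%U.dmp logfile=ORACLE_DBNAME.log schemas=ORACLE parallel=4
```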

Step-7:

If you are going to import on the same server, there is no need to move the dumpfile anywhere. If you are going to import this dumpfile on some other server, copy it with the SCP command:

expdp]$ scp -p ORACLE_DBNAME.dmp username@servername:/d01/targetlocation   (it will ask for the target server password)

We have completed all the steps at the source database end; now move to the target database side.


Target Database:

Step-1:

Check the mount point size; it should be larger than the schema size.

]$  cd /d01/exports

exports]$ df  -h  .  (in Linux)  df  -g . (AIX and Sun Solaris)

Step-2:

Create a directory, the same way we created one for the source database.

First, create the directory at the OS level:

]$ mkdir -p  /d01/exports

Next, create the directory at the database level:

SQL>  create or replace directory DATAPUMP as '/d01/exports';

Step-3:

Take a backup of the target schema before importing (to be on the safe side), using the export command.

]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME_bkp.dmp logfile=ORACLE_DBNAME_bkp.log schemas=ORACLE

Step-4:

SCRIPTS: Find the complete list of objects in the schema

 

Use these scripts when you do not need to drop the schema itself, but only the dependent objects in it, for example to preserve DB links, grants and privileges.

The scripts below generate the drop statements needed to drop almost all objects from a specified schema (they prompt you for the schema name); occasionally one or two objects will have problems and you will have to drop those manually.

 

EXECUTION

Log in to the database where the schema to be dropped exists, then copy and paste the following script, double-checking that you are in the correct database!

 

--Initializations

set linesize 1000;

SET VERIFY OFF

col owner format a15;

col object_name format a30;

col object_type format a20;

 

 

--Select Non-system object owners

SELECT OWNER,OBJECT_TYPE,COUNT(*)

FROM SYS.DBA_OBJECTS

WHERE OWNER NOT IN ('SYS','SYSTEM','TSMSYS','ORACLE_OCM','WMSYS','PATMAN','OUTLN','PUBLIC','DBSNMP','XDB','APPQOSSYS','CTXSYS')

GROUP BY OWNER,OBJECT_TYPE

ORDER BY OWNER,OBJECT_TYPE;

 

--Select specific Owner, Object_Type & Count

SELECT OWNER,OBJECT_TYPE,COUNT(*)

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

GROUP BY OWNER,OBJECT_TYPE

ORDER BY OWNER,OBJECT_TYPE;

 

--Drops: Tables, Indexes, & Triggers

SELECT 'DROP TABLE '||OWNER||'.'||OBJECT_NAME||' CASCADE CONSTRAINTS PURGE;'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('TABLE');

 

--Drops: Sequences, Views, Packages, Functions & Procedures, Synonyms

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||';'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('PACKAGE','SEQUENCE','VIEW','FUNCTION','PROCEDURE','SYNONYM','TRIGGER');

 

--Drops: Types

SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||' FORCE;'

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('TYPE');

 

--DO NOT DROP OR IMPORT DBLINKS, EXCLUDE=DATABASE_LINK

SELECT OWNER, OBJECT_TYPE, COUNT(*)

FROM SYS.DBA_OBJECTS

WHERE OWNER = '&&USER'

AND OBJECT_TYPE IN ('DATABASE LINK')

GROUP BY OWNER, OBJECT_TYPE;
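One convenient way to run the generated statements is to spool them into a script first and then execute that script (the file name drop_objects.sql is only an example):

```sql
SQL> set heading off feedback off pagesize 0 linesize 200
SQL> spool drop_objects.sql
SQL> -- run the SELECT ... 'DROP ...' queries above here
SQL> spool off
SQL> @drop_objects.sql
```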

OR

Drop the schema at the destination database:

SQL> drop user <schema_name> cascade;

(It is better to drop only the schema objects instead of dropping the whole schema.)

 

Step-5:

Import the dumpfile into the target schema:

impdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME_imp.log schemas=ORACLE
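If the source and target schema names differ, the same import can be redirected with REMAP_SCHEMA; the target schema name ORACLE_NEW here is only an example:

```shell
impdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME_imp.log remap_schema=ORACLE:ORACLE_NEW
```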

 

Step-6:

Compare the Object Count with source database.

SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;

(If all the object counts match and everything looks good, go ahead and run utlrp.sql.)

Step-7:

Check the invalid objects count, if any:

SQL> select owner, object_type, status, count(*)

from sys.dba_objects

where status = 'INVALID'

group by owner, object_type, status

order by owner, object_type, status;

SQL> @?/rdbms/admin/utlrp.sql

Simple Introduction of Oracle Multitenant Architecture

Oracle Multitenant Architecture:

The multitenant database is one of the biggest architectural changes in Oracle Database history, introduced in the 12c (12.1) release. With this change Oracle introduced the new concepts of the CDB (container database) and the PDB (pluggable database): unlike a normal non-CDB database, we now have the option to create multiple pluggable databases under a single instance, saving time, storage, resources and administration cost.

The basic multitenant architecture contains the root CDB (CDB$ROOT), a seed PDB named PDB$SEED, and zero or more user-created PDBs.


Oracle Multitenant Architecture

Container Database:

The root container is named CDB$ROOT. In the multitenant architecture the CDB is very similar to a conventional Oracle database, as it contains most of the working parts you are already familiar with (control files, data files, redo logs, temp files, undo, etc.). It also houses the data dictionary for the objects owned by the root container and those visible to all PDBs.

Seed PDB(PDB$SEED):

Every root container has a default seed pluggable database. The seed PDB is a system-supplied template that the CDB uses to create new PDBs; you cannot add or modify objects in PDB$SEED.

PDB:

A pluggable database is a user-created entity that contains the data and code required for a specific set of features. Creating a new PDB copies the files of the seed PDB (PDB$SEED) template, and takes only as long as the file copy itself.

For example, a PDB can support a specific application, such as a human resources or sales application. No PDBs exist at creation of the CDB. You add PDBs based on your business requirements.
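Creating a PDB from the seed looks roughly like this; the PDB name, admin user, password and file paths below are assumptions for illustration only:

```sql
SQL> CREATE PLUGGABLE DATABASE sales
     ADMIN USER sales_adm IDENTIFIED BY password
     FILE_NAME_CONVERT = ('/u01/oradata/proddb1/pdbseed/', '/u01/oradata/proddb1/sales/');

SQL> ALTER PLUGGABLE DATABASE sales OPEN;
```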


Benefits of the Multitenant Architecture:

  • Cost reduction
  • Easier and more rapid movement of data and code
  • Easier management
  • Easier monitoring of the physical database
  • Simplified maintenance tasks such as patching and upgrading the database
  • Separation of data and code
  • Reduced administration, operations, data center and storage costs

Benefits of data dictionary separation include the following:

  • Easier migration of data and code

(For example, instead of upgrading a CDB from one database release to another, you can unplug a PDB from the existing CDB, and then plug it into a newly created CDB from a higher release.)

  • Easier testing of applications

(You can develop an application on a test PDB and, when it is ready for deployment, plug this PDB into the production CDB.)
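The unplug/plug flow mentioned above looks roughly like this; the PDB name and XML manifest path are assumptions for illustration:

```sql
-- on the source CDB:
SQL> ALTER PLUGGABLE DATABASE sales CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE sales UNPLUG INTO '/u01/exports/sales.xml';

-- on the target CDB (a higher release, for example):
SQL> CREATE PLUGGABLE DATABASE sales USING '/u01/exports/sales.xml' NOCOPY;
SQL> ALTER PLUGGABLE DATABASE sales OPEN;
```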


[root@192.168.1.121 SID=none ~]# su - oracle

[oracle@192.168.1.121 SID=proddb1 ~]$sqlplus '/as sysdba'

SQL> show con_name

CON_NAME

------------------------------

CDB$ROOT

 


SQL> show pdbs;

 CON_ID CON_NAME                       OPEN MODE RESTRICTED

---------- ------------------------------ ---------- ----------

         2 PDB$SEED                       READ ONLY NO

         3 SALES                          READ WRITE NO

         4 HR                             READ WRITE NO

 

SQL> SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') FROM DUAL;

SYS_CONTEXT('USERENV','CON_NAME')
------------------------------
CDB$ROOT

 

SQL> select NAME, CDB, CON_ID, OPEN_MODE from V$DATABASE;

 

NAME      CDB     CON_ID OPEN_MODE

--------- --- ---------- --------------------

PDBDB1    YES          0 READ WRITE


 

[oracle@192.168.1.121 SID=proddb1 ~]$lsnrctl status

 

LSNRCTL for Linux: Version 12.1.0.2.0 - Production on 27-JAN-2021 16:47:00

 

Copyright (c) 1991, 2014, Oracle.  All rights reserved.

 

Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))

STATUS of the LISTENER

------------------------

Alias                     LISTENER

Version                   TNSLSNR for Linux: Version 12.1.0.2.0 - Production

Start Date                21-JAN-2021 16:57:09

Uptime                    1 days 23 hr. 49 min. 50 sec

Trace Level               off

Security                  ON: Local OS Authentication

SNMP                      OFF

Listener Parameter File   /u05/app/grid/12.1.0/network/admin/listener.ora

Listener Log File         /u05/app/oracle/diag/tnslsnr/192.168.1.121/listener/alert/log.xml

Listening Endpoints Summary...

  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.121)(PORT=1521)))

  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.121)(PORT=1521)))

Services Summary...

Service "+ASM" has 1 instance(s).

  Instance "+ASM1", status READY, has 1 handler(s) for this service...

Service "proddb1" has 1 instance(s).

  Instance "proddb1", status READY, has 1 handler(s) for this service...

Service "proddb1XDB" has 1 instance(s).

  Instance "proddb1", status READY, has 1 handler(s) for this service...

Service "sales" has 1 instance(s).

  Instance "proddb1", status READY, has 1 handler(s) for this service...

Service "hr" has 1 instance(s).

The command completed successfully