Sunday 26 July 2015

Schema Refresh

Schema Refresh using ASM Diskgroups
How to create Datapump Export Dumps within ASM diskgroups

Schema Status
-------------
select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;


EXPORT (EXPDP)

$ps -ef| grep pmon
$. oraenv
ORACLE_SID = [dbinst1] ? +ASM1
The Oracle base for ORACLE_HOME=/opt/app/11.2.0/grid is /opt/app/oracle
$asmcmd
ASMCMD> cd DATA01/TESTDB
ASMCMD> mkdir EXPDP

1. Create a directory in ASM.
SQL> create or replace directory DATAPUMP_ASM_DIR as '+DATA01/TESTDB/EXPDP';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_ASM_DIR TO SYSTEM;

2. Create a logfile directory object pointing to the filesystem, since logfiles can't be stored in ASM.
SQL> create or replace directory DATAPUMP_LOG_DIR as '/oracle/backup/testdb/expdp';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_LOG_DIR TO SYSTEM;

3. Create the corresponding OS directory on the filesystem.
$ cd /oracle/backup/testdb/
$ mkdir expdp
$ chmod -R 755 expdp
$ cd expdp
$pwd
/oracle/backup/testdb/expdp
$ df -h /oracle/backup/testdb/expdp
-- or
$ df -gt /oracle/backup/testdb/expdp

$ sqlplus "/as sysdba"

set pages 50000 lines 32767
col DIRECTORY_PATH for a80;
select * from dba_directories;

OWNER   DIRECTORY_NAME     DIRECTORY_PATH
------- ------------------ ---------------------------------------
SYS     DATAPUMP_ASM_DIR   +DATA01/TESTDB/EXPDP
SYS     DATAPUMP_LOG_DIR   /oracle/backup/testdb/expdp

Where DATAPUMP_ASM_DIR is an ASM-based directory
and   DATAPUMP_LOG_DIR is on a filesystem.

Here, the export dump is written to the ASM-based directory,
      but the log is written to another directory which is on a filesystem.

We can write export dumps to ASM diskgroups, but we can't write the expdp logfile to ASM (it is a text file).

$vi schema_export_in_ASM.par

userid="/ as sysdba"
directory=DATAPUMP_ASM_DIR
dumpfile=source_schemaname_expdp_%U.dmp
logfile=DATAPUMP_LOG_DIR:schema_export_in_ASM.log
schemas=schema1,schema2
exclude=statistics
parallel=6
compression=all
cluster=n        -- 11gR2 onwards (run the Data Pump job on the connected RAC instance only)

:wq

$nohup expdp parfile=schema_export_in_ASM.par &

$ tail -f nohup.out
-- or
$ tail -f schema_export_in_ASM.log
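
Once the export completes, the dump pieces can be verified from asmcmd (a quick check, assuming the directory created above):

$ asmcmd
ASMCMD> ls -ls +DATA01/TESTDB/EXPDP/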

==================================================================================================================

scp dumpfiles from Source Server ASM to Destination Server ASM
Take schema backup and Drop Schema Objects
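
For the copy step above, one way to move the dumps between servers (a sketch, assuming 11g asmcmd and the paths used in this post; dest-server is a placeholder, and the file names follow the %U template):

On the source server, copy each dump piece out of ASM to the filesystem, then scp across:
$ asmcmd cp +DATA01/TESTDB/EXPDP/source_schemaname_expdp_01.dmp /oracle/backup/testdb/expdp/
$ scp /oracle/backup/testdb/expdp/source_schemaname_expdp_*.dmp oracle@dest-server:/oracle/backup/testdb/impdp/

On the destination server, copy the pieces from the filesystem into ASM:
$ asmcmd cp /oracle/backup/testdb/impdp/source_schemaname_expdp_01.dmp +DATA01/TESTDB/IMPDP/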

IMPORT (IMPDP)

Schema Status
-------------
select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;

$ps -ef| grep pmon
$. oraenv
ORACLE_SID = [dbinst1] ? +ASM1
The Oracle base for ORACLE_HOME=/opt/app/11.2.0/grid is /opt/app/oracle
$asmcmd
ASMCMD> cd DATA01/TESTDB
ASMCMD> mkdir IMPDP

1. Create a directory in ASM.
SQL> create or replace directory DATAPUMP_ASM_DIR as '+DATA01/TESTDB/IMPDP';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_ASM_DIR TO SYSTEM;

2. Create a logfile directory object pointing to the filesystem, since logfiles can't be stored in ASM.
SQL> create or replace directory DATAPUMP_LOG_DIR as '/oracle/backup/testdb/impdp';
Directory created.
SQL> GRANT READ,WRITE ON DIRECTORY DATAPUMP_LOG_DIR TO SYSTEM;

3. Create the corresponding OS directory on the filesystem.
$ cd /oracle/backup/testdb/
$ mkdir impdp
$ chmod -R 755 impdp
$ cd impdp
$pwd
/oracle/backup/testdb/impdp

$ sqlplus "/as sysdba"

set pages 50000 lines 32767
col DIRECTORY_PATH for a80;
select * from dba_directories;

OWNER   DIRECTORY_NAME     DIRECTORY_PATH
------- ------------------ ---------------------------------------
SYS     DATAPUMP_ASM_DIR   +DATA01/TESTDB/IMPDP
SYS     DATAPUMP_LOG_DIR   /oracle/backup/testdb/impdp

Where DATAPUMP_ASM_DIR is an ASM-based directory
and   DATAPUMP_LOG_DIR is on a filesystem.

Here, the dump is read from the ASM-based directory,
      but the log is written to another directory which is on a filesystem.

We can read export dumps from ASM diskgroups, but we can't write the impdp logfile to ASM (it is a text file).

$vi import_schema_export_in_ASM.par

userid="/ as sysdba"
directory=DATAPUMP_ASM_DIR
dumpfile=source_schemaname_expdp_%U.dmp
logfile=DATAPUMP_LOG_DIR:import_schema_export_in_ASM.log
schemas=schema1,schema2
parallel=6
cluster=n        -- 11gR2 onwards (run the Data Pump job on the connected RAC instance only)

If required,
REMAP_SCHEMA=source_schema1:destination_schema1,source_schema2:destination_schema2
REMAP_TABLESPACE=source_tablespace1:destination_tablespace1,source_tablespace2:destination_tablespace2

:wq

$nohup impdp parfile=import_schema_export_in_ASM.par &

$ tail -f nohup.out
-- or
$ tail -f import_schema_export_in_ASM.log


Schema Status
-------------
select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;
=====================================================================================================================================================

Network Level Import of Schema
How to take Schema Import at Network Level

The NETWORK_LINK parameter initiates a network import. This means that the impdp client initiates the import request, typically to the local database. That server contacts the remote source database referenced by the database link in the NETWORK_LINK parameter, retrieves the data, and writes it directly back to the target database. There are no dump files involved.

1. Create a TNS entry in the local database's tnsnames.ora file for the remote database
Add the TNS entry for the remote database, then create a database link using that entry. The user specified in the database link must have read/write permission on the remote server's directory.

SOURCE_DATABASE_LINK =
 (DESCRIPTION =
   (ADDRESS = (PROTOCOL = TCP)(HOST = remote-loc)(PORT = 1521))
   (CONNECT_DATA =
     (SERVER = DEDICATED)
     (SID = remote-db)
   )
 )

2. Create the Database Link on the Destination database:

create database link SOURCE_DATABASE_LINK connect to SYSTEM identified by password using 'SOURCE_DATABASE_LINK';
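
Before starting the import, the link can be tested with a quick query (it should return the remote database's date):

SQL> select sysdate from dual@SOURCE_DATABASE_LINK;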

3. Script to get database link information:

$ sqlplus "/as sysdba"

DBLINKS DETAILS
---------------
set pages 50000 lines 32767
col owner for a15
col username for a10
col host for a20
col created for a20

COL DB_LINK FORMAT A30

select owner, db_link, username, host, to_char(created,'dd-mon-yyyy hh24:mi:ss') CREATED from DBA_DB_LINKS order by owner, db_link;

OWNER      DB_LINK                        USERNAME HOST    CREATED
---------- ------------------------------ -------- ------- -------

4. PAR file.
$vi schema_network_level_import.par

userid="/ as sysdba"
directory=DATA_PUMP_DIR
logfile=schema_network_level_import.log
schemas=schema1,schema2
NETWORK_LINK=source_database_link
FLASHBACK_TIME=SYSTIMESTAMP
exclude=statistics
parallel=6
cluster=n

:wq

$nohup impdp parfile=schema_network_level_import.par &

$tail -f nohup.out

$tail -f schema_network_level_import.log

Optional Parameters:
------------------
ESTIMATE={BLOCKS | STATISTICS}
FLASHBACK_SCN=scn_number
FLASHBACK_TIME="TO_TIMESTAMP()"

The Import ESTIMATE parameter is valid only if the NETWORK_LINK parameter is also specified.

The FLASHBACK_SCN parameter is valid only when the NETWORK_LINK parameter is also specified. This is because the value is passed to the source system to provide an SCN-consistent data extraction.

The FLASHBACK_TIME parameter is likewise valid only when the NETWORK_LINK parameter is also specified, because the value is passed to the source system to provide a time-consistent import.

FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.
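
For example, an import consistent as of a fixed point in time might use (timestamp value illustrative):

FLASHBACK_TIME="TO_TIMESTAMP('26-07-2015 21:00:00','DD-MM-YYYY HH24:MI:SS')"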
===================================================================================================================================================

BACKUP DATABASE LINK IN ORACLE
Database links backup
---------------------

All DBlinks backup:
---------------------
set pages 0
set long 90000
set trimspool on
SELECT u.OWNER,DBMS_METADATA.GET_DDL('DB_LINK',u.DB_LINK,u.OWNER) FROM dba_db_links u;

For a specific DBlink backup:
---------------------------------
set pages 0
set long 90000
set trimspool on
SELECT DBMS_METADATA.GET_DDL('DB_LINK','&DB_LINK_NAME','&USER_NAME') FROM dual;
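
To keep a copy on disk, the same query can be wrapped in a spool (dblink_backup.sql is just an example file name):

set pages 0
set long 90000
set trimspool on
spool dblink_backup.sql
SELECT DBMS_METADATA.GET_DDL('DB_LINK','&DB_LINK_NAME','&USER_NAME') FROM dual;
spool off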


======================================================================================================================================================
DDL

GENERAL USAGE
-------------
SET LONG 99999999
SELECT DBMS_METADATA.GET_DDL('OBJECT_TYPE','OBJECT_NAME','OWNER') FROM DUAL;

Before dropping a user (a handy script to collect all grants of the user):
--------------------------------------------------------------------------
set long 100000

select dbms_metadata.get_ddl('USER','&username') from dual;
select dbms_metadata.get_granted_ddl('SYSTEM_GRANT','&username') from dual;
select DBMS_METADATA.GET_GRANTED_DDL('OBJECT_GRANT','&username') FROM dual;
select dbms_metadata.get_granted_ddl( 'ROLE_GRANT','&username') from dual;

SET LONG 99999999
select dbms_metadata.get_ddl('TABLESPACE','&TABLESPACE_NAME') FROM DUAL;

select  dbms_metadata.get_ddl ('TABLE', '<TABLE_NAME>', '<SCHEMA_NAME>') from dual;

select dbms_metadata.get_ddl('TABLE', '<TABLE_NAME>') from dual;   -- current schema assumed when owner is omitted
select dbms_metadata.get_ddl('PACKAGE','<PACKAGE_NAME>','<OWNER_NAME>') from dual ;
select dbms_metadata.get_ddl('PACKAGE_BODY','<PACKAGE_BODY_NAME>','<OWNER_NAME>') from dual ;

select dbms_metadata.get_ddl('PROCEDURE','<PROCEDURE_NAME>','<OWNER_NAME>') from dual;

To get the create scripts of all  tables of  a particular schema
----------------------------------------------------------------
SQL>spool tables.sql
SQL>select 'select dbms_metadata.get_ddl(''TABLE'', '''||TABLE_NAME||''',''<schema>'') from dual;' FROM DBA_TABLES where owner=upper('<schema>')
/
SQL>spool off

http://tech.padipa.net/generating-create-scripts-through-dbms_metadata-package
October 25, 2012
PACKAGE BACKUP in ORACLE

PACKAGE BACKUP
--------------
set heading off
set echo off
set flush off
set pagesize 50000
set linesize 32767
set long 99999999
spool PACKAGE_NAME.pks
select dbms_metadata.get_ddl('PACKAGE','<PACKAGE_NAME>','<OWNER_NAME>') from dual ;
spool off;

OR

set linesize 500
set pagesize 100000
set head off
set verify off
spool PACKAGE_NAME.pks
select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_NAME>' order by line;
spool off

OR

select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_NAME>' and type='PACKAGE' order by line;

PACKAGE BODY BACKUP
-------------------
select dbms_metadata.get_ddl('PACKAGE_BODY','<PACKAGE_BODY_NAME>','<OWNER_NAME>') from dual ;

OR

set pagesize 0
set linesize 800
set trimspool on
set heading off
set feedback off
spool PACKAGE_BODY_NAME.pkb
select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_BODY_NAME>' order by line;
spool off

OR

select text from dba_source where owner='<OWNER_NAME>' and name='<PACKAGE_BODY_NAME>' and type='PACKAGE BODY' order by line;

DROP PACKAGE
------------
set pagesize 0
set linesize 800
col object_name for a30
col object_type for a25
col owner for a25
spool package_drop.sql
select owner,object_name,object_type from dba_objects where object_name='<object_name>';
drop package <owner>.<package_name>;
spool Off;
===================================================================================================================================================
Backup / Delete - Archivelogs and RMAN backups


sqlplus "/as sysdba"

archive log list;

!df -h   (or !df -kgt)

show parameter db_recovery;

NAME                         TYPE        VALUE
---------------------------- ----------- --------
db_recovery_file_dest        string      +FLASHDG
db_recovery_file_dest_size   big integer 150G

If a Flash Recovery Area (+FRA) is in use:
select * from V$FLASH_RECOVERY_AREA_USAGE;
(shows what kind of files are occupying the Flash Recovery Area)

set pages 9999 lines 300
col name format a40
select name,
to_char(space_limit, '999,999,999,999') as space_limit,
to_char(space_limit - space_used + space_reclaimable,
'999,999,999,999') as space_available,
round((space_used - space_reclaimable)/space_limit * 100, 1) as pct_full
from v$recovery_file_dest;

NAME                                     SPACE_LIMIT      SPACE_AVAILABLE  PCT_FULL
---------------------------------------- ---------------- ---------------- --------


If the FRA is full, first crosscheck and delete expired files; if space is still needed, increase db_recovery_file_dest_size:

rman target /
crosscheck archivelog all;
delete expired archivelog all;
crosscheck backup;

SQL>alter system set db_recovery_file_dest_size=200G scope=both;
System altered.


rman target /

show all;
LIST BACKUP OF ARCHIVELOG FROM TIME 'sysdate-1';   -- list all archivelog backups from the past 24 hours
list backup of archivelog all completed before 'sysdate -1';

list backup of archivelog all;
list backup of archivelog all tag 'TAGXXX';
list archivelog all backed up 1 times to DEVICE TYPE DISK completed before 'sysdate-7';


crosscheck archivelog all;
list expired archivelog ALL;
delete expired archivelog all;

delete obsolete device type disk;
crosscheck backup;
delete expired backup device type disk;
delete expired backupset device type disk;


ls -ltr archivebackup.cmd

nohup rman target / cmdfile=archivebackup.cmd log=archivebackup_dbname_DDMONYY.log &
nohup: appending output to `nohup.out'

tail -f archivebackup_dbname_DDMONYY.log
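
archivebackup.cmd is assumed to hold an archivelog backup run block, for example (a sketch; channel count and options are illustrative):

run
{
allocate channel ch1 type DISK;
BACKUP ARCHIVELOG ALL FILESPERSET 10 DELETE INPUT;
}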

+FRA/+RECO01 filled with archivelogs
====================================

Backup archivelogs to disk
--------------------------
run
{
allocate channel ch1 type DISK;
allocate channel ch2 type DISK;
allocate channel ch3 type DISK;
allocate channel ch4 type DISK;
BACKUP ARCHIVELOG ALL FILESPERSET 10 DELETE INPUT;
}

Backup archivelogs to disk (to a specified location, if there is no space to take the backup at the default archive location)
------------------------------------------------------------------------------------------------------------------------------
RUN
{
allocate channel ch1 type DISK;
allocate channel ch2 type DISK;
allocate channel ch3 type DISK;
allocate channel ch4 type DISK;
BACKUP ARCHIVELOG ALL format '/location/arch_%d_%p_%s.rman' FILESPERSET 10 DELETE INPUT;
}

Backup archivelogs to tape (PROD Environment)
--------------------------
run
{
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
allocate channel ch3 type 'sbt_tape';
allocate channel ch4 type 'sbt_tape';
BACKUP ARCHIVELOG ALL FILESPERSET 10 DELETE INPUT;
}

+FRA filled with Backupsets (PROD Environment)
==========================
Move older backupset to tape (to keep more recent backups on disk and older backups only on tape)
-------------------------------------------------------------------------------------------------
run
{
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
allocate channel ch3 type 'sbt_tape';
allocate channel ch4 type 'sbt_tape';
BACKUP BACKUPSET COMPLETED BEFORE 'SYSDATE-7' DELETE INPUT;
}

if required,
BACKUP BACKUPSET ALL FILESPERSET 10 DELETE INPUT;

Delete BACKUPSET backed up to tape/disk (PROD Environment)
-----------------------------------------
DELETE NOPROMPT BACKUPSET COMPLETED BEFORE 'SYSDATE-14' DEVICE TYPE DISK;

+FRA filled with Backupsets (NON-PROD Environment)
==========================
Delete older backupsets (in NON-PROD, older backups are simply deleted rather than moved to tape)
-------------------------------------------------------------------------------------------------
run
{
allocate channel ch1 type 'sbt_tape';
allocate channel ch2 type 'sbt_tape';
allocate channel ch3 type 'sbt_tape';
allocate channel ch4 type 'sbt_tape';
DELETE BACKUPSET COMPLETED BEFORE 'SYSDATE-4';
}

Delete archivelogs backed up to tape/disk (PROD Environment)
-----------------------------------------
DELETE archivelog UNTIL TIME 'SYSDATE-7' BACKED UP 1 TIMES TO DEVICE TYPE DISK;
DELETE archivelog UNTIL TIME 'SYSDATE-7' BACKED UP 1 TIMES TO DEVICE TYPE SBT_TAPE;

Delete archivelogs backed up to tape/disk (NON-PROD Environment)
-----------------------------------------
DELETE archivelog all BACKED UP 1 TIMES TO DEVICE TYPE DISK;

list archivelog all backed up 1 times to DEVICE TYPE DISK completed before 'sysdate-7';
delete archivelog all backed up 1 times to DEVICE TYPE DISK completed before 'sysdate-7';

backup archivelog [all | until time 'sysdate' | sequence between 100 and 110 thread 1] filesperset 5 delete input;

backup archivelog from sequence 100 until sequence 110 thread 1 delete input;

backup archivelog from logseq 100 until logseq 110 thread 1 delete input;

report obsolete;
DELETE OBSOLETE REDUNDANCY = 3;
DELETE OBSOLETE RECOVERY WINDOW OF 7 DAYS;
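
DELETE OBSOLETE uses the configured retention policy unless overridden as above; the policy itself can be set with, for example:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;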




=====================================================================================================================================================
How to Drop SCHEMA Objects in Oracle
Schema Status
-------------
select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;

DROP SCHEMA OBJECTS
-------------------
set head off
set pagesize 0
set linesize 300

spool target_schemaname_drop_obj.sql

select 'drop '||object_type||' '||owner||'.'||object_name||decode(object_type,'TABLE',' CASCADE CONSTRAINTS;',';') from dba_objects where owner in ('&owner') and object_type not in ('DATABASE LINK','JOB','LOB') order by  object_type,object_name
/

spool off;
set head on

@ target_schemaname_drop_obj.sql
exit

DROP OTHER OBJECTS (If required)
------------------
set head off
set pagesize 0
set linesize 300

spool target_schemaname_drop_other_obj.sql

select 'DROP '||decode(object_type, 'PACKAGE BODY', 'PACKAGE',object_type)|| ' '||owner||'.'||object_name||decode(object_type,'TABLE',' CASCADE CONSTRAINTS;',';')
from dba_objects where owner=upper('&owner') and object_type not in ('TABLE')
/

spool off;
set head on

@target_schemaname_drop_other_obj.sql
exit

ls -lrth target_schemaname_drop*.sql

vi target_schemaname_drop_obj.sql  /  vi target_schemaname_drop_other_obj.sql

:wq

sqlplus "/as sysdba"

@target_schemaname_drop_obj.sql
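
If the recycle bin is enabled, dropped tables leave BIN$ objects behind; they can be purged before the import (purging the whole recycle bin requires SYSDBA):

SQL> purge dba_recyclebin;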

Schema Status
-------------
select username,account_status,default_tablespace from dba_users where username=upper('&username');

select owner,sum(bytes)/1024/1024 "SIZE in MB" from dba_segments where owner=upper('&owner') group by owner;

select owner,object_type,count(*) from dba_objects where owner=upper('&owner') and object_name not like 'BIN$%' group by object_type,owner order by object_type;
exit
=====================================================================================================================================================
OEM: Daily / Weekly Backups not happening
Issue:
Daily incremental / weekly full backups scheduled through OEM are not running as scheduled.

Troubleshooting:

1) First, check whether the emagent is running on that database node:
    a) ps -efa|grep agent

        - confirms the agent is running.

    b) cd /home/oracle/OEM_GRID/agent10g/bin/
       ./emctl status agent

- The output shows the agent is up and running properly.

2) Check whether the agent upload is succeeding:
    ./emctl upload agent

    - ERROR.
    This upload failure may be what is preventing the backups from running as scheduled.

Fix :
    cd /home/oracle/OEM_GRID/agent10g/bin/
    ./emctl stop agent
    ./emctl clearstate agent
    ./emctl start agent
    ./emctl upload agent

After this, the scheduled OEM jobs will run as expected.









