Saturday 25 July 2015

Oracle DBA interview questions and answers which I faced

COMPANY = SAPCLE, LOCATION = CHENNAI

1). How to find the alert log file location at the DB level?
A:- 10g -- select name, value from v$parameter where name = 'background_dump_dest';

11g -- select name, value from v$parameter where name = 'diagnostic_dest';
For 11g read up on ADRCI.
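Since the 11g alert log lives under the ADR, V$DIAG_INFO gives the exact paths; a quick sketch (the output values will of course differ per system):

```sql
-- 11g: ADR locations for the current instance
SELECT name, value
FROM   v$diag_info
WHERE  name IN ('ADR Home', 'Diag Trace', 'Diag Alert');
```

The text alert log is alert_<SID>.log under the 'Diag Trace' directory; from the OS it can also be viewed with adrci (e.g. adrci exec="show alert -tail 50").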


2). How do you take an ASM backup by using the RMAN utility?

Ans: ASM: RMAN Backup Configuration
Introduction: The following will guide one through taking a backup of an ASM-enabled database using RMAN.

The following are the steps involved;
Setting Up a Database for RMAN
Setting Up Recovery Area for RMAN

Setting Up a Database for RMAN

Configuring the Backup Retention Policy

Retention Policy is of two options:
Window-Based Retention Policy
Redundancy-Based Retention Policy

Configuring a Recovery Window-Based Retention Policy:
The recovery window parameter specifies the number of days between the current time and the earliest point of recoverability.

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
The above allows recovery of the database to any point within the last week.

RMAN does not automatically delete backups rendered obsolete by the recovery window. Instead, RMAN marks them as OBSOLETE in the REPORT OBSOLETE output.
RMAN retains all archived logs and incremental backups that are needed to recover the nonobsolete backups.

Configuring a Redundancy-Based Retention Policy:
The redundancy parameter specifies the number of backups that must be retained for each datafile and controlfile by RMAN.
RMAN marks backups as obsolete when the number of backups for a specific datafile or controlfile exceeds the redundancy setting.
RMAN keeps track of which backups to retain and which are obsolete.

CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
The above sets RMAN to retain only 2 backups of each datafile or controlfile. If a further backup is taken, the oldest one becomes obsolete.

RMAN retains all archived logs and incremental backups that are needed to recover the nonobsolete backups.

Configuring Control File and Server Parameter File Autobackup
RMAN can be configured to automatically back up the control file and server parameter file whenever the database structure metadata in the control file changes and whenever a backup record is added.
The autobackup enables RMAN to recover the database even if the current control file, catalog, and server parameter file are lost.

Enabling : CONFIGURE CONTROLFILE AUTOBACKUP ON;
Disabling: CONFIGURE CONTROLFILE AUTOBACKUP OFF;

The following command configures the RMAN autobackup to write to an ASM disk group.

ASM Instance :
(To create the disk group )
CREATE DISKGROUP CONTROL
NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK 'ORCL:ASMD6'
FAILGROUP failure_group_2 DISK 'ORCL:ASMD7';

RMAN :
CONFIGURE CONTROLFILE AUTOBACKUP
FORMAT FOR DEVICE TYPE DISK TO '+CONTROL';

Optimizing the RMAN Backup:
Backup optimization can be used to prevent repeat backups of read-only tablespaces and archived logs:
Read-only and offline-clean datafiles are backed up only once per recovery window.
The command BACKUP ARCHIVELOG ALL backs up only those logs that are not already on tape.
The command BACKUP BACKUPSET ALL copies to tape only those backup sets that do not already exist on tape.

Backup optimization is configured using:

CONFIGURE BACKUP OPTIMIZATION [ON | OFF | CLEAR];

Once the RMAN configuration is done, the next step is to set up the flash recovery area.

Setting Up Recovery Area for RMAN

The flash recovery area feature lets you set up a location on disk where the database can create and manage a variety of backup and recovery-related files.
Using a flash recovery area simplifies the ongoing administration of your database: files are named automatically, retained as long as they are needed for restore and recovery activities, and deleted when they are no longer needed and their space is required for some other backup and recovery-related purpose.

Choosing a Location for the Flash Recovery Area:
When setting up a flash recovery area, we need to specify a location, which can be a directory or an ASM disk group.

Files That Can Be Stored in the Flash Recovery Area:
Files are classified as permanent or transient.
Permanent files are the control files and online redo log files; all other files are transient, as RMAN deletes them when they become obsolete under the defined retention policy.

Planning the Size of the Flash Recovery Area:
The size should be large enough to hold the following;
A copy of all datafiles
Incremental backups, as used by your chosen backup strategy
Online redo logs
Archived redo logs not yet backed up to tape
Control files
Control file autobackups (which include copies of the control file and SPFILE)
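Once the flash recovery area is in use, its consumption can be monitored from the database; a simple check using the standard 10g/11g view:

```sql
SELECT file_type,
       percent_space_used,
       percent_space_reclaimable,
       number_of_files
FROM   v$flash_recovery_area_usage;
```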

Setting Initialization Parameters for the Flash Recovery Area
Two key initialization Parameters are involved in setting up the flash recovery area.
DB_RECOVERY_FILE_DEST_SIZE - which specifies the disk quota, or maximum space to use for flash recovery area files for this database
DB_RECOVERY_FILE_DEST - which specifies the location of the flash recovery area
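Both parameters are dynamic and can be set with ALTER SYSTEM; note that the size must be set before the destination:

```sql
ALTER SYSTEM SET db_recovery_file_dest_size = 2G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '+ARCH' SCOPE=BOTH;
```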

In our case we are going to use a Disk Group for the Destination which is of 2GB;

ASM Instance:
CREATE DISKGROUP ARCH
EXTERNAL REDUNDANCY
DISK 'ORCL:ASMD8','ORCL:ASMD9','ORCL:ASMD10','ORCL:ASMD11';

ASM Enabled Instance:
SQL> show parameter db_recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
db_recovery_file_dest                string      +ARCH
db_recovery_file_dest_size           big integer 2G
SQL>

LAB:

Connect to the ASM Instance and Create the DiskGroup for the Control File Autobackup

SQL> CREATE DISKGROUP CONTROL
NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK 'ORCL:ASMD6'
FAILGROUP failure_group_2 DISK 'ORCL:ASMD7'; 

Diskgroup created.

SQL>

SQL> CREATE DISKGROUP ARCH
EXTERNAL REDUNDANCY
DISK 'ORCL:ASMD8','ORCL:ASMD9','ORCL:ASMD10','ORCL:ASMD11';

Diskgroup created.

SQL>

Connect to RMAN and configure the Settings

RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

RMAN>

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+CONTROL';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+CONTROL';
new RMAN configuration parameters are successfully stored

RMAN>

RMAN> CONFIGURE BACKUP OPTIMIZATION ON;

new RMAN configuration parameters:
CONFIGURE BACKUP OPTIMIZATION ON;
new RMAN configuration parameters are successfully stored

RMAN>

RMAN> BACKUP DATABASE PLUS ARCHIVELOG;


Starting backup at 30-MAR-07
current log archived
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=151 devtype=DISK
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=6 recid=1 stamp=617555015
input archive log thread=1 sequence=7 recid=2 stamp=617743035
input archive log thread=1 sequence=8 recid=3 stamp=617744460
input archive log thread=1 sequence=9 recid=4 stamp=617747694
input archive log thread=1 sequence=10 recid=5 stamp=618009531
input archive log thread=1 sequence=11 recid=6 stamp=618356756
input archive log thread=1 sequence=12 recid=7 stamp=618357083
input archive log thread=1 sequence=13 recid=8 stamp=618360354
input archive log thread=1 sequence=14 recid=9 stamp=618360687
input archive log thread=1 sequence=15 recid=10 stamp=618567572
input archive log thread=1 sequence=16 recid=11 stamp=618571214
channel ORA_DISK_1: starting piece 1 at 30-MAR-07
channel ORA_DISK_1: finished piece 1 at 30-MAR-07
piece handle=+ARCH/odb/backupset/2007_03_30/annnf0_tag20070330t092016_0.287.618571219 tag=TAG20070330T092016 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:01:29
Finished backup at 30-MAR-07

Starting backup at 30-MAR-07
using channel ORA_DISK_1
channel ORA_DISK_1: starting full datafile backupset
channel ORA_DISK_1: specifying datafile(s) in backupset
input datafile fno=00001 name=+DATA/odb/datafile/system.256.617411373
input datafile fno=00003 name=+DATA/odb/datafile/sysaux.257.617411377
input datafile fno=00002 name=+DATA/odb/datafile/undotbs1.258.617411379
input datafile fno=00004 name=+DATA/odb/datafile/users.259.617411381
channel ORA_DISK_1: starting piece 1 at 30-MAR-07
channel ORA_DISK_1: finished piece 1 at 30-MAR-07
piece handle=+ARCH/odb/backupset/2007_03_30/nnndf0_tag20070330t092148_0.286.618571311 tag=TAG20070330T092148 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:03:50
Finished backup at 30-MAR-07

Starting backup at 30-MAR-07
current log archived
using channel ORA_DISK_1
channel ORA_DISK_1: starting archive log backupset
channel ORA_DISK_1: specifying archive log(s) in backup set
input archive log thread=1 sequence=17 recid=12 stamp=618571546
channel ORA_DISK_1: starting piece 1 at 30-MAR-07
channel ORA_DISK_1: finished piece 1 at 30-MAR-07
piece handle=+ARCH/odb/backupset/2007_03_30/annnf0_tag20070330t092547_0.283.618571549 tag=TAG20070330T092547 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 30-MAR-07

Starting Control File and SPFILE Autobackup at 30-MAR-07
piece handle=+CONTROL/odb/autobackup/2007_03_30/s_618571551.256.618571557 comment=NONE
Finished Control File and SPFILE Autobackup at 30-MAR-07

RMAN>


Summary: We have configured RMAN to back up an ASM-enabled instance.

3). How do you find the TRACE file location at the DB level?
Ans: Trace files are stored in the Automatic Diagnostic Repository (ADR), in the trace directory under each ADR home. To help you locate individual trace files within this directory, you can use data dictionary views. For example, you can find the path to your current session's trace file or to the trace file for each Oracle Database process.
To find the trace file for your current session:
Submit the following query:
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';
The full path to the trace file is returned.
To find all trace files for the current instance:
Submit the following query:
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';
The path to the ADR trace directory for the current instance is returned.
To determine the trace file for each Oracle Database process:
Submit the following query:
SELECT PID, PROGRAM, TRACEFILE FROM V$PROCESS;

4). How to take a HOT backup of an ASM instance?
Ans: RMAN is the only interface able to take hot and cold backups of Oracle databases stored on ASM disk groups; conventional OS-level (user-managed) backups cannot read files inside an ASM disk group directly.

5). How to restore online redo log files by using RMAN?
Ans: Online redo log files are not backed up by RMAN (archived logs are: RMAN> backup archivelog all;), so they cannot be restored directly.
Loss of a single current online redo log file will require us to restore the entire database and
do an incomplete recovery.

We can simulate this scenario by deleting all the online redo log files at the OS level.

SQL> select member from v$Logfile;

MEMBER
-------------------------------
/u02/ORACLE/opsdba/redo03.log
/u02/ORACLE/opsdba/redo02.log
/u02/ORACLE/opsdba/redo01.log

opsdba:/u02/ORACLE/opsdba>rm redo*.log

If the current online redo log file is lost,the database hangs and in the alert log file
we can see the following error message:

Tue Jan 30 00:47:19 2007
ARC1: Failed to archive thread 1 sequence 93 (0)
Tue Jan 30 00:47:24 2007
Errors in file /opt/oracle/admin/opsdba/bdump/opsdba_arc0_32722.trc:
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: '/u02/ORACLE/opsdba/redo02.log'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory

Using RMAN we can recover from this error by restoring the database from the backup and
recovering to the last available archived redo logfile.

From the alert log we can obtain the last archived file in our case it is sequence
92 as the error shows that it fails to archive the log file sequence 93.

SQL> select * from v$Log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIM
---------- ---------- ---------- ---------- ---------- --- ---------------- ------------- ---------
         1          1         95   52428800          1 NO  CURRENT                3203078 30-JAN-07
         2          1         93   52428800          1 NO  INACTIVE               3202983 30-JAN-07
         3          1         94   52428800          1 NO  INACTIVE               3203074 30-JAN-07

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u02/ORACLE/opsdba/arch
Oldest online log sequence     92
Next log sequence to archive   93
Current log sequence           93

opsdba: cd /u02/ORACLE/opsdba/arch
opsdba:/u02/ORACLE/opsdba/arch> ls -lrt
total 54824
-rw-r-----    1 oracle   dba        714240 Jan 29 16:02 arch_1_90_613129285.dbf
-rw-r-----    1 oracle   dba      46281216 Jan 30 00:37 arch_1_91_613129285.dbf
-rw-r-----    1 oracle   dba         11264 Jan 30 00:41 arch_1_92_613129285.dbf

Shutdown the database

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Mount the database

SQL> startup mount;
ORACLE instance started.

Total System Global Area  167772160 bytes
Fixed Size                  2069680 bytes
Variable Size              92277584 bytes
Database Buffers           67108864 bytes
Redo Buffers                6316032 bytes
Database mounted.

Use RMAN connect to the target database:

opsdba:/u02/ORACLE/opsdba>rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Tue Jan 30 00:53:21 2007

Copyright (c) 1982, 2005, Oracle.  All rights reserved.
connected to target database: OPSDBA (DBID=1493612009, not open)

RMAN> run {
2> set until sequence 93; (Note: set this number to one higher than the last archived log available)
3> restore database;
4> recover database;
5>  alter database open resetlogs;
6> }

executing command: SET until clause

Starting restore at 30-JAN-07
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=156 devtype=DISK
allocated channel: ORA_SBT_TAPE_1
channel ORA_SBT_TAPE_1: sid=155 devtype=SBT_TAPE
channel ORA_SBT_TAPE_1: Data Protection for Oracle: version 5.2.4.0

channel ORA_SBT_TAPE_1: starting datafile backupset restore
channel ORA_SBT_TAPE_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /u02/ORACLE/opsdba/system01.dbf
restoring datafile 00002 to /u02/ORACLE/opsdba/undotbs01.dbf
restoring datafile 00003 to /u02/ORACLE/opsdba/sysaux01.dbf
restoring datafile 00004 to /u02/ORACLE/opsdba/users01.dbf
restoring datafile 00005 to /u02/ORACLE/opsdba/users02.dbf
restoring datafile 00006 to /u02/ORACLE/opsdba/users03.dbf
restoring datafile 00007 to /u02/ORACLE/opsdba/users05.dbf
restoring datafile 00008 to /u02/ORACLE/opsdba/users06.dbf
restoring datafile 00009 to /u02/ORACLE/opsdba/users07.dbf
restoring datafile 00010 to /u02/ORACLE/opsdba/users04.dbf
restoring datafile 00011 to /u02/ORACLE/opsdba/drtbs1.dbf
restoring datafile 00012 to /u02/ORACLE/opsdba/drtbs2.dbf
restoring datafile 00013 to /tmp/undonew.dbf
channel ORA_SBT_TAPE_1: reading from backup piece 5mi8ornj_1_1
channel ORA_SBT_TAPE_1: restored backup piece 1
piece handle=5mi8ornj_1_1 tag=TAG20070130T004019
channel ORA_SBT_TAPE_1: restore complete, elapsed time: 00:01:06
Finished restore at 30-JAN-07

Starting recover at 30-JAN-07
using channel ORA_DISK_1
using channel ORA_SBT_TAPE_1

starting media recovery

archive log thread 1 sequence 92 is already on disk as file 
/u02/ORACLE/opsdba/arch/arch_1_92_613129285.dbf
archive log filename=/u02/ORACLE/opsdba/arch/arch_1_92_613129285.dbf thread=1 sequence=92
media recovery complete, elapsed time: 00:00:01
Finished recover at 30-JAN-07

database opened

RMAN>exit
The recovery process also creates the online redo log files at the operating system level.

opsdba:/u02/ORACLE/opsdba>ls -lrt redo*
-rw-r-----    1 oracle   dba      52429312 Jan 30 01:00 redo03.log
-rw-r-----    1 oracle   dba      52429312 Jan 30 01:00 redo02.log
-rw-r-----    1 oracle   dba      52429312 Jan 30 01:00 redo01.log

Since we have done an incomplete recover with open resetlogs, we should take a fresh
complete backup of the database.

6). How to get user, table, and index DDL in Oracle?
Ans:  select dbms_metadata.get_ddl('USER','USERNAME') from dual;
         select dbms_metadata.get_ddl('TABLE','DEPT','SCOTT') from dual;
         select dbms_metadata.get_ddl('INDEX','DEPT_IDX','SCOTT') from dual;
 Ref: http://www.dba-oracle.com/t_copying_oracle_users.htm
http://www.dba-oracle.com/art_builder_get_schema_syntax.htm
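By default SQL*Plus truncates the LONG value returned by DBMS_METADATA, so set LONG first to see the complete DDL (SCOTT/DEPT as in the examples above):

```sql
SET LONG 100000 PAGESIZE 0 LINESIZE 200
SELECT DBMS_METADATA.GET_DDL('TABLE', 'DEPT', 'SCOTT') FROM dual;
```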
7). How to get database statistics ?
Ans: Gather stats on the entire database...
execute dbms_stats.gather_database_stats;
Or...
execute dbms_stats.gather_database_stats( -
estimate_percent => 1, -
method_opt => 'FOR ALL COLUMNS SIZE 1',-
cascade => TRUE);

8). How to register archived redo log files in Data Guard?
Ans: alter database register [or replace] physical logfile '<fullpath/filename>';
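A sketch of running this on the standby (the archived log path is a placeholder, following the naming used elsewhere in this post):

```sql
ALTER DATABASE REGISTER PHYSICAL LOGFILE
  '/u02/ORACLE/opsdba/arch/arch_1_93_613129285.dbf';

-- OR REPLACE updates an existing controlfile entry for the same file:
ALTER DATABASE REGISTER OR REPLACE PHYSICAL LOGFILE
  '/u02/ORACLE/opsdba/arch/arch_1_93_613129285.dbf';
```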




Company: Delhi Date: 12-01-2012
1). RAC roles ?

2). Non-RAC to RAC?
3). What does rconfig do internally?
4). How to convert non-RAC to RAC without rconfig?
5). What does rconfig change in init.ora?
6). Which tablespaces exist as two (instead of one) in a RAC environment?
7). What is the role of interconnect?
8). Cache fusion?
9). What is the key difference between 10g and 11g in RAC?
10). SCAN_LISTENER any idea?
Ans: SCAN can mean the SCAN VIP or the SCAN listener.
SCAN - Single Client Access Name.
This simply means the client needs only the SCAN name in the tnsnames.ora on the client side and does not need to know the names of the nodes involved in the cluster to load balance.
The SCAN name is associated with one or more SCAN VIPs (a maximum of 3 VIPs).
The client uses the SCAN name; the request is routed to DNS (or GNS) and gets resolved to a single SCAN VIP associated with the SCAN name. This SCAN VIP routes the request to its SCAN listener.
PMON frequently sends the load status of each node to the SCAN listener, hence the SCAN listener keeps a count of the load status on each node.
Each instance registers with the SCAN listener, so the SCAN listener is aware of the load on each instance and load-balances new requests accordingly.
11gR2 still uses the local_listener parameter, which points to the local node VIP, and the remote_listener parameter, which points to the SCAN listener name and port.
What if you have more than 3 nodes? The answer is that all the instances on all the cluster nodes register with the SCAN listener via the remote_listener parameter, even though the SCAN IPs are hosted on the first 3 nodes, so the SCAN listener is aware of all the services on all the nodes.

11). Backup strategy for OCR and voting files?

12). When, and how often, should we take the backup of the OCR and voting files?

13). How do you know a node eviction happened in RAC?
14). In RAC, a TAF service is running on node 2; how do you relocate the service to another node?

15). How do you know which service is running on which node?
16). How can you find which node is the primary node?
17). How will you tell which service is running on the primary node?
18). The application team reports that their application is not able to connect to the database. How will you troubleshoot the issue?
19). What things will you check at the database level, RAC level, and cluster level?
20). What will you check at the TNS level (TAF level)?
21). Addition of a node in a RAC environment?
22). Oracle 10g and 11g architecture differences?
23). Have you worked on a consolidated environment?
24). How do you find database background processes at the database level?
Ans: select * from v$session where type='BACKGROUND';
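Alternatively, V$BGPROCESS lists the background processes themselves; rows with a non-null process address are the ones actually running:

```sql
SELECT name, description
FROM   v$bgprocess
WHERE  paddr <> '00';
```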

GST Global
1). What is your database size?
2). What type of data are you using?
3). What is a primary key?
4). What are procedures and functions?
5). Who are your clients?
6). Where is the client location?

MINDTREE Bangalore Date:
1). What is the database size how many databases are there? Who are your clients?  What are your daily activities?
2). How do you convert Non-RAC to RAC?
3). How do you configure RMAN in a 2-node RAC?
4). Above question with catalog and without catalog?
5). In RMAN, if you don't use a catalog, where will the backup information be stored?
6). How do you rebuild an index?
Ans: alter index <index_name> rebuild;
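A sketch of the command (the index and owner names are examples; ONLINE keeps the table available for DML during the rebuild):

```sql
ALTER INDEX scott.dept_idx REBUILD ONLINE;

-- Verify the index afterwards:
SELECT index_name, status FROM dba_indexes
WHERE  owner = 'SCOTT' AND index_name = 'DEPT_IDX';
```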
7). What are the things in AWR report ?
Ans: The AWR is used to collect performance statistics including:
Wait events used to identify performance problems.
Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
Object usage statistics.
Resource intensive SQL statements.

8). What is SQL*Loader and how do you use it?
Ans: SQL*Loader is the primary method for quickly populating Oracle tables with data from external files.
9). How are you taking backups: using scripts, manually, or via cron jobs?
10). What type of media are you using for backups (tape or disk)?
11). Have you worked on windows?
12). How to convert non-ASM to ASM?
13). How to add disk group to ASM?
Ans: CREATE DISKGROUP disk_group_1 NORMAL REDUNDANCY
  FAILGROUP failure_group_1 DISK
    '/devices/diska1' NAME diska1,
    '/devices/diska2' NAME diska2
  FAILGROUP failure_group_2 DISK
    '/devices/diskb1' NAME diskb1,
    '/devices/diskb2' NAME diskb2;


TECH-MAHINDRA Noida Date: 11-02-2012
1). What database software versions are you working with?
2). What RAC software versions are you working with?
3). What is the difference between oracle RAC 10gR2, 11gR1 and 11gR2?
4). What is SCAN in oracle?
5). How do you shutdown the cluster nodes?
6). What are the parameters we need to edit for RAC reports?
Ans: statistics_level = TYPICAL (the default) or ALL
7). In RAC, on which node do we need to generate the AWR report?
8). What is the difference between awrrpt and awrrpti?
Ans: awrrpt = database-level script (current instance)
awrrpti = instance-level script (you specify the instance)
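Both are run from SQL*Plus as SYS; awrrpti additionally prompts for the DBID and instance number:

```sql
-- report for the current instance
@?/rdbms/admin/awrrpt.sql

-- report for a chosen instance (prompts for dbid and inst_num)
@?/rdbms/admin/awrrpti.sql
```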
9). What is the difference between Oracle DBA and RAC DBA?
10). What is ASH report?
Ans: Oracle collects Active Session History (ASH) statistics (mostly wait statistics for different events) for all active sessions every second from v$session and stores them in a circular FIFO buffer in the SGA. ASH records cover very recent session history, roughly the last 5-10 minutes. If the buffer fills up before the AWR flush (by default every hour), the MMNL process (Manageability Monitor Light) flushes the data to disk (stored in dba_hist_active_session_history).
ASH resides in the SGA and its size is the lesser of:
total # of CPUs x 2 MB of memory
5% of the shared pool
So on a 16-CPU server with a shared pool of 500 MB:
ASH desired size: 16 x 2 MB = 32 MB
5% of 500 MB = 25 MB (this is the lower value, so ASH will be 25 MB)
ASH collects the following:
Top Events
Load Profile
Top SQL
Top PL/SQL
Top Java
Top Call Types
Top Sessions
Top Objects/Files/Latches
Activity Over Time
and much more
Useful Views 
V$ACTIVE_SESSION_HISTORY stores the recent history of every session; it is populated every second from v$session. This is the circular buffer, so older information gets overwritten.
V$SESSION holds all the session information (72 columns of info).
DBA_HIST_ACTIVE_SESSION_HISTORY provides historical information about recent active session history. It is basically snapshots from v$active_session_history.
ASH Report
It is possible to run a report on the collected ASH data; the report covers SQL that ran during the time you specify and includes blocking and wait details.
ashrpt.sql reports on that SQL; the script will ask you to specify a time range and whether the report should be generated as text or HTML.
Note: report is in $ORACLE_HOME/rdbms/admin 
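Generating the report from SQL*Plus looks like this (the script prompts for the format and a begin time, e.g. -15 for the last 15 minutes):

```sql
@?/rdbms/admin/ashrpt.sql
```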

11). In an AWR report, what things will you see?
Ans: 1- AWR Concept:
AWR was first introduced in the 10g version of Oracle as a repository of all the important statistics for an Oracle database. At regular intervals, Oracle takes snapshots of those vital statistics and stores them in the SYSAUX tablespace. In 11g, these snapshots are retained for 8 days. The MMON background process is responsible for AWR collection.
AWR scripts allow you to generate the reports in HTML or TEXT format. These reports let you analyze the workload of the database over given intervals. Oracle extends the AWR tool for RAC databases by adding the ability to generate snapshot reports against the whole cluster database, some instances of the cluster database, a specific instance, or statistics for a specific SELECT query. The following sections show how to generate AWR reports for each case.
2- Generate AWR reports for the current instance:
Run the awrrpt.sql script from $ORACLE_HOME/rdbms/admin/ as a SYS user. The script asks for the output format of the report (HTML or TEXT), the number of days (n) of snapshots to list (hitting enter lists all available snapshots), and the begin and end snapshot IDs.
3- Generate AWR reports for any instance in the cluster:
Run the awrrpti.sql script from $ORACLE_HOME/rdbms/admin/ as a SYS user. In addition to the prompts above, the script asks for the instance number.
4- Generate AWR cluster-aggregated statistics from all the instances:
Run the awrgrpt.sql script from $ORACLE_HOME/rdbms/admin/ as a SYS user; it presents the same prompts as awrrpt.sql but reports on the whole cluster.
5- Generate an AWR Global Cluster Difference Report:
Run the awrgdrpt.sql script from $ORACLE_HOME/rdbms/admin/ as a SYS user. The script asks for the begin and end snapshot IDs of the first and second pair of diff intervals.
6- Generate an AWR report for a single SELECT statement:
Run the awrsqrpt.sql script from $ORACLE_HOME/rdbms/admin/ as a SYS user. In addition to the usual prompts, the script asks for the SQL_ID of the specific SELECT statement.
7- License:
The AWR tool is part of the Diagnostics Pack option, which requires an Oracle license. If you are not licensed to use AWR, the substitute is the Statspack tool, which is detailed in my previous post: Statspack in RAC (http://www.oracle-class.com/?p=2384).
In 11g, if you are not licensed to use the Diagnostics Pack, you have to disable the option by setting control_management_pack_access to NONE:
alter system set control_management_pack_access='none';
In 10g, you can disable the AWR tool following MOS ID 436386.1, which provides a script to disable the AWR package.
To disable a database option, you can use the chopt command line; for example, to disable the partitioning option, run the relevant command from ALL nodes of the cluster as the Oracle database home owner (usually the oracle user):

12). What is the RAC architecture and what are the main processes?
Ans: 
13). The application team sent you a mail saying the database is running slowly; what will you do, and how will you find and solve the issue at the OS level and the database level?
14). What are the long-running processes in the database and how do you find them at the OS level and the DB level?
Ans: v$session_longops
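A sketch of a query against that view, showing only operations still in flight:

```sql
SELECT sid, serial#, opname, sofar, totalwork,
       ROUND(sofar / totalwork * 100, 2) AS pct_done,
       time_remaining
FROM   v$session_longops
WHERE  totalwork > 0
AND    sofar <> totalwork;
```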
15). What are the backups in Oracle?



ITC-INFOTECH Bangalore Date: 13-02-2012
1). Installation of Oracle on UNIX?
2). Why do you set kernel parameters?
Ans: Oracle recommends that you set the shared memory segment attributes as well as the semaphores to the documented values.
If they are not set, database instance creation will fail. The required lines are added to the /etc/sysctl.conf file. Every OS process needs
a semaphore to wait on for resources. For more on semaphores, please read the UNIX OS documentation.
3). Is it mandatory to create oinstall group?
Ans: Yes
The Oracle Database and the Oracle Grid Infrastructure (for a standalone server) installation owner users must belong to the Oracle Inventory group (oinstall).
When you install Oracle software on the system for the first time, Oracle Universal Installer creates the oraInst.loc file. This file identifies the name of the Oracle Inventory group (typically, oinstall) and the path of the Oracle Inventory directory.
You can configure one group to be the access control group for Oracle Inventory, for database administrators (OSDBA), and for all other access control groups used by Oracle software for operating system authentication. However, this group then must be the primary group for all users granted administrative privileges.
Log in as root, and use the following instructions to locate or create the Oracle Inventory group and a software owner:
Determining if the Oracle Inventory Group Exists
Creating the Oracle Inventory Group
Determining if the Oracle Inventory Group Exists
An oraInst.loc file has content similar to the following:
inventory_loc=central_inventory_location
inst_group=group
In the preceding example, central_inventory_location is the location of the Oracle Central Inventory, and group is the name of the group that has permissions to write to the central inventory.
If you have an existing Oracle Inventory, then ensure that you use the same Oracle Inventory for all Oracle software installations, and ensure that all Oracle software users you intend to use for installation have permissions to write to this directory.
To determine if the Oracle Inventory group exists, enter the following command:
# grep oinstall /etc/group
To determine if the oraInst.loc file exists, enter the following command:
# more /etc/oraInst.loc
If the oraInst.loc file exists, then the output from this command is similar to the following:
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
In the previous output example:
The inventory_loc parameter shows the location of the Oracle Inventory
The inst_group parameter shows the name of the Oracle Inventory group (in this example, oinstall).
Creating the Oracle Inventory Group 
If the oraInst.loc file does not exist, then create the Oracle Inventory group by entering the following command:
# /usr/sbin/groupadd oinstall
Creating the OSDBA Group for Database Installations
You must create an OSDBA group in the following circumstances:
An OSDBA group does not exist, for example, if this is the first installation of Oracle Database software on the system
An OSDBA group exists, but you want to give a different group of operating system users database administrative privileges for a new Oracle Database installation
If the OSDBA group does not exist or if you require a new OSDBA group, then create it as follows. In the following procedure, use the group name dba unless a group with that name exists:
# /usr/sbin/groupadd -g 502 dba
2.8.2.3 Creating an OSOPER Group for Database Installations
Create an OSOPER group only to identify a group of operating system users with a limited set of database administrative privileges (SYSOPER operator privileges). For most installations, it is sufficient to create only the OSDBA group. If you want to use an OSOPER group, then you must create it in the following circumstances:
If an OSOPER group does not exist; for example, if this is the first installation of Oracle Database software on the system
If an OSOPER group exists, but you want to give a different group of operating system users database operator privileges in a new Oracle installation
If you require a new OSOPER group (typically, oper), then create it as follows. In the following, use the group name oper unless a group with that name exists:
# /usr/sbin/groupadd -g 503 oper
2.8.2.4 Creating the OSASM Group for Oracle Automatic Storage Management
If the OSASM group does not exist or if you require a new OSASM group, then create it as follows. In the following procedure, use the group name asmadmin unless a group with that name exists:
# /usr/sbin/groupadd -g 504 asmadmin
2.8.2.5 Creating the OSDBA Group for Oracle Automatic Storage Management
If you require a new OSDBA group for Oracle ASM, then create it as follows. In the following procedure, use the group name asmdba unless a group with that name exists:
# /usr/sbin/groupadd -g 506 asmdba
2.8.2.6 Creating the OSOPER Group for Oracle Automatic Storage Management
If you require an OSOPER group, then create it as follows. In the following procedure, use the group name asmoper unless a group with that name exists:
# /usr/sbin/groupadd -g 505 asmoper
2.8.2.7 Creating the Oracle Software Owner User
You must create an Oracle software owner user in the following circumstances:
If an Oracle software owner user does not exist; for example, if this is the first installation of Oracle software on the system.
If an Oracle software owner user exists, but you want to use a different operating system user, with different group membership, to give database administrative privileges to those groups in a new Oracle Database installation.
If you have created an Oracle software owner for Oracle Grid Infrastructure, such as grid, and you want to create a separate Oracle software owner for Oracle Database software, such as oracle.
2.8.2.7.1 Determining if an Oracle Software Owner User Exists
To determine if an Oracle software owner user named oracle or grid exists, enter a command similar to the following:
# id oracle
# id grid
If the oracle user exists, then the output from this command is similar to the following:
uid=501(oracle) gid=501(oinstall) groups=502(dba),503(oper)
If the grid user exists, then the output from this command is similar to the following:
uid=8001(grid) gid=8001(oinstall) groups=8001(oinstall),8002(asmadmin),8003(asmdba),8006(dba)
Determine whether you want to use the existing user or create another user. If you want to use the existing user, then ensure that the user's primary group is the Oracle Inventory group (oinstall) and that it is a member of the appropriate OSDBA and OSOPER groups. See the following sections for more information:
Creating an Oracle Software Owner User
Modifying an Existing Oracle Software Owner User
Note:
If necessary, contact your system administrator before using or modifying an existing user.
2.8.2.7.2 Creating an Oracle Software Owner User
If the Oracle software owner user does not exist, or if you require a new Oracle software owner user, such as oracle or grid, then create it as described in this section (in this case to create the oracle user).
In the following procedure, use the user name oracle unless a user with that name exists:
1. To create an oracle user, enter a command similar to the following:
2. # /usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba,[oper] oracle
In the preceding command:
o The -u option specifies the user ID. Using this command flag is optional because the system can provide you with an automatically generated user ID number. You must note the oracle user ID number because you need it during preinstallation.
o The -g option specifies the primary group, which must be the Oracle Inventory group, for example oinstall.
o The -G option specifies the secondary groups, which must include the OSDBA group, and, if required, the OSOPER and ASMDBA groups, for example, dba, asmdba, or oper.
3. Set the password of the oracle user:
4. # passwd oracle
2.8.2.7.3 Modifying an Existing Oracle Software Owner User
If the oracle user exists, but its primary group is not oinstall, or it is not a member of the appropriate OSDBA or OSOPER groups, then modify it as follows:
Specify the primary group using the -g option and any required secondary group using the -G option:
# /usr/sbin/usermod -g oinstall -G dba,asmdba[,oper] oracle

4). In a 2-node RAC, how many interconnects do we use?
Ans:  2
5). In a 4-node RAC, how many interconnects do we use?
Ans: 2
6). Advantages of Data Pump, and why?
Ans: See the bookmarked link.
7). Why is Data Pump faster than the traditional export?
Ans: The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file.
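The method described above is the transportable tablespace technique. A hedged sketch of how it might look (the directory object dp_dir, the tablespace users, and the datafile path are illustrative assumptions, not from the original):

```sql
-- Make the tablespace read-only before copying its datafiles
ALTER TABLESPACE users READ ONLY;

-- Export only the metadata for the tablespace (run from the OS shell):
-- expdp system DIRECTORY=dp_dir DUMPFILE=tts_users.dmp TRANSPORT_TABLESPACES=users TRANSPORT_FULL_CHECK=Y

-- Copy the datafiles to the target with OS commands, then on the target:
-- impdp system DIRECTORY=dp_dir DUMPFILE=tts_users.dmp TRANSPORT_DATAFILES='/u01/app/oradata/users01.dbf'
```

Because Data Pump moves only the structural information, the bulk of the data travels as a straight file copy, which is why this is the fastest method.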
8). What is RMAN? Why do we use it?
Ans: 1. Ability to perform INCREMENTAL backups
2. Ability to recover a single corrupted block of a datafile
3. Ability to automatically back up the CONTROLFILE and SPFILE
4. Ability to automatically delete older ARCHIVED REDO LOG files once they are no longer needed
5. Ability to perform backup and restore with parallelism
6. Ability to report the files needed for the backup
7. Ability to RESTART a failed backup without starting from the beginning
8. Much faster when compared to other TRADITIONAL backup strategies
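Several of the abilities listed above can be shown with hedged RMAN command sketches (the datafile and block numbers are illustrative assumptions):

```sql
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;                        -- incremental backup
RMAN> BACKUP DATABASE NOT BACKED UP SINCE TIME 'SYSDATE-1';       -- restart a failed backup, skipping files already backed up
RMAN> RECOVER DATAFILE 4 BLOCK 123;                               -- recover a single block (11g syntax; 10g uses BLOCKRECOVER)
RMAN> DELETE ARCHIVELOG ALL BACKED UP 1 TIMES TO DEVICE TYPE sbt; -- prune archived logs already backed up
RMAN> REPORT NEED BACKUP;                                         -- report files needing backup
```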

9). Difference between RMAN backup and Physical backup?
Ans: Question:  I've been using Data Pump (the export utility) to back up my database, but I wonder if I should be using RMAN instead.  They both do backups, so what are the advantages of each?
Answer:  Yes, RMAN and export both back up tables, and both can produce consistent copies of the data, but there are some important differences:
Data Pump Export (expdp) - The export utility is a "logical" backup, usually done by specifying specific tables.  If you fail to do a "consistent" export, or if you fail to include related tables and use RESTRICTED and CONSISTENT mode (with referential integrity constraints), you may not be able to recover properly.  Export is often used as a supplement to RMAN, usually for the restore of specific tables.

Recovery Manager (RMAN) - RMAN is designed for backup and recovery, an extension of the Enterprise Backup Utility (EBU).  RMAN takes full, physical, consistent backups of your database files.
Advantages & disadvantages of Export:
Free, and easy to use
Very slow, compared to RMAN (examines every data block)
Easy restore of a specific table
Does not require ARCHIVELOG mode
Advantages & disadvantages of RMAN:
RMAN has block-level media recovery
Has a catalog for backup tracking and a report utility
Fast - If you dedicate a backup device for each production disk, you can backup terabytes in the time it takes to backup any single disk.
Does hot or cold backups
Backups and restores can be done in parallel
Allows incremental backups (block change tracking)
Interfaces with media management systems (tape management systems)

10). ORA-01555 error?
Ans: One of the most vexing problems that Oracle DBAs around the world face every day is this: 
ORA-1555: snapshot too old: rollback segment number 9 with name "R07" too small 
To most DBAs it is far from clear what might have caused the error, and even more perplexing as to how they can prevent it from occurring again. But perhaps the most exasperating thing about this error is that queries are most prone to it when they have been running for a long time, and thus many hours of processing can be lost. 
The good news is that it is easy to prevent this error entirely and absolutely. 
What does the error mean? 
The ORA-1555 error means that a consistent get on a particular database block has failed. 
When a transaction or query begins, the current SCN is recorded. That SCN serves as the snapshot SCN for the query or transaction. This term is derived from the requirement that the transaction or query must see a consistent snapshot of the database at that time. 
Every block used to select rows for the query or transaction must reflect the state of the database at the snapshot SCN. This applies to the selection of rows to be updated or deleted, as much as it does to the selection of rows for a query. If a block has to be changed, then those changes will be applied to the current version of that block. However, the selection of the rows to be changed must be based on a version of the block consistent with the snapshot SCN. The temporary reconstruction of a version of the block consistent with the snapshot SCN is called a consistent get. 
Why do consistent gets fail? 
There are two types of consistent get failure: rollback failure, and cleanout failure. 
Rollback failure 
If the block has been modified in any way by another transaction since the snapshot SCN, then those changes must be rolled back for the consistent get. To do so, it is necessary to read the rollback segment data blocks to which the rollback information for those changes was written. 
However, if any of those changes were made by a discrete transaction, then there will be no rollback information, because discrete transactions do not generate rollback information. If so, an ORA-1555 error will be raised. Similarly, an ORA-1555 error will be raised if the required rollback segment blocks are no longer available because the rollback segment extent containing those blocks has been deallocated in a shrink operation, or reused by subsequent transactions. 
Note that the rollback segment blocks required are those that were used by any other transactions that have modified the block after the snapshot SCN. These blocks could be in any rollback segment in the database. 
Note further that for those blocks to be unavailable by virtue of extent reuse, all extents in that rollback segment must have been used at least once since the snapshot SCN. This is why the error message suggests that the rollback segment is too small. 
Cleanout failure 
DBWn often writes a block to disk before the last transaction to modify that block has been committed. If so, the interested transaction list in the block header still shows that transaction as having an open interest in the block, and the row level locks in the row headers of the affected rows remain in force. When the block is read for another query or transaction, block cleanout must be performed. This involves finding out whether the previous transaction has committed, and if so its row level locks are cleaned out and the commit SCN for the transaction is recorded in the interested transaction list entry in the block header. 
For a consistent get, block cleanout is necessary to establish the relative sequence of the commit SCN for the interested transaction and the snapshot SCN for the consistent get. If the interested transaction has not yet committed, or committed after the snapshot SCN, then rollback is required as described above. But if the interested transaction committed before the snapshot SCN, then no rollback of its changes is required. 
To determine the commit SCN for an interested transaction, if it is not already recorded in the interested transaction list entry, and if it is no longer active, it is necessary to consult the transaction table in the rollback segment header block that was used by that transaction. The rollback segment number for the interested transaction is encoded in the interested transaction entry as part of the transaction identifier. However, the header block for that rollback segment may no longer contain a record for the interested transaction, because that block is also subject to change (lots of it) and the interested transaction may be ancient. 
Fortunately, however, the consistent get does not need to determine the exact commit SCN for the interested transaction - only the relative sequence of the commit SCN and the snapshot SCN. It is therefore sufficient to perform a recursive consistent get on the rollback segment header block of the rollback segment for the interested transaction. If that consistent get is successful, and if the transaction header for the interested transaction is not extant in the consistent version of its rollback segment header block, and if the transaction identifier indicates that the transaction predated the snapshot SCN, then it may be concluded that the interested transaction committed in relative antiquity, and so no roll back is required. 
(Incidentally, for current mode block cleanouts, the rollback segment header block is rolled back as far as possible, and the oldest available commit SCN for any transaction in that rollback segment at that time is recorded in the interested transaction list entry as the upper bound for its commit SCN. In other words, the transaction is marked as having committed no later than that SCN.) 
However, it is possible for the consistent get on the rollback segment header block for an interested transaction to fail. This may occur if the rollback information for any of the changes to that rollback segment header block since the snapshot SCN are not available. These changes are written to that rollback segment itself, and are therefore subject to unavailability due to extent reuse or deallocation in the same way as other changes. However, there is more latitude in the case of rollback segment transaction table changes. Because slots in the transaction table are reused cyclically, the rollback segment extents themselves may have to be reused many times before the rollback information for the transaction header for an interested transaction will be rendered unavailable thereby. 
How can you reduce the risk? 
The following simple guidelines should be followed to reduce the risk of snapshot too old errors. 
  Do not run discrete transactions while sensitive queries or transactions are running, unless you are confident that the data sets required are mutually exclusive.
  
  Schedule long running queries and transactions out of hours, so that the consistent gets will not need to rollback changes made since the snapshot SCN. This also reduces the work done by the server, and thus improves performance.
  
  Code long running processes as a series of restartable steps.
  
  Shrink all rollback segments back to their optimal size manually before running a sensitive query or transaction to reduce risk of consistent get rollback failure due to extent deallocation. This can be done with the APT script shrink_rollback_segs.sql.
  
  Use a large optimal value on all rollback segments, to delay extent reuse. For an indication of how long you might have before the problem strikes, the APT script rollback_reuse_time.sql can be used to get the average time before rollback segment extent reuse.
  
  Don't fetch across commits. That is, don't fetch on a cursor that was opened prior to the last commit, particularly if the data queried by the cursor is being changed in the current session.
  
  Use a large database block size to maximize the number of slots in the rollback segment transaction tables, and thus delay slot reuse.
  
  Commit less often in tasks that will run at the same time as the sensitive query, particularly in PL/SQL procedures, to reduce transaction slot reuse.
  
  If necessary, add extra rollback segments to make more transaction slots available. 
Note that adding extra rollback segments is somewhat in conflict with using a large optimal size, assuming the disk space available for rollback segments is invariant. The choice of a strategy at this point should depend upon the relative risk of consistent get rollback failures, as opposed to consistent get cleanout failures. 
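The "don't fetch across commits" guideline above can be illustrated with a hedged PL/SQL sketch (the table big_table and its processed column are hypothetical names):

```sql
-- Risky pattern: the cursor is opened once, but commits inside the loop
-- allow the undo needed for the cursor's consistent reads to be reused,
-- which is a classic ORA-1555 trigger.
BEGIN
  FOR r IN (SELECT rowid AS rid FROM big_table) LOOP
    UPDATE big_table SET processed = 'Y' WHERE rowid = r.rid;
    COMMIT;  -- committing inside the fetch loop is the antipattern
  END LOOP;
END;
/
-- Safer: commit once after the loop, or restructure the job into
-- restartable batches that re-open the cursor after each commit.
```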
What can you do when all else fails? 
For particularly sensitive queries and transactions, all this risk reduction is unnecessary. All that is needed is to prevent the deallocation or reuse of any rollback segment extents that have been used by any transaction subsequent to the snapshot SCN. 
One simple way of doing that is to ensure that there is only one (large) rollback segment online from the time of the snapshot SCN, and to explicitly use that rollback segment for the sensitive query or transaction. This protects all extents in that rollback segment that may be used thereafter, from extent deallocation and reuse, until the conclusion of the sensitive transaction or query. 
A more sophisticated variation on the same theme is to leave an uncommitted transaction in every online rollback segment. Of course, this introduces a risk of running out of space in the rollback segment tablespaces, but that risk is relatively easy to control. The following set of APT scripts can be used to apply this technique on Unix systems. 
prevent_1555_setup.sql 
This script creates a clustered table in the SYSTEM schema that is used to implement and record the protection of rollback segments from extent deallocation and reuse. 
prevent_1555.sql 
This is the main script of the set. It is called to ensure protection from ORA-1555 errors for a specified number of seconds. This script calls protect_rbs.sql in the background for each online rollback segment. 
protect_rbs.sql 
This script first shrinks the specified rollback segment to reduce the risk of running out of space in the rollback segment tablespaces. It then records its protection in the control table, before leaving an uncommitted transaction sleeping for the required number of seconds. 
prevent_1555_wait.sql 
This script must be run after the dummy transactions have been created in each online rollback segment. It waits for all older active transactions to finish. This is necessary in environments with other long-running transactions that may not yet have completed, because the earlier undo for those transactions is not protected and might be required by the critical report unless those transactions are allowed to finish before the critical report starts. 
protected_rollback_segs.sql 
This script is used to report the protection status of the rollback segments.

11). How RMAN will knows, which block is changed?
Ans: The CTWR (Change Tracking Writer) background process maintains the block change tracking file, which records the blocks modified since the last backup.
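A hedged sketch of enabling and checking block change tracking (the file path is an illustrative assumption):

```sql
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/app/oracle/bct/change_tracking.f';

-- Verify that change tracking is enabled and where the file lives
SELECT status, filename FROM v$block_change_tracking;
```

With tracking enabled, incremental RMAN backups read only the changed blocks instead of scanning every datafile.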
12). Have you worked on recovery?
Ans: Check the bookmark.
13). Have you worked on cloning?
Ans: Check the bookmark.
14). Performance: AWR and ADDM?
Ans: Check the bookmarked link.
15). What differences are there between the AWR and ADDM reports?
Ans: Check the bookmarked link.
16). Adding a disk group to ASM, what is the activity in the DB?
17). We have 4 databases on a single server. How many ASM instances do we need for the 4 DBs?
Ans: Only 1 ASM instance.
18). What is the oratab file, when is it created, and what does it contain?
Ans: The oratab file (/etc/oratab on Linux, /var/opt/oracle/oratab on Solaris) is created by the root.sh script during installation. Each entry has the form SID:ORACLE_HOME:Y|N, where the final flag tells the dbstart/dbshut scripts whether to start that instance automatically.
19). Why do we need only one ASM instance for multiple DBs?
Ans: A single ASM instance manages the disk groups and their metadata for every database on the server; the database instances then access the ASM files directly, so one ASM instance per node is sufficient.
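One way to see all the databases served by a single ASM instance is to query V$ASM_CLIENT from the ASM instance (a hedged sketch):

```sql
-- Run while connected to the +ASM instance
SELECT group_number, instance_name, db_name, status
FROM   v$asm_client;
```

Each connected database instance appears as a client row against the disk groups it uses.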

TECH-MAHENDRA Loc=Pune Tech Date: 14-02-2012
1). Tell me about dataguard? Why we use?
2). What are the issues you faced in Dataguard?
Ans:  Archive gap , Network failure, No archive log sync
3). How many RAC nodes do you have?
4). How to upgrade oracle 10g to 11g?
5). How to register redologs in dataguard?
Ans: ALTER DATABASE REGISTER OR REPLACE PHYSICAL LOGFILE '<fullpath/filename>';

6). How to get the DDL of a table (dept) without GUI?
Ans: SELECT dbms_metadata.get_ddl('USER','SCOTT') FROM dual;
        SELECT dbms_metadata.get_ddl('TABLE','DEPT','SCOTT') FROM dual;
        SELECT dbms_metadata.get_ddl('INDEX','DEPT_IDX','SCOTT') FROM dual;
7). Can the SYS password differ between the primary and standby databases?
Ans: No, the password files on both servers must match for the log transport services to authenticate.
8). How do you restore redolog files by using RMAN?
Ans:  RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open resetlogs;
9). One of your control files is corrupted - what do you do?
Ans:  Shut down the database, copy another multiplexed control file over the corrupted one, and then start up the database.
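A hedged sketch of that recovery (the control file paths are illustrative assumptions):

```sql
SQL> SHUTDOWN ABORT;
-- From the OS shell, overwrite the corrupt copy with a good multiplexed copy:
-- cp /u02/oradata/db/control02.ctl /u01/oradata/db/control01.ctl
SQL> STARTUP;
```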


SAPCLE Loc=Chennai Tele Date: 14-02-2012
1). A database's mount point holding the control files, redo logs, and datafiles (CRD) is full. How will you move all the CRD files to another mount point?
Ans: 
2). How do you take a hot backup of an ASM database?
Ans:  
3). How do you take an ASM backup by using RMAN?
Ans:  See the bookmark link.
4). How to find alert log file location in database level?
Ans: 10g -- select name, value from v$parameter where name = 'background_dump_dest'

11g - select name, value from v$parameter where name = 'diagnostic_dest'
For 11G read up on ADRCI
5). How to find trace file location in database level?
Ans: Finding Trace Files
Trace files are stored in the Automatic Diagnostic Repository (ADR), in the trace directory under each ADR home. To help you locate individual trace files within this directory, you can use data dictionary views. For example, you can find the path to your current session's trace file or to the trace file for each Oracle Database process.
To find the trace file for your current session:
Submit the following query:
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';
The full path to the trace file is returned.
To find all trace files for the current instance:
Submit the following query:
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';
The path to the ADR trace directory for the current instance is returned.
To determine the trace file for each Oracle Database process:
Submit the following query:
SELECT PID, PROGRAM, TRACEFILE FROM V$PROCESS;

6). Unless we specify an RMAN location, where is the RMAN backup stored by default?
Ans: In the flash recovery area (db_recovery_file_dest) if one is configured; otherwise in $ORACLE_HOME/dbs.
7). If we don't have an FRA, then where will RMAN store the backup?
Ans: In the platform default location, $ORACLE_HOME/dbs.

iGATE patni Bangalore  F2F Date: 18-02-2012
1). How to rename the online redolog  file?
Ans: You can use operating system commands to relocate online redo logs, then use the ALTER DATABASE statement to make their new names (locations) known to the database. This procedure is necessary, for example, if the disk currently used for some online redo log files is going to be removed, or if datafiles and a number of online redo log files are stored on the same disk and should be separated to reduce contention.
To rename online redo log members, you must have the ALTER DATABASE system privilege. Additionally, you might also need operating system privileges to copy files to the desired location and privileges to open and back up the database.
Before relocating your redo logs, or making any other structural changes to the database, completely back up the database in case you experience problems while performing the operation. As a precaution, after renaming or relocating a set of online redo log files, immediately back up the database's control file.
Use the following steps for relocating redo logs. The example used to illustrate these steps assumes:
The log files are located on two disks: diska and diskb.
The online redo log is duplexed: one group consists of the members /diska/logs/log1a.rdo and /diskb/logs/log1b.rdo, and the second group consists of the members /diska/logs/log2a.rdo and /diskb/logs/log2b.rdo.
The online redo log files located on diska must be relocated to diskc. The new filenames will reflect the new location: /diskc/logs/log1c.rdo and /diskc/logs/log2c.rdo.
Steps for Renaming Online Redo Log Members
1. Shut down the database.
   SHUTDOWN
2. Copy the online redo log files to the new location.
Operating system files, such as online redo log members, must be copied using the appropriate operating system commands. See your operating system specific documentation for more information about copying files.
________________________________________
Note:
You can execute an operating system command to copy a file (or perform other operating system commands) without exiting SQL*Plus by using the HOST command. Some operating systems allow you to use a character in place of the word HOST. For example, you can use ! in UNIX.
________________________________________
The following example uses operating system commands (UNIX) to move the online redo log members to a new location:
mv /diska/logs/log1a.rdo /diskc/logs/log1c.rdo
mv /diska/logs/log2a.rdo /diskc/logs/log2c.rdo

3. Start up and mount the database, but do not open it.
   CONNECT / AS SYSDBA
   STARTUP MOUNT
4. Rename the online redo log members.
Use the ALTER DATABASE statement with the RENAME FILE clause to rename the database's online redo log files.
ALTER DATABASE 
  RENAME FILE '/diska/logs/log1a.rdo', '/diska/logs/log2a.rdo' 
           TO '/diskc/logs/log1c.rdo', '/diskc/logs/log2c.rdo';

5. Open the database for normal operation.
The online redo log alterations take effect when the database is opened.
ALTER DATABASE OPEN;

2). How do you drop a tablespace?
Ans: SQL> DROP TABLESPACE <tablespace_name> INCLUDING CONTENTS AND DATAFILES;
3). If an AWR report is generated, where is the output stored?
Ans: In the SYSAUX tablespace
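You can confirm where the workload repository lives by checking the SYSAUX occupants (a hedged sketch):

```sql
SELECT occupant_name, schema_name, space_usage_kbytes
FROM   v$sysaux_occupants
WHERE  occupant_name LIKE 'SM/%';   -- SM/AWR is the Automatic Workload Repository
```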
4). In expdp, what are CONTENT and CONSISTENT=Y?
Ans: The CONTENT option can be ALL, METADATA_ONLY, or DATA_ONLY. The original export's CONSISTENT=Y has no direct expdp equivalent; use FLASHBACK_TIME or FLASHBACK_SCN to get a consistent export (from 11.2 onward, the legacy CONSISTENT=Y parameter is translated to FLASHBACK_TIME automatically).
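For a consistent Data Pump export, the expdp counterpart of the old CONSISTENT=Y is a flashback parameter. A hedged sketch (the directory object and schema name are illustrative assumptions):

```sql
-- Run from the OS shell:
-- expdp scott DIRECTORY=dp_dir DUMPFILE=scott.dmp SCHEMAS=scott FLASHBACK_TIME=SYSTIMESTAMP
```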
5). What is “archiver error”? 
Ans: ORA-00257: archiver error. Connect internal only, until freed.

Cause: The archiver process received an error while trying to archive a redo log. If the problem is not resolved soon, the database will stop executing transactions. The most likely cause of this message is the destination device is out of space to store the redo log file.

Action: Check the archiver trace file for a detailed description of the problem. Also, verify that the device specified in the initialization parameter archive_log_dest is set up properly for archiving.
The Oracle ARCH background process is responsible for taking the redo logs from the online redo log files and writing them to the archived log destination.
ORA-00257 is a common error in Oracle.  You will usually see ORA-00257 upon connecting to the database because you have encountered a maximum in the flash recovery area (FRA), or db_recovery_file_dest_size .  
First, make sure your automatic archiving is enabled.  To check the archive log mode, try:
SQL> archive log list;
Now, note that you can find the archive destination if you are using a destination of USE_DB_RECOVERY_FILE_DEST by:
SQL> show parameter db_recovery_file_dest;
The next step in resolving ORA-00257 is to find out what value is being used for db_recovery_file_dest_size, use:
SQL> SELECT * FROM V$RECOVERY_FILE_DEST;
You may find that SPACE_USED is the same as SPACE_LIMIT; if this is the case, ORA-00257 can be remedied by moving the archive logs to some other destination.

You next need to archive the log files: 
SQL> alter system archive log all;
Note that while resolving ORA-00257 you may also encounter ORA-16020 (LOG_ARCHIVE_MIN_SUCCEED_DEST), so use the proper archive log path (keeping in mind that you may need to take extra measures if you are using the Flash Recovery Area, as you will receive more errors if you attempt to use LOG_ARCHIVE_DEST):
SQL>alter system set LOG_ARCHIVE_DEST_.. = 'location=/archivelogpath reopen';
The last step in resolving ORA-00257 is to switch the logfile to verify the fix:
SQL> alter system switch logfile;

6). Tablespace is full then how will you solve the problem?
Ans: 
7). The server is running very slowly because 398 of the 400 allowed users are connected. How do you fix the problem?
Ans: There are a few different limits that might come in to play in determining the number of connections an Oracle database supports. The simplest approach would be to use the SESSIONS parameter and V$SESSION, i.e.
The number of sessions the database was configured to allow
SELECT name, value 
  FROM v$parameter
 WHERE name = 'sessions'
The number of sessions currently active
SELECT COUNT(*)
  FROM v$session
As I said, though, there are other potential limits, both at the database level and at the operating system level, and depending on whether shared server has been configured. If shared server is ignored, you may well hit the limit of the PROCESSES parameter before you hit the limit of the SESSIONS parameter. And you may hit operating system limits, because each session requires a certain amount of RAM.
Ans-2: Question:  When connecting to Oracle I got this ORA-00020 error:
ORA-00020: maximum number of processes (1100) exceeded 
ERROR at line 1: 
ORA-01012: not logged on   
What is the cause of this ORA-00020 error and how do I fix it? 
Answer:  The ORA-00020 is a serious production error because a user cannot connect.  
The ORA-00020 is caused by two things:   
1. Disconnected processes:  Rogue "zombie" connections to Oracle that are idle (not working).  To fix this, use the ALTER SYSTEM KILL SESSION command.  You may also need to kill the session at the OS level with KILL -9 or the ORAKILL command.   
2. Too few process buckets:  Oracle limits the number of connected processes with the processes parameter, and you may get the ORA-00020 error as the natural result in growth of system usage.  
To fix this, increase the processes parameter, usually doubling the value to allow for future growth.  
The OERR command shows these details for the ORA-00020 error: 
ORA-00020: maximum number of processes (string) exceeded

Cause: All process state objects are in use.   

Action: Increase the value of the PROCESSES initialization parameter
Simply increase your processes parameter and you are all set!  
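A hedged sketch of increasing the parameter (the new value is illustrative):

```sql
SQL> ALTER SYSTEM SET processes=300 SCOPE=SPFILE;
-- processes is a static parameter, so the instance must be restarted:
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;
SQL> SHOW PARAMETER processes;
```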
Question:  I want to increase the max allowed number of sessions so I am going to increase the number of processes from 150 to 200. 
Do I need to change any other parameter value or simply I need to run the query (alter system set processes=200 scope=spfile;)?  
What else besides sessions is controlled by the processes parameter?
Answer:  You are correct that the processes parameter controls the number of allowed Oracle sessions, but the processes parameter also has "derivatives" parameters that are controlled by the value of the processes parameter.  These derived parameters include sessions, enqueue_resources, and _enqueue_hash_chains.
In this exercise our DBA students will learn to Administer the Monitoring of the Instance:


Changing Parameters

1.  Through careful consideration, you've decided to change the size of your SGA.  Perform the following changes:

    a) Buffer Cache to 1000M
    b) Shared Pool to 240M
    c) Keep Cache to 64M

2.  You also feel that you should change the CBO after the first round of changes.  Write the commands to tell Oracle:

    a) You want an optimizer mode that supports index-driven OLTP queries
    b) Roughly 60% of your index blocks are cached

3.  Change SESSION_CACHED_CURSORS to 100 so it will take effect after a reboot
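A hedged sketch of commands that would satisfy the exercise above (assuming automatic SGA management is not in use, so the individual component parameters can be set directly):

```sql
-- 1. SGA component sizes
ALTER SYSTEM SET db_cache_size = 1000M;
ALTER SYSTEM SET shared_pool_size = 240M;
ALTER SYSTEM SET db_keep_cache_size = 64M;

-- 2. CBO settings for index-driven OLTP queries
ALTER SYSTEM SET optimizer_mode = FIRST_ROWS;
ALTER SYSTEM SET optimizer_index_caching = 60;

-- 3. Takes effect only after the next restart
ALTER SYSTEM SET session_cached_cursors = 100 SCOPE=SPFILE;
```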


8). What is grid & grid technology?
Ans:-  Grid computing is nothing but using the resources of many separate computers connected by a network to solve large-scale computation problems.
Grid computing infrastructure continually analyzes demands for resources and adjusts supply accordingly.  
i). The main idea of grid computing is to provide redundancy and scalability.
ii). Grid computing is similar to RAID concepts.
iii). We can achieve reliability, availability and scalability.
For others:
1. Performance improve - different levels with explanation
2. expdp & exp
3. user level performance (when user tells its slow)
4. RMAN, Auto memory management - SGA_TARGET
5. Functions, procedures, triggers, PL/SQL concepts
6. spfile, pfile
7. Advantages of RMAN
8. catalog & nocatalog
9. DG broker use
10. oratab - location, when created and purpose
11. consistency=y
12. export schema level from expdp
13. When you run a query, which optimizer will it use?
Ans: Cost-based (in previous versions of Oracle you had two choices, RBO (rule-based optimizer) or CBO (cost-based optimizer); the RBO is still available in Oracle 10g but is deprecated, and the CBO is the preferred and default method).
14. After an upgrade, performance degrades - why?
15. Cache fusion is slow - how do you check?
16. How do you change the archive destination with RMAN?
Ans: Did you configure the default destination for backup? What does the SHOW ALL command say?

You have to configure the disk device and channels accordingly. Refer to the backup documentation for the steps.
17. How does RMAN know the archive log destination?
18. What do you do when a tablespace is full?

For me:

Round-1: (Technical)
1. About urself
2. adding tbs
3. dropping tbs
4. resizing tbs
5. What is the DB size and how do you check it?
Ans: The biggest portion of a database's size comes from the datafiles. To find out how many megabytes are allocated to ALL datafiles: 
select sum(bytes)/1024/1024 "Meg" from dba_data_files;
To get the size of all TEMP files: 
select nvl(sum(bytes),0)/1024/1024 "Meg" from dba_temp_files;
To get the size of the on-line redo-logs: 
select sum(bytes)/1024/1024 "Meg" from sys.v_$log;
Putting it all together into a single query: 
select a.data_size+b.temp_size+c.redo_size "total_size"
from ( select sum(bytes) data_size
         from dba_data_files ) a,
     ( select nvl(sum(bytes),0) temp_size
         from dba_temp_files ) b,
     ( select sum(bytes) redo_size
         from sys.v_$log ) c;
Another query ("Free space" reports data files free space): 
col "Database Size" format a20 
col "Free space" format a20 
select round(sum(used.bytes) / 1024 / 1024 ) || ' MB' "Database Size" 
,      round(free.p / 1024 / 1024) || ' MB' "Free space" 
from (select bytes from v$datafile 
      union all 
      select bytes from v$tempfile 
      union all 
      select bytes from v$log) used 
,    (select sum(bytes) as p from dba_free_space) free 
group by free.p 
/
How do I find the used space within the database size?
Select from the DBA_SEGMENTS or DBA_EXTENTS views to find the used space of a database. Example: 
SELECT SUM(bytes)/1024/1024 "Meg" FROM dba_segments;

6. RMAN
7. consistency=y
Ans: CONSISTENCY=Y is an original exp parameter; in expdp the equivalent point-in-time consistency is achieved with FLASHBACK_TIME or FLASHBACK_SCN.
8. COMPRESS=Y
9. schema level export using expdp
10. how many dumps are there in ORACLE
11. What can you do if the maximum number of user connections for Oracle is exceeded?
Ans: I can increase the PROCESSES parameter in the pfile (or spfile) and restart the instance.
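A minimal sketch of that answer (the value 300 is an assumption; PROCESSES is a static parameter, so the change needs SCOPE=SPFILE and an instance restart):

```sql
-- Check current usage against the limits
SELECT resource_name, current_utilization, max_utilization, limit_value
  FROM v$resource_limit
 WHERE resource_name IN ('processes', 'sessions');

-- Raise the limit; takes effect only after the restart
ALTER SYSTEM SET processes = 300 SCOPE = SPFILE;
SHUTDOWN IMMEDIATE;
STARTUP;
```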
12. how to run the script in background
13. how to check log synching in standby & prod
14. what is ur db size?

Round-2: (Technical)
1. Tell me about yourself
2. What do you want to be in future in your career path?
3. When did you feel YOU ARE THE REAL DBA?
4. RAC - 2 node architecture
5. RAC - background processes
Ans: RAC background processes:
1. Lock Monitor Process (LMON)
2. Lock Monitor Services (LMS)
3. Lock Monitor Daemon Process (LMD)
4. LCKn (Lock Process)
5. DIAG (Diagnostic Daemon)
1. Lock Monitor Process (LMON)
Maintains GCS memory structures.
Handles the abnormal termination of processes and instances.
Reconfiguration of locks & resources when an instance joins or leaves the cluster is handled by LMON (during reconfiguration LMON generates trace files).
It is responsible for executing dynamic lock remastering every 10 minutes (only in 10g R2 & later versions).
LMON manages the global locks & resources.
It monitors all instances in the cluster, primarily for dictionary cache locks, library cache locks & deadlocks on deadlock-sensitive enqueues & resources.
LMON also provides cluster group services.
Also called the Global Enqueue Service monitor.
2. Lock Monitor Services (LMS)
LMS is the most active background process, consuming a significant amount of CPU time (in 10g R2, ensure that the LMS process does not encounter CPU starvation).
Its primary job is to transport blocks across the nodes for cache-fusion requests.
If there is a consistent-read request, the LMS process rolls back the block, makes a consistent-read image of the block and then ships this block across the HSI (High Speed Interconnect) to the requesting process on the remote node.
LMS must also check constantly with the LMD background process (the GES process) to pick up the lock requests placed by the LMD process.
Each node has 2 or more LMS processes.
GCS_SERVER_PROCESSES --> the number of LMS processes, specified as an init.ora parameter.
The parameter's value is set based on the number of CPUs (MIN(CPU_COUNT/2, 2)).
In 10gR2, on a single-CPU instance, only one LMS process is started.
Increase the parameter value if global cache activity is very high.
Also called the GCS (Global Cache Services) processes.
Internal view: X$KJMSDP
3. Lock Monitor Daemon Process (LMDn)
The LMD process performs global lock deadlock detection.
It also monitors for lock conversion timeouts.
Sometimes referred to as the GES (Global Enqueue Service) daemon, since its job is to manage global enqueue and global resource access.
The LMD process also handles deadlock detection and remote enqueue requests.
Remote resource requests are requests originating from another instance.
Internal view: X$KJMDDP
4. LCKn (Lock Process)
Manages instance resource requests & cross-instance calls for shared resources.
During instance recovery, it builds a list of invalid lock elements and validates lock elements.
5. DIAG (Diagnostic Daemon)
A new background process in Oracle 10g (part of the enhanced diagnosability framework).
Regularly monitors the health of the instance.
Also checks for instance hangs & deadlocks.
It captures vital diagnostic data for instance & process failures.
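A quick way to see which of these background processes are actually running on an instance (a sketch; the LIKE patterns and the conventional paddr filter are assumptions):

```sql
-- List started RAC-related background processes on this instance
SELECT name, description
  FROM v$bgprocess
 WHERE paddr != '00'                  -- only processes that are started
   AND (name LIKE 'LM%' OR name LIKE 'LCK%' OR name = 'DIAG');
```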

6. DG - log synching
Ans: 1. Check for GAP on standby
2. Check redo received on standby
3. Check redo applied on standby
4. Identify missing archive log files
5. Copy archive log files
6. Register archive log files with standby
7. Restart the managed recovery operations


step 1. Check for GAP on standby

primary + standby >
 select max(sequence#) from v$log_history;

primary > SELECT THREAD# "Thread",SEQUENCE# "Last Sequence Generated"
          FROM V$ARCHIVED_LOG
          WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)
          ORDER BY 1;

step 2 and 3. Check redo received on standby and Check redo applied on standby

standby > SELECT ARCH.THREAD# "Thread", ARCH.SEQUENCE# "Last Sequence Received", APPL.SEQUENCE# "Last Sequence Applied", (ARCH.SEQUENCE# - APPL.SEQUENCE#) "Difference"
          FROM
         (SELECT THREAD# ,SEQUENCE# FROM V$ARCHIVED_LOG WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$ARCHIVED_LOG GROUP BY THREAD#)) ARCH,
         (SELECT THREAD# ,SEQUENCE# FROM V$LOG_HISTORY WHERE (THREAD#,FIRST_TIME ) IN (SELECT THREAD#,MAX(FIRST_TIME) FROM V$LOG_HISTORY GROUP BY THREAD#)) APPL
         WHERE
         ARCH.THREAD# = APPL.THREAD#
          ORDER BY 1;
          
step 4. Identify missing archive log files

standby > SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;


step 5.  Copy archive log files

After identifying a gap (as shown above), the DBA will need to query the primary database 
to locate the archived redo logs on the primary database. The following query assumes the 
local archive destination on the primary database is LOG_ARCHIVE_DEST_1:

primary > SELECT name
            FROM v$archived_log
            WHERE thread# = 1
              AND dest_id = 1
              AND sequence# BETWEEN 24 and 28;
  

step 6. Register archive log files with standby

--  Copy the above redo log files to the physical standby database and register 
    them using the ALTER DATABASE REGISTER LOGFILE ... SQL statement on the 
    physical standby database. 
    
    For example:

standby > ALTER DATABASE REGISTER LOGFILE '/u02/oraarchive/TESTDB/arch_t1_s24.dbf';
standby > ALTER DATABASE REGISTER LOGFILE '/u02/oraarchive/TESTDB/arch_t1_s25.dbf';
standby > ALTER DATABASE REGISTER LOGFILE '/u02/oraarchive/TESTDB/arch_t1_s26.dbf';
standby > ALTER DATABASE REGISTER LOGFILE '/u02/oraarchive/TESTDB/arch_t1_s27.dbf';
standby > ALTER DATABASE REGISTER LOGFILE '/u02/oraarchive/TESTDB/arch_t1_s28.dbf';

step 7. Restart the managed recovery operations


-- After the redo logs have been registered on the physical standby database, 
   the DBA can restart the managed recovery operations. 
   
   For example, to put the physical standby database into automatic recovery managed mode:

standby > alter database recover managed standby database disconnect from session;

7. Do you have any automation tools for any alert log critical errors?
8. Upgradation steps from 9i to 11g
Ans:  
9. Any Performance tuning you did?
10. A user reports that one session is slow - what do you do?
11. Explain your current project roles - share of your role with your team
12. what is your db size?
13. how many dbs you are handling?

Round-3: (Technical)
1. Tell me about yourself
2. Have you worked on RAC?
3. Tell me how you added a RAC node - installation of 2-node RAC
4. Where are your server backups located?
5. When did you feel YOU ARE THE REAL DBA?
6. How many DBs do you have?
7. What is your DB size?
8. Have you worked on DR (Disaster Recovery) scenarios?
9. Does your company have DR documents for different stages? If so, how many levels / stages / DR scenarios are there?
10. What is your DR scenario and how did you resolve it?
Round-4: (HR)
1. Tell me about yourself
2. Tell me about your background
3. Tell me about your family background
4. Roles in your current project - yours compared with others on your team
5. About your first company?
6. Any development activities at your first company?
7. About your current company?
8. Any development roles you are handling?
9. How many DBs are you handling?
10. What is your DB size?
11. Why did you resign from your first company? Exact reason?
12. Compensation from your first company after relieving?
13. Why did you resign from your second company? Exact reason?
14. Compensation from your second company after relieving?
15. Are you on-roll at your 1st company? If not, tell me your 3rd-party consultancy details
16. Are you on-roll at your 2nd company? If not, tell me your 3rd-party consultancy details
17. What is your current CTC & how much are you expecting? - if permanent & if contract
IBM              Chennai
1. Tell me about yourself
2. RAC - VOTING DISK
3. GRD
4. One archive log missing - how u can backup RMAN
Ans: RMAN backup error: what to do if an archivelog is missing or a datafile is offline during an RMAN backup?
By shailesh.mishra on Sep 12, 2009
A question I always encounter in my seminars is what a DBA should do if some archivelog is missing during an RMAN backup.
Apart from crossing your fingers, you can at least make a successful backup. A backup should not be missed even if some of the files are missing - nothing is more important than a successful backup, and all future recovery depends on the presence of backups.
RMAN> crosscheck archivelog all;
and then the following to delete expired archivelogs:
RMAN> delete expired archivelog all;
Also refer to the SKIP INACCESSIBLE clause of the BACKUP command, for example:
RMAN> backup archivelog all skip inaccessible delete input;
RMAN> change archivelog all validate;

Run the CHANGE ... VALIDATE command, because without it SKIP INACCESSIBLE will be needed every time you start a backup of the archivelogs.
Moreover, for the missing archivelogs it can also be done the following way:
connect target /
connect catalog rman/pwd@rmancatalog
run {
change archivelog from logseq = xxx until logseq = yyy unavailable;
}
xxx and yyy are the log sequence numbers of the logs you are missing.
RMAN> RUN
{
SET MAXCORRUPT FOR DATAFILE 1 TO 10;
BACKUP DATABASE
SKIP INACCESSIBLE
SKIP READONLY
SKIP OFFLINE;
}
To back up the database while skipping offline and read-only tablespaces, you can run the following command:
RMAN> BACKUP DATABASE
SKIP READONLY
SKIP OFFLINE;
The above discussion also covers other file types, if they are not available during the backup and the backup is raising an exception.
5. What is an archiver error?
6. Difference between cumulative and differential incremental backups
Ans: Incremental
A "normal" incremental backup will back up only those files that have been changed since the last backup of any type. This provides the quickest means of backup, since it makes copies only of files that have not yet been backed up. For instance, following a full backup on Friday, Monday’s tape will contain only those files changed since Friday. Tuesday’s tape contains only those files changed since Monday, and so on. The downside to this is that in order to perform a full restore, one needs to restore the last full backup first, followed by each of the subsequent incremental backups to the present day in the correct order. Should any one of these backup copies be damaged (particularly the full backup), the restore will be incomplete.
Differential
A cumulative backup of all changes made since the last full or normal backup, i.e., the differences since the last full backup. The advantage to this is the quicker recovery time, requiring only a full backup and the last differential backup to restore the system. The disadvantage is that for each day elapsed since the last full backup, more data needs to be backed up, especially if a significant proportion of the data has been changed.
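In RMAN terms these concepts map onto incremental backup options; a sketch (a level-0 base backup is assumed to exist before the level-1 runs):

```sql
-- Level 0: the baseline all incrementals depend on
BACKUP INCREMENTAL LEVEL 0 DATABASE;

-- Differential level 1 (the default): only blocks changed since the
-- most recent level 1 or level 0 backup
BACKUP INCREMENTAL LEVEL 1 DATABASE;

-- Cumulative level 1: all blocks changed since the last level 0, so a
-- restore needs only the level 0 plus the latest cumulative backup
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;
```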

7. ASM - disk removal explanation with concept and syntax
Ans: 
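The answer is blank in the original; as a hedged sketch (the disk group and disk names are hypothetical), dropping an ASM disk triggers a rebalance that migrates its extents to the remaining disks before the disk is released:

```sql
-- Drop a disk from a disk group; ASM rebalances extents off it first
ALTER DISKGROUP data DROP DISK data_0003 REBALANCE POWER 4;

-- Monitor the rebalance; the drop completes when no rows remain
SELECT group_number, operation, state, est_minutes
  FROM v$asm_operation;
```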
8. db cloning steps

Mode of Interview : Telephonic

These are first interview Questions

1. How do you check blocking sessions?
Ans: During adpatch, a deadlock can happen when one job depends on a resource that another job is holding. Using adctrl you may find two jobs in a running state with no log updates.
* To find blocking sessions the query below is useful. It will return two rows.
select process,sid, blocking_session from v$session where blocking_session is not null;
E.g.
SQL> select process,sid, blocking_session from v$session where blocking_session is not null;
PROCESS SID BLOCKING_SESSION
———— ———- —————-
1234 365 366
1234 366 365
* Second step to find the serial number for the Blocking Session to kill
select SERIAL# from v$session where SID=<SID number>
E.g.
SQL> select SERIAL# from v$session where SID=365;
SERIAL#
———-
130
* Final step: kill the blocking session
alter system kill session 'SID,SERIAL#';
E.g.
SQL> alter system kill session '365,130';
System altered.


2. How do you check background running jobs?
Ans: A task can usually be started as a background task by putting an '&' at the end of the command line.
If a task was started and is running in the foreground, it is still possible to move it to the background without cancelling it. To move a task from the foreground to the background perform the following steps:
1. CTRL-Z (that is, while holding the CTRL key down, tap the 'z' key) - this suspends the current foreground job (task).
2. Enter the job control command 'bg'.
3. Tap the 'Enter' key.
The job is now running in the background.
Useful commands to see which jobs are still running are 'jobs' and 'ps ua'. If the 'jobs' command is used, a background job can be brought to the foreground with the command fg n, where n is the job (not the PID) number.
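A minimal, self-contained sketch of launching a script in the background with nohup (the command body and log path are assumptions, standing in for a real script):

```shell
# Launch a long-running command in the background, detached from the
# terminal, with stdout/stderr captured in a log file
nohup sh -c 'echo started; sleep 1; echo done' > /tmp/bgjob.log 2>&1 &
BG_PID=$!            # remember the background PID
wait "$BG_PID"       # wait only so this demo is deterministic
cat /tmp/bgjob.log
```

In real use you would omit the wait, note the PID, and check on the job later with jobs or ps.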


4. How do you check locks on objects?
Ans: Ever wondered what to do when a user comes to you saying that he/she cannot run a DML statement which should complete within a second?
We suspect 2 reasons:
1) The database is terribly slow (which is unlikely for a simple update).
2) Someone is holding an exclusive lock on the object the user is trying to update (quite possible).
There can be many more reasons found upon further investigation. In this post we will see how to investigate reason 2 - whether some other user is holding a lock on the object this user is trying to modify.
Let's take a simple scenario.
session 1:
SQL> create table test (col1 number, col2 varchar2(5));
Table created.
SQL> insert into test values (1,'a');
1 row created.
SQL> insert into test values (2,'b');
1 row created.
SQL> commit;
Commit complete.
SQL> select * from test for update;
COL1 COL2
———- —–
1 a
2 b
Session 2:
SQL> update test set col2='a' where col1 = 1;
The above session 2 will hang !!!
SQL> select * from v$lock;

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5D868 00000003BFD5D888         54 CF          0          0          2          0     669613          0
00000003BFD5D900 00000003BFD5D920         54 XR          4          0          1          0     669625          0
00000003BFD5DA30 00000003BFD5DA50         54 RS         25          1          2          0     669613          0
00000003BFD5DAC8 00000003BFD5DAE8         35 TX     196652        882          0          6        344          0
00000003BFD5DB60 00000003BFD5DB80         53 TS          3          1          3          0     669599          0
00000003BFD5DBF8 00000003BFD5DC18         55 RT          1          0          6          0     669613          0
00000003BFD5DDC0 00000003BFD5DDE0         57 MR          1          0          4          0     669601          0
00000003BFD5DE58 00000003BFD5DE78         57 MR          2          0          4          0     669601          0
00000003BFD5DEF0 00000003BFD5DF10         57 MR          3          0          4          0     669601          0
00000003BFD5DF88 00000003BFD5DFA8         57 MR          4          0          4          0     669601          0
00000003BFD5E020 00000003BFD5E040         57 PW          1          0          3          0     669599          0

ADDR             KADDR                   SID TY        ID1        ID2      LMODE    REQUEST      CTIME      BLOCK
---------------- ---------------- ---------- -- ---------- ---------- ---------- ---------- ---------- ----------
00000003BFD5E3C8 00000003BFD5E3E8         57 MR         81          0          4          0     669593          0
00000003BE50B9B8 00000003BE50B9E0         49 TM      21837          0          3          0        374          0
00000003BE50BAB8 00000003BE50BAE0         35 TM      21837          0          3          0        344          0
00000003BDC81138 00000003BDC812C0         49 TX     196652        882          6          0        374          1

15 rows selected.
If a session is blocking another session, you will see BLOCK = 1 for that session. So in our case SID=49 is blocking some other session. We can also find the session which got blocked because of SID=49.
There are 2 columns, ID1 and ID2. The values of ID1 and ID2 for SID=49 will match those of some other SID in the v$lock table. If you look carefully, in our case they match SID=35. So SID 35 is the session which got blocked because of SID=49. Also, the session which gets blocked will not be able to get its request processed, so you will see the REQUEST column with a value > 0.
So from the above analysis we can say that SID 49 is blocking SID 35.
We can directly write a query which will give the required output.
SQL> select a.SID "Blocking Session", b.SID "Blocked Session"
2  from v$lock a, v$lock b
3  where a.SID != b.SID
4  and a.ID1 = b.ID1
5  and a.ID2 = b.ID2
6  and b.request > 0
7  and a.block = 1;
Blocking Session Blocked Session
---------------- ---------------
49              35
Let's understand the rest of the columns in the v$lock table.
ID1 and ID2 -> These represent the rollback segment and transaction table entries for that transaction. When session 1 acquired the lock it got a rollback segment and transaction table entry. When another session requested the same block, Oracle tried to generate a CR (consistent read) image of it by applying the rollback segment, but since there was an exclusive lock it could not proceed. Unless the first session releases the lock, the second session cannot proceed. So for the second session ID1 and ID2 show the same values as session 1, because session 2 was using the same rollback segment entries to make a CR copy.
TYPE -> This column gives the type of lock obtained on the table in which the data was locked. For more information on lock types check the definition of v$lock in the Oracle Reference. For a session to change data in a table it has to acquire a TX lock - the transaction enqueue lock.
LMODE -> This is the lock mode, with values from 0 to 6, 6 being the most restrictive and 0 the least restrictive. When a session obtains a lock in mode 6 it has an exclusive lock and no other session is allowed to update the data. So for SID=49 we can see that the lock mode is exclusive (6). For more information on lock modes, check the v$lock definition in the Oracle Reference.
REQUEST -> This column represents the lock mode requested by a blocked session. The value in this column is set only when the session is blocked. For example, in our case SID=35 is blocked, so its LMODE column shows 0 but its REQUEST column shows 6: SID 35 has requested lock mode 6, which has not yet been granted.
Sometimes if a session requests an exclusive lock (mode 6) it might not get it because there is no Interested Transaction List (ITL) slot available in the block the user wants to change. The session will wait in mode 4 (shared) for some time and, as soon as an ITL slot is available, obtain the lock in mode 6.
Also, in the rows above you can see that for SIDs 49 and 35 there are 2 more rows where the TYPE column shows 'TM'. These are the DML-level enqueues, acquired in lock mode 3 (row exclusive). This lock prevents any DDL activity on the table.
We can find the object name by getting the ID1 column value from these rows containing TM lock. 21837 in our case.
SQL> select object_name from dba_objects where object_id=21837;
OBJECT_NAME
————–
TEST
We can even get the row which is being blocked by transaction using v$session.
SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
2  from v$session where sid=35;
ROW_WAIT_OBJ# ROW_WAIT_FILE# ROW_WAIT_BLOCK# ROW_WAIT_ROW#
————- ————– ————— ————-
21837              1           45082             0
The above 4 components are the components of a ROWID, and we can construct the ROWID from these components using the DBMS_ROWID package.
SQL> select dbms_rowid.rowid_create (1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW#)
2  from v$session where sid=35;
DBMS_ROWID.ROWID_C
——————
AAAFVNAABAAALAaAAA
Now we can check if this was the row which blocking session was trying to update
SQL> select * from test where rowid = 'AAAFVNAABAAALAaAAA';
COL1 COL2
———- —–
1 a
This was the row the blocking session was trying to update.
Hope this helps !!


5. Difference between EXP & EXPDP
Ans: If you have worked with pre-10g databases you are probably familiar with the exp/imp utilities. Oracle 10g introduces a new feature called Data Pump export and import. Data Pump export/import differs from the original export/import; the differences are listed below.

1)Impdp/Expdp are self-tuning utilities. Tuning parameters that were used in original Export and Import, such as BUFFER and RECORDLENGTH, are neither required nor supported by Data Pump Export and Import.

2)Data Pump represents metadata in the dump file set as XML documents rather than as DDL commands.

3)Impdp/Expdp use parallel execution rather than a single stream of execution, for improved performance.

4)In Data Pump, expdp full=y followed by impdp schemas=prod is the same as expdp schemas=prod followed by impdp full=y, whereas original export/import does not always exhibit this behavior.

5)Expdp/Impdp access files on the server rather than on the client. 

6)Expdp/Impdp operate on a group of files called a dump file set rather than on a single sequential dump file.

7)Sequential media, such as tapes and pipes, are not supported in oracle data pump.But in original export/import we could directly compress the dump by using pipes.

8)The Data Pump method for moving data between different database versions is different than the method used by original Export/Import.

9)When you are importing data into an existing table using either APPEND or TRUNCATE, if any row violates an active constraint, the load is discontinued and no data is loaded. This is different from original Import, which logs any rows that are in violation and continues with the load.

10)Expdp/Impdp consume more undo tablespace than original Export and Import. 

11)If a table has compression enabled, Data Pump Import attempts to compress the data being loaded. Whereas the original Import utility loaded data in such a way that even if a table had compression enabled, the data was not compressed upon import.

12)Data Pump supports character set conversion for both direct path and external tables. Most of the restrictions that exist for character set conversions in the original Import utility do not apply to Data Pump. The one case in which character set conversions are not supported under the Data Pump is when using transportable tablespaces.

13)There is no option to merge extents when you re-create tables. In original Import, this was provided by the COMPRESS parameter. Instead, extents are reallocated according to storage parameters for the target table.
Differences between Data Pump impdp and import utility
The original import utility dates back to the earliest releases of Oracle, and it's quite slow and primitive compared to Data Pump.  While the old import (imp) and Data Pump import (impdp) do the same thing, they are completely different utilities, with different syntax and characteristics.  
Here are the major syntax differences between import and Data Pump impdp:
Data Pump does not use the BUFFERS parameter
Data Pump export represents the data in XML format
A Data Pump schema import will recreate the user and execute all of the associated security privileges (grants, user password history).
Data Pump's parallel processing feature is dynamic. You can connect to a Data Pump job that is currently running and dynamically alter the number of parallel processes.
Data Pump will recreate the user, whereas the old imp utility required the DBA to create the user ID before importing.  
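For comparison, a hypothetical schema-level Data Pump export and import (the directory object, schema, password and file names are all assumptions for illustration):

```text
-- One-time setup (as a privileged user): a directory object for dump files
SQL> CREATE DIRECTORY dpump_dir AS '/u01/app/oracle/dpump';
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;

-- Schema-level export, then import, from the OS command line
$ expdp scott/tiger schemas=scott directory=dpump_dir dumpfile=scott.dmp logfile=scott_exp.log
$ impdp scott/tiger schemas=scott directory=dpump_dir dumpfile=scott.dmp logfile=scott_imp.log
```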



6. Why do you use exp/imp?
Ans: 

7. What is the purpose of SGA_TARGET?
Ans: SGA_TARGET is a database initialization parameter (introduced in Oracle 10g) that can be used for automatic SGA memory sizing. 
Parameter description: 
SGA_TARGET
Property          Description
Parameter type    Big integer
Syntax            SGA_TARGET = integer [K | M | G]
Default value     0 (SGA autotuning is disabled)
Modifiable        ALTER SYSTEM
Range of values   64 to operating system-dependent
Basic             Yes
SGA_TARGET provides the following: 
Single parameter for total SGA size 
Automatically sizes SGA components
Memory is transferred to where most needed
Uses workload information 
Uses internal advisory predictions 
STATISTICS_LEVEL must be set to TYPICAL 
By using one parameter we don't need to use all other SGA parameters like. 
DB_CACHE_SIZE (DEFAULT buffer pool) 
SHARED_POOL_SIZE (Shared Pool) 
LARGE_POOL_SIZE (Large Pool) 
JAVA_POOL_SIZE (Java Pool) 
Enable SGA_TARGET
SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_target                           big integer 0
SQL> show parameter sga_max_size

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
sga_max_size                         big integer 600M
As we can see our automatic SGA tuning is not enabled so we can enable it by setting the SGA_TARGET parameter value. 
SQL> alter system set sga_target=500m;
System altered.

SQL> show parameter sga_target

NAME                                 TYPE        VALUE
------------------------------------ ----------- -------
sga_target                           big integer  500M
Resize SGA_TARGET
SGA_TARGET is dynamic 
Can be increased till SGA_MAX_SIZE
Can be reduced till some component reaches minimum size
Change in value of SGA_TARGET affects only automatically sized components
SQL> show parameter sga_max_size
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
sga_max_size                         big integer 600M
SQL> show parameter sga_target
NAME                                 TYPE        VALUE
------------------------------------ ----------- -----------
sga_target                           big integer 500M
We can resize it only up to 600M; if we try to increase it beyond 600M we will get an error.
SQL> alter system set sga_target=605m;
alter system set sga_target=605m
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-00823: Specified value of sga_target greater than sga_max_size
For that we must first increase the SGA_MAX_SIZE parameter value, and then restart the instance, because it is a static parameter.
SQL> alter system set sga_max_size=956m scope=spfile;
System altered.

SQL> startup force
ORACLE instance started.
Total System Global Area 1000189952 bytes
Fixed Size                  1337492 bytes
Variable Size             624953196 bytes
Database Buffers          369098752 bytes
Redo Buffers                4800512 bytes
Database mounted.
Database opened.

SQL> show parameter sga_max_size
NAME                                 TYPE        VALUE
------------------------------------ ----------- --------
sga_max_size                         big integer 956M

SQL> alter system set sga_target=900m;
System altered.
Be careful when you are using a 32-bit system: sga_max_size should be less than 4GB, otherwise the Oracle instance will fail to start up. Also on a 32-bit system, it is observed that total SGA + total PGA should be less than the actual physical memory available.

It appears that on 32-bit Windows, the maximum size for (SGA + PGA + Oracle memory for connections) must be < 2000 MB.

On 32-bit Windows with the "/3GB" and "/PAE" options (Windows 2003 Server and later), the maximum size for (SGA + PGA + Oracle memory for connections) must be < 3000 MB.

Disable SGA_TARGET
We can disable automatic SGA tuning by setting the SGA_TARGET parameter to 0.
SQL> alter system set sga_target=0;

System altered.


7. How do you clone a database? For which cloning is downtime required, and for which cloning is downtime not required?

8.What is the Procedure for RMAN cloning?

9. How do you enable archive mode?

10.How many archive destinations can we give?

Ans: In 10g, up to 10 (LOG_ARCHIVE_DEST_1 through LOG_ARCHIVE_DEST_10); 11gR2 raised the limit to 31.
11. How do you enable archivelog mode?
Ans: Archived redo log data dictionary views
Oracle provides data dictionary views for the archived redo logs, as seen in this list:
v$archived_log - information about archived redo logs
v$parameter - shows the location of the flash recovery area where archived redo logs are created
v$log_history - contains information on previous redo logs
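The views above only describe archived logs; the actual steps to enable ARCHIVELOG mode are, as a sketch (the database must be cleanly shut down first):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST;   -- should now report "Archive Mode"
```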


12. How do you enable flashback? Tell the parameters also.
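A sketch of the usual parameters and command (the size and path values are assumptions; the database must be in ARCHIVELOG mode, and in 10g it must be mounted but not open when you turn flashback on):

```sql
-- Flash recovery area: quota and location
ALTER SYSTEM SET db_recovery_file_dest_size = 10G SCOPE = BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/oracle/fra' SCOPE = BOTH;

-- How far back flashback logs should reach (in minutes; 1440 = 1 day)
ALTER SYSTEM SET db_flashback_retention_target = 1440;

-- Enable flashback (in 10g: SHUTDOWN IMMEDIATE; STARTUP MOUNT; first)
ALTER DATABASE FLASHBACK ON;
```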

13. How do you apply a particular archivelog (command)?

14. By using an RMAN backup, can I restore a particular table?
Ans: 

15. What is Refresh Database?
Ans: Database refresh using a cold backup
Sometimes you will be asked to refresh a UAT database using a cold backup of the production one.  Here are the steps:
1.  Prepare the controlfile creation script.  This can be done by:
alter database backup controlfile to trace as '/export/home/oracle/ctlfile.sql';
2.  View and edit ctlfile.sql so that it contains the following:
CREATE CONTROLFILE SET DATABASE "UATDB" RESETLOGS NOARCHIVELOG
  MAXLOGFILES 16
  MAXLOGMEMBERS 3
  MAXDATAFILES 100
  MAXINSTANCES 1
  MAXLOGHISTORY 453
LOGFILE
  GROUP 1 '/u01/oradata/UATDB/redo01.log' SIZE 200M,
  GROUP 2 '/u02/oradata/UATDB/redo02.log' SIZE 200M,
  GROUP 3 '/u03/oradata/UATDB/redo03.log' SIZE 200M
-- STANDBY LOGFILE
DATAFILE
  '/u01/oradata/UATDB/system01.dbf',
  '/u01/oradata/UATDB/undotbs01.dbf',
  '/u01/oradata/UATDB/users01.dbf',
  '/u01/oradata/UATDB/data01.dbf',
  '/u01/oradata/UATDB/tools01.dbf',
  '/u01/oradata/UATDB/perfstat01.dbf',
  '/u01/oradata/UATDB/sysaux01.dbf'
CHARACTER SET WE8ISO8859P1
;
3. Generate a script to create the TEMP tablespace
set long 2000
select DBMS_METADATA.GET_DDL('TABLESPACE','TEMP') from dual;
4. Shutdown UATDB cleanly
shutdown immediate
5. Copy the cold backup files from production to UAT box.
6. Start the database in nomount state

startup nomount
7. Run the create control file script
@/export/home/oracle/ctlfile.sql
8. Open the database with resetlogs
alter database open resetlogs;
9. Create temp tablespace from the script generated from step 3:
CREATE TEMPORARY TABLESPACE "TEMP1" TEMPFILE
'/u01/oradata/UATDB/TEMP01.dbf' SIZE 5242880000
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576;


16. How do you analyze table statistics?

Ans: To estimate statistics:
 ANALYZE TABLE tablename ESTIMATE STATISTICS SAMPLE 30 PERCENT;
To compute statistics:
 ANALYZE TABLE tablename COMPUTE STATISTICS;
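ANALYZE still works, but for optimizer statistics the DBMS_STATS package is the recommended route; a hedged equivalent of the 30-percent sample above (the schema and table names are illustrative):

```sql
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',   -- illustrative schema
    tabname          => 'EMP',     -- illustrative table
    estimate_percent => 30,        -- like ESTIMATE STATISTICS SAMPLE 30 PERCENT
    cascade          => TRUE);     -- also gather statistics on the table's indexes
END;
/
```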

17. What is LogMiner?

Ans: LogMiner is an Oracle utility. Using LogMiner one can query the contents of online redo log files and archived log files. It can be used as a powerful data audit tool, as well as a tool for sophisticated data analysis. The LogMiner tool can help the DBA find changed records in redo log files by using a set of PL/SQL procedures and functions. LogMiner extracts all DDL and DML activity from the redo log files for viewing by a DBA via the dynamic performance view V$LOGMNR_CONTENTS.
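A minimal LogMiner session might look like the following sketch (the archive log path and table name are illustrative):

```sql
-- register a log file and start LogMiner using the online catalog as dictionary
EXEC DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/u01/arch/1_100_123456.arc', options => DBMS_LOGMNR.NEW);
EXEC DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- query the mined redo
SELECT scn, operation, sql_redo
  FROM v$logmnr_contents
 WHERE seg_name = 'EMP';   -- illustrative table

EXEC DBMS_LOGMNR.END_LOGMNR;
```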
18. Why do we prefer RAC?

19. When we apply a patch in RAC, is downtime required?

Ans: In 10g downtime is required, but in 11gR2 there is no downtime because rolling upgrade features are available.
20.What is ASM Concept?

21. What is the Data Guard concept?

22. What do you know about 11g?
-------------------------------------------------------------------------------------------

Mode of Interview : Telephonic

These are Second interview Questions

1. How do you do Dataguard configuration?

2. Is Data Guard configuration required daily?

3. What is switchover & failover?

4. What is the ASM file system, what is a normal filesystem, and what is the difference?

5. What are the steps for ASM configuration?

6. What is the purpose of DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n?
Ans:  db_create_file_dest = default location for Oracle-managed datafiles,
db_create_online_log_dest_n = default location for Oracle-managed control files and online redo logs
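For example (the directories and tablespace name are illustrative):

```sql
-- Oracle-managed datafiles go here when no file name is given
ALTER SYSTEM SET db_create_file_dest = '/u01/oradata' SCOPE=BOTH;
-- Oracle-managed online redo logs and control files go here
ALTER SYSTEM SET db_create_online_log_dest_1 = '/u02/oradata' SCOPE=BOTH;

-- with the above set, this creates an OMF datafile automatically:
CREATE TABLESPACE demo_ts DATAFILE SIZE 100M;
```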


7. What is a materialized view? Why is it used? What is the refresh option? What is fast refresh, fast incremental refresh, fast differential refresh?

Ans: 
8. What is tuning? 
Scenario: A big application normally takes 1 hr but today it took 6 hrs. Why, and how do you check?

9. What will we check in an explain plan?
Ans: Oracle Explain Plan
The visual explain plan feature is supported for Oracle 9i and later versions. Unlike many other products, DB Solo does not require you to have a PLAN_TABLE in place for this feature. Instead, the V$SQL_PLAN table is used which is more accurate since it contains the actual plan, not a predicted one like the PLAN_TABLE approach does. 
For a user to be able to read the V$SQL_PLAN and V$SESSION tables, she must be granted the SELECT_CATALOG_ROLE role or the SELECT ANY DICTIONARY system privilege. The difference between these two is that SELECT_CATALOG_ROLE is a role that contains several privileges whereas SELECT ANY DICTIONARY is a single system privilege. You can check to see if you have the necessary privileges by issuing the 'SELECT COUNT(*) FROM V$SQL_PLAN' SQL statement in the query window. If you get the ORA-00942 (table or view does not exist) error, you need to ask your DBA to grant the necessary privileges.
The visual representation of the explain plan is constructed based on the following columns from the V$SQL_PLAN table 
OPERATION - Name of the operation that was performed, e.g. TABLE ACCESS
OPTIONS - Variation of the operation, e.g. FULL in case of a full table scan, i.e. TABLE ACCESS (FULL)
OBJECT_NAME - Name of the table or index that this step applies to
COST - Cost of the operation assigned to this step by the optimizer, NULL for rule-based optimizer
CPU_COST - CPU cost of the operation assigned to this step by the optimizer, NULL for rule-based optimizer
IO_COST - IO cost of the operation assigned to this step by the optimizer, NULL for rule-based optimizer
CARDINALITY - Estimated number of rows produced by this step
OPTIMIZER - RULE or CHOOSE 
ACCESS_PREDICATES - Predicate to locate rows in the access structure.
FILTER_PREDICATES - Predicate that will filter rows before returning them from this step
For the Oracle's cost-based optimizer (CBO) to work correctly, you need to regularly compute statistics for your tables and indexes using the Oracle-supplied DBMS_STATS package or the ANALYZE TABLE statement.
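A quick way to see a plan from SQL*Plus, without querying V$SQL_PLAN directly, is the classic PLAN_TABLE route (the query itself is illustrative):

```sql
EXPLAIN PLAN FOR
  SELECT * FROM emp WHERE deptno = 10;   -- illustrative query

-- DBMS_XPLAN formats the PLAN_TABLE rows for reading
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```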


10. If you collect statistics on an index, what happens internally?

Ans: 
11. What is meant by clustering factor?

12. What are joins?
Ans: Joins are used to fetch data from more than one table in a query.
You can broadly categorize joins into four types:
1. Equi join
2. Non-equi join
3. Outer join
4. Self join


13. Scenario: A temp file is corrupted. What will happen & how do you proceed?

Ans: Whenever the temporary tablespace's temp file is corrupted or deleted accidentally, it is easy to recover. RMAN never backs up the temporary tablespace. Up to Oracle 9i, we take the temp file offline and drop it at the database level (alter database datafile 'c:/oracle/oradata/temp.dbf' offline drop). Once it is dropped, drop the temporary tablespace and recreate a new one. Oracle 10g introduced a new feature which creates the temp file automatically when we restart the database.
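A hedged sketch of the manual recreate described above (paths, sizes, and tablespace names are illustrative; the default temporary tablespace cannot be dropped while it is still the default, hence the switch):

```sql
-- create a replacement, switch the default, then drop the damaged one
CREATE TEMPORARY TABLESPACE temp2
  TEMPFILE '/u01/oradata/ORCL/temp02.dbf' SIZE 500M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
```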

Here are the steps in Oracle 10g. The database is running on Windows.

Step1: Let us delete the temp file to simulate that the temp file is corrupted.

D:\oracle\product\10.2.0\oradata\orcl>dir *.dbf
Volume in drive D has no label.
Volume Serial Number is 70AE-6E52

Directory of D:\oracle\product\10.2.0\oradata\orcl

06/14/2009 08:55 PM 104,865,792 EXAMPLE01.DBF
06/14/2009 08:55 PM 251,666,432 SYSAUX01.DBF
06/14/2009 08:55 PM 503,324,672 SYSTEM01.DBF
06/14/2009 08:50 PM 20,979,712 TEMP01.DBF
06/14/2009 08:55 PM 31,465,472 UNDOTBS01.DBF
06/14/2009 08:55 PM 5,251,072 USERS01.DBF
6 File(s) 917,553,152 bytes
0 Dir(s) 15,348,064,256 bytes free

D:\oracle\product\10.2.0\oradata\orcl>del TEMP01.DBF

D:\oracle\product\10.2.0\oradata\orcl>dir *.dbf
Volume in drive D has no label.
Volume Serial Number is 70AE-6E52

Directory of D:\oracle\product\10.2.0\oradata\orcl

06/14/2009 08:55 PM 104,865,792 EXAMPLE01.DBF
06/14/2009 08:55 PM 251,666,432 SYSAUX01.DBF
06/14/2009 08:55 PM 503,324,672 SYSTEM01.DBF
06/14/2009 08:55 PM 31,465,472 UNDOTBS01.DBF
06/14/2009 08:55 PM 5,251,072 USERS01.DBF
5 File(s) 896,573,440 bytes
0 Dir(s) 15,369,043,968 bytes free

D:\oracle\product\10.2.0\oradata\orcl>

Step 2: Let us restart the database. In Oracle 9i, the database will not open. But in Oracle 10g, when we start the database, it creates the temp file automatically and opens the database.

D:\oracle\product\10.2.0\oradata\orcl>set oracle_sid=orcl

D:\oracle\product\10.2.0\oradata\orcl>sqlplus sys/password as sysdba

SQL*Plus: Release 10.2.0.1.0 - Production on Sun Jun 14 21:41:51 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 281018368 bytes
Fixed Size 1248552 bytes
Variable Size 88081112 bytes
Database Buffers 188743680 bytes
Redo Buffers 2945024 bytes
Database mounted.
Database opened.
SQL>

Step 3: Database opened successfully. Let us check the DB directory...
14. Where can you find running RMAN jobs, and how?

Ans: Script to monitor RMAN progress
Question:  I have a long running RMAN job, and I want a script to monitor the progress of RMAN execution.  How do you monitor RMAN progress?
Answer:  Oracle has several views that can monitor long-running jobs, including v$session_longops, and v$process joined with v$session.
select
  sid,
  start_time,
  totalwork,
  sofar, 
 (sofar/totalwork) * 100 pct_done
from 
   v$session_longops
where 
   totalwork > sofar
AND 
   opname NOT LIKE '%aggregate%'
AND 
   opname like 'RMAN%';
select 
   sid, 
   spid, 
   client_info, 
   event, 
   seconds_in_wait, 
   p1, p2, p3
 from 
   v$process p, 
   v$session s
 where 
   p.addr = s.paddr
 and 
   client_info like 'rman channel=%';

Yousef Rifai has published this RMAN monitoring script, quite handy when you need to monitor the status of a long running RMAN backup job:
REM RMAN Progress
alter session set nls_date_format='dd/mm/yy hh24:mi:ss'
/
select SID, START_TIME,TOTALWORK, sofar, (sofar/totalwork) * 100 done,
sysdate + TIME_REMAINING/3600/24 end_at
from v$session_longops
where totalwork > sofar
AND opname NOT LIKE '%aggregate%'
AND opname like 'RMAN%'
/

REM RMAN waits
set lines 120
column sid format 9999
column spid format 99999
column client_info format a25
column event format a30
column secs format 9999
SELECT SID, SPID, CLIENT_INFO, event, seconds_in_wait secs, p1, p2, p3
  FROM V$PROCESS p, V$SESSION s
  WHERE p.ADDR = s.PADDR
  and CLIENT_INFO like 'rman channel=%'
/

15.What is RMAN Channel?

Ans: RMAN CHANNEL – An RMAN channel is a communication pipeline between the RMAN executable and a target or auxiliary database. An RMAN channel consists of a server session on the target or auxiliary database and a data stream from the database to the backup device or vice versa. The RMAN console sends commands to the database using this channel, and the server session running on the database executes the command on behalf of RMAN.
16. Scenario: There are 3 control files and 1 is corrupted. What will happen to the database?
Ans: 
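One common recovery sketch, assuming three multiplexed copies and OS-level access (paths are illustrative): the instance aborts as soon as any copy becomes unreadable, and the fix is to replace the bad copy with a surviving one while the database is down:

```sql
-- instance has crashed; at the OS level, copy a good member over the bad one:
--   $ cp /u01/oradata/ORCL/control01.ctl /u03/oradata/ORCL/control03.ctl
-- then start the database normally:
STARTUP;
```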

17. Scenario: The DB has 3 redo log groups; the 3rd group is current and has only 1 member, which got corrupted. The instance will abort. How will you bring the DB up?

Ans: 
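A hedged sketch of the usual fix when the current (unarchived) group is lost — incomplete recovery followed by resetlogs, assuming a usable backup exists:

```sql
STARTUP MOUNT;
-- CLEAR LOGFILE cannot be used on the current group; fall back to incomplete recovery
RECOVER DATABASE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;
-- transactions in the lost redo are gone; take a fresh full backup immediately
```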
18. What happens if an active redo log group that still needs to be archived is corrupted?

19. How did you create DB links in your previous company?

20. What are the errors faced in last 3 months?

21.What will you do if UNDO gets corrupted?
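A common approach, sketched with illustrative names: create a fresh undo tablespace, switch the instance to it, then drop the corrupt one:

```sql
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/oradata/ORCL/undotbs02.dbf' SIZE 500M;
ALTER SYSTEM SET undo_tablespace = undotbs2 SCOPE=BOTH;
-- once no active transactions still reference the old undo:
DROP TABLESPACE undotbs1 INCLUDING CONTENTS AND DATAFILES;
```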


> 1.     Tell me about yourself
Ans:
> 2.     What is PMON
Ans: PMON (Process MONitor) is an Oracle background process created when you start a database instance. The PMON process will free up resources if a user process fails (eg. release database locks). 
PMON normally wakes up every 3 seconds to perform its housekeeping activities. PMON must always be running for an instance; if not, the instance will terminate.

> 3.     Write a script for exporting a table partition
Ans: 
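No script is given above; a hedged one-liner using the original export utility (the user, table, and partition names are illustrative):

```
exp system/password tables=scott.sales:sales_q1 file=sales_q1.dmp log=sales_q1.log

-- or the Data Pump equivalent:
expdp system/password tables=scott.sales:sales_q1 directory=DATA_PUMP_DIR dumpfile=sales_q1.dmp
```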
> 4.     What is IGNORE=Y while importing
Ans: IGNORE=Y tells import to ignore object-creation errors (such as "table already exists") and continue loading rows into the existing objects.
> 5.     When can you rebuild an index – which cases require it?
Ans:
> 6.     How can you resolve it when the backup server network fails while taking a hot backup?
>
> 7.     How many types of backup are there?
>
> 8.     What is catalog / no-catalog in RMAN
Ans: Start RMAN without connecting to a database
 $ rman
Start RMAN and run a command file:
 $ rman @/scripts/rman_commands.txt  
Connect to a target database and a recovery catalog:
 $ rman TARGET SYS/pwd CATALOG cat_usr/pwd@cat_str
Connect to a target database without using a recovery catalog:
 $ rman TARGET SYS/pwd@sid NOCATALOG

>
> 9.     How to check latest Opatch applied version?
Ans: Run opatch lsinventory from $ORACLE_HOME/OPatch to list the applied patches; opatch version shows the OPatch version itself.
> 10.   RAC Installation steps
Ans:
> 11.   Steps on migration from 10.0.2.0 to 10.0.4.0
Ans:
> 12.   Routine activities

Intelligroup Interview Questions:

1.      Clusterware Products?
2.      what is Virtual IP?
3.      Name some kernel parameters? Their minimum values, etc.
4.      Oracle Server BG process?
5.      Oracle SW installation steps?
6.      What is Checkpoint? When do Checkpoint occur?
7.      RMAN – Difference between a full backup and a level 0 backup
Ans: Difference between an RMAN full backup & a level 0 backup
—————————————————————————
A level 0 incremental backup, which is the base for subsequent
incremental backups, copies all blocks containing data, backing the
datafile up into a backup set just as a full backup would.
The only difference between a level 0 incremental backup and a full
backup is that a full backup is never included in an incremental
strategy.
If no level 0 backup exists when you run a level 1 or higher backup,
RMAN makes a level 0 backup automatically to serve as the base.
The benefit of performing multilevel incremental backups is that RMAN
does not back up all blocks all of the time.
8.      CTWR (Change Tracking Writer)?
Ans: 
1) DB is very slow
2) A particular SQL is taking a very long time (let's say 10 mins). Yesterday it took only 2 sec to run.
3) DB hung
4) DB crashed suddenly
5) DB not coming up after bounce







Block :

1) How to identify a corrupt block
2) How to recover a corrupt block

Standby / Data Guard :

1) A corrupt archive log is shipped to the standby
2) How will you monitor the archive log gap?
3) What is your current standby setup mode?
4) You add a tablespace in the primary... does this require adding a tablespace in the standby as well? Or will that be automatic?
5) Difference between a physical & a logical setup
6) How to set a delay?
7) What is the acceptable archive log gap?
8) What is failover & switchover? Difference?
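For question 2 in this list, a quick check on the standby can be sketched as:

```sql
-- run on the standby: any rows here indicate missing archive sequences
SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

-- compare the highest applied sequence per thread against the primary
SELECT thread#, MAX(sequence#)
  FROM v$archived_log
 WHERE applied = 'YES'
 GROUP BY thread#;
```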


RAC :

1) One of the cluster nodes crashed. What to do?
2) The voting disk crashed. What to do?
3) How to verify the interconnect speed?
4) How to know the nodes in the cluster? How to know the RAC nodes? What's the difference?
5) Do we need to give VIP details or normal host TNS details to the client? How does he connect?
6) Cache Fusion?

ASM :

1) How to add/remove ASM disk ?
2) What are RAID levels ? Which RAID level you are using ?
3) How to identify an ASM instance ?
4) How ASM instance is different from a normal instance ?
5) Advantage of ASM
6) What if an ASM disk crashes?
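For question 3 in this list, a couple of quick checks (a hedged sketch):

```sql
-- an ASM instance reports INSTANCE_TYPE = 'ASM' (a normal instance says 'RDBMS')
SHOW PARAMETER instance_type;

-- and an ASM instance's name conventionally starts with +ASM
SELECT instance_name FROM v$instance;
```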

RMAN:

1) What backup strategy are you following? (e.g. daily incremental & weekly cumulative)
2) What's your DB size & how much time does the backup take?
3) Block change tracking?
4) Difference between an RMAN hot/online backup & a complete baseline RMAN backup?
5) How to recover a datafile/controlfile/database using RMAN?
6) An RMAN backup failed. What could be the reasons?
7) Where does RMAN store the backup information?
8) How do you schedule RMAN backups?

OS :

1) How to know the OS bit version?
2) Kernel version?
3) RAM available
4) Huge pages
5) No. of CPUs
6) Why do we need to set kernel parameters?
7) What is a semaphore?
8) Difference between a latch & a lock?
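Several of these OS questions map to one-liners on Linux (these commands assume a Linux box; the /proc paths are Linux-specific):

```shell
uname -m                          # machine architecture, e.g. x86_64 => 64-bit
uname -r                          # kernel version
grep MemTotal /proc/meminfo       # RAM available
grep -i huge /proc/meminfo        # huge pages configuration
grep -c ^processor /proc/cpuinfo  # number of CPUs
```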

-------------------------------------------------------------------------------------------
