Friday, March 31, 2017

Generating AWR Reports

The primary interface for generating AWR reports is Oracle Enterprise Manager, and you should use it whenever possible. If Oracle Enterprise Manager is unavailable, you can generate AWR reports by running the SQL scripts described below:

# To generate an AWR report:


@$ORACLE_HOME/rdbms/admin/awrrpt.sql


# To generate an AWR report on a specific database instance:


@$ORACLE_HOME/rdbms/admin/awrrpti.sql

  
# To generate an AWR report in an Oracle RAC environment: 


@$ORACLE_HOME/rdbms/admin/awrgrpt.sql


# To generate an AWR report on a specific database instance in an Oracle RAC environment:


@$ORACLE_HOME/rdbms/admin/awrgrpti.sql


# To generate an AWR report for a particular SQL statement:


@$ORACLE_HOME/rdbms/admin/awrsqrpt.sql


# To generate an AWR report for a particular SQL statement on a specified database instance:


@$ORACLE_HOME/rdbms/admin/awrsqrpi.sql


# To generate an AWR Compare Periods report:


@$ORACLE_HOME/rdbms/admin/awrddrpt.sql


# To generate an AWR Compare Periods report in an Oracle RAC environment:


@$ORACLE_HOME/rdbms/admin/awrgdrpt.sql


# To generate an AWR Compare Periods report on a specified database instance:


@$ORACLE_HOME/rdbms/admin/awrddrpi.sql


# To generate an AWR Compare Periods report on a specified database instance in an Oracle RAC environment:


@$ORACLE_HOME/rdbms/admin/awrgdrpi.sql


# To generate an ASH report:


@$ORACLE_HOME/rdbms/admin/ashrpt.sql


# To generate an ASH report on a specified database instance:


@$ORACLE_HOME/rdbms/admin/ashrpti.sql


# To generate an ASH report in an Oracle RAC environment:


@$ORACLE_HOME/rdbms/admin/ashrpti.sql
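
Each of these scripts prompts interactively for the report type, the snapshot range and the report name. If a non-interactive approach is preferred, the text AWR report can also be produced with the DBMS_WORKLOAD_REPOSITORY package. A minimal sketch, where the DBID and the snapshot IDs are placeholders that must be replaced with values from V$DATABASE and DBA_HIST_SNAPSHOT:


-- List available snapshots to pick a begin/end pair
SELECT snap_id, instance_number, begin_interval_time
  FROM dba_hist_snapshot
 ORDER BY snap_id;

-- Text AWR report between two snapshots on instance 1
-- (1234567890, 100 and 101 are placeholders for the DBID and the snapshot IDs)
SELECT output
  FROM TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_text(1234567890, 1, 100, 101));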



Job Failover In Oracle RAC

In RAC, if a node crashes, a job running on that node can automatically fail over to another node, provided the job is mapped to a service.

# Create a database service j_srvice
        preferred instance : orcl1
        available instance : orcl2


[oracle@rac1 ~]$ srvctl add service -s j_srvice -d orcl -r orcl1 -a orcl2

[oracle@rac1 ~]$ srvctl status service -d orcl
Service j_srvice is not running.


# Create tnsnames.ora entry for service j_srvice


j_service =
      (DESCRIPTION =
         (ADDRESS = (PROTOCOL = TCP)(HOST = rac-cluster.rac1.dba.com)(PORT = 1521))
         (LOAD_BALANCE = YES)
         (CONNECT_DATA =
           (SERVER = DEDICATED)
           (SERVICE_NAME = j_srvice)
         )
      )


# Start the service and check that it is running on orcl1 (the preferred instance)


[oracle@rac1 admin]$ srvctl start service -d orcl
PRCC-1014 : j_srvice was already running

[oracle@rac1 admin]$ srvctl status service -d orcl
Service j_srvice is running on instance(s) orcl1


# Create a job class JC that maps to the service j_srvice


SYS@orcl1>BEGIN
sys.dbms_scheduler.create_job_class(
service => 'j_srvice',
job_class_name => '"JC"');
END;
/

PL/SQL procedure successfully completed.

SYS@orcl1>SELECT JOB_CLASS_NAME FROM DBA_SCHEDULER_JOB_CLASSES where JOB_CLASS_NAME LIKE 'J%';

JOB_CLASS_NAME
------------------------------
JC
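
# Create the table test that the job will populate (this step is assumed, as it is not shown in the original transcript; the single DATE column dte matches the query used later to check the records)


SYS@orcl1>create table test (dte date);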


# Create a job, attached to job class JC, that inserts a row into table test every 5 seconds


SYS@orcl1>
BEGIN
DBMS_SCHEDULER.CREATE_JOB (
   job_name             => 'sys.my_job1',
   job_type             => 'PLSQL_BLOCK',
   job_action           => 'begin insert into test values (sysdate);commit;end;',
   start_date           =>  systimestamp,
   repeat_interval      => 'FREQ=SECONDLY;INTERVAL=5',
   end_date             => '31-MAR-17 06.55.00AM US/Pacific',
   job_class            => '"JC"',
   enabled              =>  TRUE);
END;
/
PL/SQL procedure successfully completed.


# Check for the records


SYS@orcl1>select to_char(dte, 'dd-mon-yyyy hh24:mi:ss') from test;

TO_CHAR(DTE,'DD-MON-YYYYHH24:
-----------------------------
31-mar-2017 06:37:02
31-mar-2017 06:37:07
31-mar-2017 06:37:12
31-mar-2017 06:37:17
31-mar-2017 06:37:22
31-mar-2017 06:37:27
......
14 rows selected.


# Fetch the job details


SYS@orcl1>select owner,job_name,state,job_class from dba_scheduler_jobs where job_name='MY_JOB1';

OWNER                          JOB_NAME                       STATE           JOB_CLASS
------------------------------ ------------------------------ --------------- ------------------------------
SYS                            MY_JOB1                        SCHEDULED       JC

SYS@orcl1>select distinct JOB_NAME, status , INSTANCE_ID from ALL_SCHEDULER_JOB_RUN_DETAILS where job_name ='MY_JOB1';

JOB_NAME                       STATUS                         INSTANCE_ID
------------------------------ ------------------------------ -----------
MY_JOB1                        SUCCEEDED                                1


# To simulate a crash of node rac1, kill the database instance running on rac1


[oracle@rac1 admin]$ ps -ef | grep pmon
oracle    4329     1  0 05:46 ?        00:00:00 asm_pmon_+ASM1
oracle    5161     1  0 05:48 ?        00:00:00 ora_pmon_orcl1
oracle    8518  5388  0 06:45 pts/1    00:00:00 grep pmon

[oracle@rac1 admin]$ kill -9 5161


# Check that the service and the job have failed over to instance orcl2


[oracle@rac1 admin]$ srvctl status service -d orcl
Service j_srvice is running on instance(s) orcl2

SYS@orcl1>select distinct JOB_NAME, status , INSTANCE_ID from ALL_SCHEDULER_JOB_RUN_DETAILS where job_name ='MY_JOB1';

JOB_NAME                       STATUS                         INSTANCE_ID
------------------------------ ------------------------------ -----------
MY_JOB1                        SUCCEEDED                                2
MY_JOB1                        SUCCEEDED                                1


# Verify that the job is still running


SYS@orcl1>select to_char(dte, 'dd-mon-yyyy hh24:mi:ss') from test;

31-mar-2017 06:47:12
31-mar-2017 06:47:17
31-mar-2017 06:47:22
31-mar-2017 06:47:27
31-mar-2017 06:47:32
31-mar-2017 06:47:37
31-mar-2017 06:47:42

128 rows selected.

SYS@orcl1>!date
Fri Mar 31 06:47:43 EDT 2017


This example clearly shows that a job can fail over to another node when a node fails, provided the job is mapped to a service.
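
Note that the service does not fail back automatically once the preferred instance is available again. Assuming instance orcl1 has been restarted, the service can be relocated back manually, for example:


[oracle@rac1 ~]$ srvctl relocate service -d orcl -s j_srvice -i orcl2 -t orcl1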


Wednesday, March 29, 2017

Local Archiving In Oracle RAC

In a cluster database, archived logs can be stored in:

1) a shared location, e.g. ASM, or

2) a local archive log destination for each instance.

If archived logs are stored locally and an instance has to perform recovery, it will need the logs from all the instances. Hence, it is recommended that a local archive log destination be created for each instance, with NFS read mount points to all the other instances. This is known as the local archiving with network file system (NFS) scheme.

During recovery, one instance may access the logs from any host without having to first copy them to the local destination.
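
For the two-node cluster used in this post, the full scheme would look like the layout below; the paths are the ones configured in the following steps, and the walkthrough only demonstrates the rac2-to-rac1 mount:


rac1 : /home/oracle/arch1  (local archive destination for orcl1)
       /home/oracle/arch2  (NFS mount of rac2:/home/oracle/arch2)
rac2 : /home/oracle/arch2  (local archive destination for orcl2)
       /home/oracle/arch1  (NFS mount of rac1:/home/oracle/arch1)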



SYS@orcl2>archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            +DATA
Oldest online log sequence     76
Next log sequence to archive   78
Current log sequence           78

SYS@orcl2>alter system set log_archive_dest_1 ='location=/home/oracle/arch1' scope=both sid='orcl1';

System altered.

SYS@orcl2>alter system set log_archive_dest_2='location=/home/oracle/arch2' scope=both sid='orcl2';

System altered.

SYS@orcl2>alter system switch logfile;

System altered.

SYS@orcl2>alter system switch logfile;

System altered.

[oracle@rac2 arch2]$ pwd
/home/oracle/arch2

[oracle@rac2 arch2]$ ls -ltr
total 2500
-rw-r----- 1 oracle oinstall 2545664 Mar 29 07:13 2_78_936514658.dbf
-rw-r----- 1 oracle oinstall    1536 Mar 29 07:13 2_79_936514658.dbf
-rw-r----- 1 oracle oinstall    4096 Mar 29 07:13 2_80_936514658.dbf

SYS@orcl1>alter system switch logfile;

System altered.

SYS@orcl1>/

System altered.

SYS@orcl1>!ls -ltr
total 11820
-rw-r----- 1 oracle oinstall 12051968 Mar 29 07:13 1_90_936514658.dbf
-rw-r----- 1 oracle oinstall    25088 Mar 29 07:15 1_91_936514658.dbf
-rw-r----- 1 oracle oinstall     3072 Mar 29 07:15 1_92_936514658.dbf

SYS@orcl1>alter tablespace arch_test offline immediate;

Tablespace altered.

SYS@orcl1>alter system checkpoint;

System altered.

SYS@orcl1>alter system switch logfile;

System altered.

SYS@orcl2>alter system switch logfile;

System altered.



If we try to bring the tablespace back online, recovery is required, and that recovery will fail if the archive locations are local and not shared, because node1 will also require the archived logs of node2.

Let's use the NFS scheme to perform the recovery successfully.

# Make the directory containing the archived logs on node2 shareable by exporting it over NFS


[root@rac2 ~]# vi /etc/exports
# add this line "/home/oracle/arch2 *(rw,sync)"

[root@rac2 ~]# cat /etc/exports
/home/oracle/arch2 *(rw,sync)


# Start the portmap and nfs services on node2
  

[root@rac2 ~]# service portmap restart
Stopping portmap:                                          [  OK  ]
Starting portmap:                                           [  OK  ]

[root@rac2 ~]# service nfs restart
Shutting down NFS mountd:                                  [FAILED]
Shutting down NFS daemon:                                 [FAILED]
Shutting down NFS quotas:                                   [FAILED]
Shutting down NFS services:                                 [FAILED]
Starting NFS services:                                          [  OK  ]
Starting NFS quotas:                                           [  OK  ]
Starting NFS daemon:                                         [  OK  ]
Starting NFS mountd:                                         [  OK  ]
  

# On node1, create the directory where the archive log directory from node2 will be mounted


[oracle@rac1 ~]$ mkdir /home/oracle/arch2

  
# As root on node1, start the portmap and nfs services, then mount the archive directory exported from node2


[root@rac1 ~]# service portmap restart
Stopping portmap:                                          [  OK  ]
Starting portmap:                                          [  OK  ]

[root@rac1 ~]#  service nfs restart
Shutting down NFS mountd:                                  [FAILED]
Shutting down NFS daemon:                                  [FAILED]
Shutting down NFS quotas:                                  [FAILED]
Shutting down NFS services:                                [FAILED]
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]

[root@rac1 ~]# mount rac2:/home/oracle/arch2 /home/oracle/arch2
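
This mount will not survive a reboot of node1. To make it persistent, an entry along the following lines could be added to /etc/fstab on node1 (the options shown are only illustrative):


# /etc/fstab on rac1 (illustrative)
rac2:/home/oracle/arch2   /home/oracle/arch2   nfs   defaults   0 0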

  
# Check that the archived logs from node2 are now visible on node1


[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd /home/oracle/arch2

[oracle@rac1 arch2]$ ls -ltr
total 2504
-rw-r----- 1 oracle oinstall 2545664 Mar 29 07:13 2_78_936514658.dbf
-rw-r----- 1 oracle oinstall    1536 Mar 29 07:13 2_79_936514658.dbf
-rw-r----- 1 oracle oinstall    4096 Mar 29 07:13 2_80_936514658.dbf
-rw-r----- 1 oracle oinstall    2048 Mar 29 07:15 2_81_936514658.dbf
  
  
# Try to bring the arch_test tablespace online: it needs media recovery


SYS@orcl1>alter tablespace arch_test online;
alter tablespace arch_test online
*
ERROR at line 1:
ORA-01113: file 8 needs media recovery
ORA-01110: data file 8: '+DATA/orcl/datafile/arch_test.434.939886351'


# Recover the arch_test tablespace from node1: it completes successfully because the archived logs from node2 are also available to node1


[oracle@rac1 ~]$ rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Wed Mar 29 07:42:42 2017
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
connected to target database: ORCL (DBID=1464295389)

RMAN> recover tablespace arch_test;

Starting recover at 29-MAR-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=25 instance=orcl1 device type=DISK
starting media recovery
archived log for thread 1 with sequence 93 is already on disk as file /home/oracle/arch1/1_93_936514658.dbf
archived log for thread 1 with sequence 94 is already on disk as file /home/oracle/arch1/1_94_936514658.dbf
archived log for thread 1 with sequence 95 is already on disk as file /home/oracle/arch1/1_95_936514658.dbf
archived log for thread 1 with sequence 96 is already on disk as file /home/oracle/arch1/1_96_936514658.dbf
archived log for thread 2 with sequence 82 is already on disk as file +DATA/orcl/archivelog/2017_03_29/thread_2_seq_82.435.939886431
archived log for thread 2 with sequence 83 is already on disk as file +DATA/orcl/archivelog/2017_03_29/thread_2_seq_83.436.939886463
archived log for thread 2 with sequence 84 is already on disk as file +DATA/orcl/archivelog/2017_03_29/thread_2_seq_84.437.939886463
archived log for thread 2 with sequence 85 is already on disk as file +DATA/orcl/archivelog/2017_03_29/thread_2_seq_85.438.939886465
archived log for thread 2 with sequence 86 is already on disk as file +DATA/orcl/archivelog/2017_03_29/thread_2_seq_86.439.939886465
archived log file name=/home/oracle/arch1/1_93_936514658.dbf thread=1 sequence=93
archived log file name=+DATA/orcl/archivelog/2017_03_29/thread_2_seq_82.435.939886431 thread=2 sequence=82
media recovery complete, elapsed time: 00:00:01
Finished recover at 29-MAR-17

SYS@orcl1>alter tablespace arch_test online;
Tablespace altered.