Sunday, April 30, 2017

PRCD-1229 : An attempt to access configuration of database xxxxxx was rejected

PRCD-1229 : An attempt to access configuration of database xxxxxx was rejected because its version differs from the program version. Instead run the program from /u01/app/oracle/product/

After a database upgrade from to, I was trying to modify the configuration of the database and encountered the error below :-

[oracle@poc~] srvctl modify database -d orcl -o /u01/app/oracle/product/

PRCD-1229 : An attempt to access configuration of database orcl was rejected because its version differs from the program version. Instead run the program from /u01/app/oracle/product/

After a manual upgrade or downgrade, we might see this error. It happens because the cluster configuration is still pointing to the old Oracle home while the database is running from the new one. To tackle such situations we have to update the cluster configuration using the command below; this applies to both upgrades and downgrades.

Solution :-
Since we are using Oracle Clusterware, we must upgrade the Oracle Clusterware keys for the database.

[oracle@poc~] cd /u01/app/oracle/product/

[oracle@poc~] srvctl upgrade database -db orcl -o /u01/app/oracle/product/

Same rule applies when you downgrade your database.

Note :-
The "srvctl upgrade database" command is considered part of the database upgrade process, hence the old ORACLE_HOME must still exist for this command to succeed.  If the old ORACLE_HOME has already been removed, then "srvctl upgrade database" will fail with an error similar to "PRKH-1013 : The oracle home <old_home> does not exist."  To work around this issue, perform the following steps:

1. Manually create a directory with the same name as the old ORACLE_HOME and create a dummy oracle executable.
% mkdir -p <old_home>/bin
% touch <old_home>/bin/oracle

2. Re-run the "srvctl upgrade database" command
% <new_home>/bin/srvctl upgrade database -d <name> -o <new_home>
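The workaround above can be scripted as follows. This is only a sketch: the /tmp path is a stand-in for the removed <old_home>, and the final srvctl command is shown commented because it needs a live clusterware stack.

```shell
# Recreate a minimal dummy ORACLE_HOME so "srvctl upgrade database"
# can find it. The path is a placeholder; substitute the real <old_home>.
OLD_HOME=/tmp/demo_old_oracle_home
mkdir -p "$OLD_HOME/bin"
touch "$OLD_HOME/bin/oracle"        # dummy oracle executable
chmod +x "$OLD_HOME/bin/oracle"
echo "dummy home ready: $OLD_HOME"
# Then re-run from the new home (requires a running clusterware stack):
# <new_home>/bin/srvctl upgrade database -d <name> -o <new_home>
```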

Friday, April 7, 2017

Converting a Non-RAC database to a RAC database

Make sure we have a core database ready to be converted to a RAC environment. We will use RMAN database duplication to move the files from the file-system-based core database to an ASM-enabled RAC environment. Once the move from file system to ASM is done, the conversion from a single-instance to a multi-instance database takes place: the database files reside on ASM shared storage and one instance starts on each cluster node. Finally, we make the multi-instance database cluster aware so that failover and load balancing can be achieved.


Single Instance Database Name :- orcl

RAC Database Name :- sorcl

Nodes :-

Perform Cloning and create single instance target database

In the very first step we'll perform a simple database clone. For this we have multiple options; suppose we use the RMAN duplicate command, e.g. duplicate target database to 'sorcl'. Below is a sample init file for my database:

$vi $ORACLE_HOME/dbs/initsorcl.ora
# Set the RAC database name
db_name = "sorcl"
# Set the location of the duplicate (clone) control file
control_files = '+DATA/dev/controlfile/current.288.822251901'
# Set the parameters below for the default location of data files
# Set the parameter below to create two members for each redo group
# Set the location as per your environment
# Set the parameter below as per your environment
diagnostic_dest = '/u01/app/oracle'
# Set the parameters below to the same values as on the production target
db_block_size = 8192
undo_management = AUTO
undo_retention = 10800
undo_tablespace = UNDOTBS1

After the database duplication completes successfully, the database is ready to be converted to a RAC environment, but it is still a single-instance database.

Create UNDO and REDO for second instance

In order to convert the single-instance database to a cluster database, we need one UNDO tablespace per instance and a minimum of two redo log groups per thread, since each instance generates its own redo.
Create a second thread of online redo logs so that instance 2 can be started, then enable thread 2. It is always recommended to spread redo log groups across disk groups for redundancy.

SYS@orcl>alter database add logfile thread 2 group 4('+DATA') size 50m reuse;

SYS@orcl>alter database add logfile thread 2 group 5('+DATA') size 50m reuse;

#Enable thread
SYS@orcl>alter database enable thread 2;

#Create UNDO tablespace for second instance
SYS@orcl>create undo tablespace UNDOTBS2 datafile '+DATA' size 500M;

Add Cluster Related parameters

The duplicated instance does not have any cluster-related parameters, so we need to add them in order to convert the single instance to RAC. Add the cluster-related parameters below to initsorcl.ora:

sorcl1.thread=1
sorcl2.thread=2
sorcl1.undo_tablespace='UNDOTBS1'
sorcl2.undo_tablespace='UNDOTBS2'
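For completeness, here is a sketch of the full cluster-related parameter block: in addition to the thread and undo settings, converting to RAC also requires cluster_database and per-instance instance_number settings (the values below follow this example's naming):

```
*.cluster_database=true
*.cluster_database_instances=2
sorcl1.instance_number=1
sorcl2.instance_number=2
sorcl1.thread=1
sorcl2.thread=2
sorcl1.undo_tablespace='UNDOTBS1'
sorcl2.undo_tablespace='UNDOTBS2'
```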

Copy the updated init file to node2 and rename the copies to match the instance names:

scp initsorcl.ora

on RAC1: $cp initsorcl.ora initsorcl1.ora

on RAC2: $cp initsorcl.ora initsorcl2.ora
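The copy-and-rename step can be sketched as below; a temp directory stands in for $ORACLE_HOME/dbs, and in the real setup the second copy is pushed to rac2 with scp:

```shell
# Stand-in for $ORACLE_HOME/dbs; in reality this runs on each node.
DBS=/tmp/demo_sorcl_dbs
mkdir -p "$DBS"
printf 'db_name="sorcl"\n' > "$DBS/initsorcl.ora"
# Instance-specific copies: initsorcl1.ora stays on rac1,
# initsorcl2.ora would be sent to rac2 via scp.
cp "$DBS/initsorcl.ora" "$DBS/initsorcl1.ora"
cp "$DBS/initsorcl.ora" "$DBS/initsorcl2.ora"
ls "$DBS"
```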

Stop Listener and Database

Since we are going to start the sorcl database as instances sorcl1 and sorcl2, let us stop the sorcl instance on node1 and the listener as well.

Stop the listener.

Shutdown the sorcl database.

Start first instance

export ORACLE_SID=sorcl1

#Create password file
$orapwd file=orapw$ORACLE_SID password=oracle

SYS@sorcl1>startup
SYS@sorcl1>select instance_name from v$instance;

Start Second Instance

#create directory structure for audit_file_dest in node2 as well

export ORACLE_SID=sorcl2

#create password file
$orapwd file=orapw$ORACLE_SID password=oracle

SYS@sorcl2>startup
SYS@sorcl2>select instance_name from v$instance;

Register the RAC database (sorcl) and instances (sorcl1 & sorcl2) with CRS

Now the sorcl database has two instances, one on each RAC node. In order to manage these instances and the database, let us make the sorcl database and its instances cluster aware.

export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1

srvctl add database -d sorcl -o /u01/app/oracle/product/11.2.0/db_1

srvctl add instance -d sorcl -i sorcl1 -n rac1

srvctl add instance -d sorcl -i sorcl2 -n rac2

crsctl stat res -t

Update SPFILE Location

Let us create an spfile from the pfile and place the spfile on shared storage, then use each node's pfile to point to the spfile.

[oracle@rac3 ~]$ export ORACLE_SID=sorcl1

[oracle@rac3 ~]$ sqlplus / as sysdba

SYS@sorcl1>create spfile='+DATA' from pfile;
File created.

# Keep only an spfile entry on each node, in initsorcl1.ora and initsorcl2.ora
spfile='+DATA/SORCL/PARAMETERFILE/spfile.491.940617585'

# Restart the database for the spfile change to take effect
srvctl stop database -d sorcl

srvctl modify database -d sorcl -p '+DATA/SORCL/PARAMETERFILE/spfile.491.940617585'

srvctl start database -d sorcl

[oracle@rac1 ~]$ srvctl status database -d sorcl
Instance sorcl2 is running on node rac1
Instance sorcl1 is running on node rac2

Check clusterware and Database Status

[oracle@rac1 ~]$ crsctl stat res -t
[oracle@rac1 ~]$ cluvfy comp crs -n all -verbose

Thursday, April 6, 2017

Add Instance Manually

In this post we'll see the method to add an instance manually to a RAC database.

Scenario :-

Total no. of nodes in the cluster : 3

Names of nodes                    : rac1, rac2, rac3

Name of RAC database              : orcl

Instances of orcl database        : orcl1, orcl2

Nodes hosting orcl instances      : rac1, rac2

Now I'll add another instance, orcl3, of the orcl database on rac3 manually.

The steps which need to be taken:-

1) Rename init file and password file on new node

Here I'm assuming that we have already copied the init file and password file from one of the existing nodes to the new node.

[oracle@rac3 ~]$ echo $ORACLE_HOME

[oracle@rac3 ~]$ cd $ORACLE_HOME/dbs
[oracle@rac3 dbs]$ pwd

[oracle@rac3 dbs]$ ls -ltr
total 16
-rw-r----- 1 oracle oinstall 1536 Apr  2 20:40 orapworcl1
-rw-r----- 1 oracle oinstall   35 Apr  2 20:40 initorcl1.ora
-rw-r--r-- 1 oracle oinstall 2851 Apr  2 20:40 init.ora
-rw-rw---- 1 oracle oinstall 1544 Apr  2 20:40 hc_orcl1.dat

[oracle@rac3 dbs]$ mv initorcl1.ora initorcl3.ora

[oracle@rac3 dbs]$ mv orapworcl1 orapworcl3

[oracle@rac3 dbs]$ echo "orcl3:$ORACLE_HOME:N" >> /etc/oratab
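The /etc/oratab line above follows the format sid:oracle_home:autostart_flag, where N leaves startup to the clusterware rather than dbstart. A sketch of the entry, writing to a temp file standing in for /etc/oratab (the real command appends with >>):

```shell
# Temp file standing in for /etc/oratab.
ORATAB=/tmp/demo_oratab
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
# Entry format: <sid>:<oracle_home>:<autostart Y|N>
echo "orcl3:$ORACLE_HOME:N" > "$ORATAB"
grep '^orcl3:' "$ORATAB"
```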

Now log in to one of the existing instances of the database to perform the steps below.

2) Create redo-log group for new instance and enable thread

SYS@orcl1>alter database add logfile thread 3 group 11 ('+DATA') size 100M, group 12 ('+DATA') size 100M, group 13 ('+DATA') size 100M;
Database altered.

SYS@orcl1>alter system set thread=3 scope=spfile sid='orcl3';
System altered.

SYS@orcl1>alter database enable thread 3;
Database altered.

3) Create undo tablespace

SYS@orcl1>create undo tablespace undotbs3 datafile '+DATA' size 200M;
Tablespace created.

SYS@orcl1>alter system set undo_tablespace=undotbs3 scope=spfile sid='orcl3';
System altered.

4) Set other parameters for the new instance

SYS@orcl1>alter system set instance_number=3 scope=spfile sid='orcl3';
System altered.

SYS@orcl1>alter system set cluster_database_instances=3 scope=spfile sid='*';
System altered.

5) Update Oracle Cluster Registry (OCR) for rac3

[oracle@rac3 dbs]$ db_env
[oracle@rac3 dbs]$ srvctl add instance -d orcl -i orcl3 -n rac3

[oracle@rac3]$ srvctl start instance -d orcl -i orcl3

[oracle@rac1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
Instance orcl3 is running on node rac3

[oracle@rac3 dbs]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2,orcl3
Disk Groups: DATA
Services: j_srvice
Database is administrator managed

6) Copy TNS entry on rac3

Copy the TNS entry for ORCL from one of the existing nodes to the new node


    (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))

7) Test that a remote connection can be made from rac3

[oracle@rac3 dbs]$ sqlplus sys@orcl as sysdba
Enter password:
Connected to:


I have mentioned these steps in "add node to existing cluster" as well; use the link "" for that.

Happy Learning !!