Monday, April 3, 2017

Add node to existing cluster

11g R2 RAC has greatly simplified the process of adding a node to an existing cluster, thanks to the introduction of SCAN and the GPnP profile. We only need to follow a few simple steps.

Current scenario:-


Host names :
 rac1.dba.com
 rac2.dba.com

Node to be added:
 rac3.dba.com


Steps :-

1. Prepare the new node with all the required OS setup, just as we did during the first RAC node installation (a minimal sketch follows the checklist below).


#Ready the server and network
      #Install all the required rpms
      #Create the oracle user and groups
      #Configure oracleasm
           # /etc/init.d/oracleasm configure -i
           # oracleasm init       -> no messages here is also fine
           # oracleasm status     -> check the output; the ASM driver filesystem should be mounted
           # oracleasm scandisks  -> pick up the shared disks already labelled from the existing nodes
           # oracleasm listdisks  -> the shared ASM disks should now be listed
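
A minimal sketch of the user, group and /etc/hosts preparation on rac3. This only illustrates the idea: the UIDs/GIDs, the dba group and the VIP/private names and addresses below are placeholders and must match what already exists on rac1 and rac2 (the two rac3 addresses shown are the ones cluvfy reports later in this post).

      # groups and oracle user - IDs must be identical to rac1/rac2 (values here are examples)
      # groupadd -g 1000 oinstall
      # groupadd -g 1001 dba
      # groupadd -g 1002 asmdba
      # useradd  -u 1100 -g oinstall -G dba,asmdba oracle
      # passwd oracle

      # /etc/hosts entries for the new node, to be added on all three nodes
      # (the -vip address is a placeholder; the -priv name is an assumed naming convention)
      192.168.1.17    rac3.dba.com       rac3
      192.168.1.18    rac3-vip.dba.com   rac3-vip
      192.168.0.102   rac3-priv.dba.com  rac3-priv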

  
2. Configure SSH manually among all three nodes

We need to manually configure SSH between all the nodes of the existing cluster and the new node so that they have passwordless connectivity. To configure SSH, perform the following steps on each node in the cluster.


          $ cd $HOME
          $ mkdir .ssh
          $ chmod 700 .ssh
          $ cd .ssh
          $ ssh-keygen -t rsa
          Accept the default location for the key file and press Enter twice to leave the passphrase empty (a passphrase would defeat passwordless login).
          $ ssh-keygen -t dsa
          Accept the default location for the key file and press Enter twice to leave the passphrase empty.
          $ cat *.pub >> authorized_keys.<nodeX>   (nodeX is the node name, used to differentiate the files later)

          Example: cat *.pub >> authorized_keys.rac3


Now perform the same steps on the other nodes in the cluster. When this has been done on all nodes, copy each authorized_keys.<nodeX> file to every node into $HOME/.ssh/ (a sample copy is shown below).


For example, with 3 nodes, after the copy each node's .ssh directory will contain 3 files named authorized_keys.<nodeX>.
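
The copy can be done with scp; password prompts are expected at this point because user equivalence is not yet in place. For example, from rac3 (run the equivalent commands from rac1 and rac2 for their own files):

          $ scp $HOME/.ssh/authorized_keys.rac3 rac1:.ssh/
          $ scp $HOME/.ssh/authorized_keys.rac3 rac2:.ssh/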

   Then on EACH node continue the configuration of SSH by doing the following:
          $ cd $HOME/.ssh
          $ cat authorized_keys.* >> authorized_keys
          $ chmod 600 authorized_keys


To test that everything is working correctly, execute the following command for each node:
$ ssh <hostnameX> date


 For example, in a 3-node environment:
          $ ssh node1 date
          $ ssh node2 date
          $ ssh node3 date

        
Repeat this for all three nodes from each node, including an ssh back to the node itself; nodeX is the hostname of the target node.

The first time, you will be asked to add the node to a file called 'known_hosts'; this is expected, so answer the question with 'yes'. Once everything is configured correctly, the date is returned and you are not prompted for a password.
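
A quick way to run the three checks from the current node is a small shell loop (assuming the host names rac1, rac2 and rac3 used in this scenario); run it as oracle on each of the three nodes:

          $ for h in rac1 rac2 rac3; do ssh $h date; done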

3) Verify that the new node can be part of the cluster by running the following command from one of the existing nodes.


[oracle@rac1 .ssh]$ cluvfy stage -post hwos -n rac3

Performing post-checks for hardware and operating system setup
Checking node reachability...
Node reachability check passed from node "rac1"
Checking user equivalence...
User equivalence check passed for user "oracle"
Checking node connectivity...
Checking hosts config file...
Verification of the hosts config file successful
Node connectivity passed for subnet "192.168.1.0" with node(s) rac3
TCP connectivity check passed for subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rac3
TCP connectivity check passed for subnet "192.168.0.0"
Interfaces found on subnet "192.168.1.0" that are likely candidates for a private interconnect are:
rac3 eth0:192.168.1.17
Interfaces found on subnet "192.168.0.0" that are likely candidates for a private interconnect are:
rac3 eth1:192.168.0.102
WARNING:
Could not find a suitable set of interfaces for VIPs
Node connectivity check passed
Check for multiple users with UID value 0 passed
Post-check for hardware and operating system setup was successful.


4) From an existing node, run ‘cluvfy’ to check inter-node compatibility:-


[oracle@rac1]$ cluvfy comp peer -refnode rac1 -n rac3 -orainv oinstall -osdba asmdba -verbose > a.txt

[oracle@rac1]$ vi a.txt   -> look for any failures or mismatches


The cluster verification utility, 'cluvfy', is again used to determine the new node's readiness. In this case, the new node is compared to an existing node to confirm compatibility and flag any conflicts.

5) Final Verification


[oracle@rac1]$ cluvfy stage -pre nodeadd -n rac3


6) Adding the Node


[root@rac1 ~]# xhost +
access control disabled, clients can connect from any host

[root@rac1 ~]# su - oracle

[oracle@rac1 ~]$ export DISPLAY=:0.0

[oracle@rac1 ~]$ grid_env
[oracle@rac1 ~]$ cd /u01/app/11.2.0/grid/oui/bin/

[oracle@rac1 ~]$ export IGNORE_PREADDNODE_CHECKS=Y   -> set this to avoid stopping on ignorable pre-add check errors

[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 5951 MB    Passed
Oracle Universal Installer, Version 11.2.0.1.0 Production
Copyright (C) 1999, 2009, Oracle. All rights reserved.


During execution, you will be asked to run the following scripts as root on the new node:


[root@rac3]# /u01/app/oraInventory/orainstRoot.sh

[root@rac3]# /u01/app/11.2.0/grid/root.sh   -> starts ohasd and all the other RAC-related processes on the new node


Check the cluster status with the commands below:


[oracle@rac3]$ crs_stat -t -v -c rac3
[oracle@rac3]$ crsctl stat res -t


7) Complete the node addition by verifying it


[oracle@rac1 bin]$ cluvfy stage -post nodeadd -n rac3 -verbose > post_node_verification.txt
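
A quick way to scan the report for problems (post_node_verification.txt is simply the file we redirected the output to above):

[oracle@rac1 bin]$ grep -iE "failed|error|warning" post_node_verification.txt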


We are done with adding the new node to the cluster. Let's extend the exercise by cloning the Oracle database binaries to the new node and adding one more database instance.

Clone the Oracle database binaries to the new node


[oracle@rac1]$ db_env

[oracle@rac1]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac1]$ cd $ORACLE_HOME/oui/bin/

[oracle@rac1 bin]$ ./addNode.sh -silent "CLUSTER_NEW_NODES={rac3}"


Satisfy the node instance dependencies on the new node


[root@rac3 ~]# su - oracle

[oracle@rac3 ~]$ db_env

[oracle@rac3 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/11.2.0/db_1

[oracle@rac3 ~]$ cd $ORACLE_HOME/dbs
[oracle@rac3 dbs]$ pwd
/u01/app/oracle/product/11.2.0/db_1/dbs

[oracle@rac3 dbs]$ ls -ltr
total 16
-rw-r----- 1 oracle oinstall 1536 Apr  2 20:40 orapworcl1
-rw-r----- 1 oracle oinstall   35 Apr  2 20:40 initorcl1.ora
-rw-r--r-- 1 oracle oinstall 2851 Apr  2 20:40 init.ora
-rw-rw---- 1 oracle oinstall 1544 Apr  2 20:40 hc_orcl1.dat

[oracle@rac3 dbs]$ mv initorcl1.ora initorcl3.ora

[oracle@rac3 dbs]$ mv orapworcl1 orapworcl3

[oracle@rac3 dbs]$ echo "orcl3:$ORACLE_HOME:N" >> /etc/oratab
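
The renamed init file should contain only a pointer to the shared spfile in ASM; the path below is the spfile location reported by 'srvctl config database' at the end of this post, so verify it matches your environment:

[oracle@rac3 dbs]$ cat initorcl3.ora
SPFILE='+DATA/orcl/spfileorcl.ora'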

From a node with an existing instance of 'orcl', issue the following commands to create the redo log thread, undo tablespace, and spfile entries needed for the new instance:

SYS@orcl1>alter database add logfile thread 3 group 11 ('+DATA') size 100M, group 12 ('+DATA') size 100M, group 13 ('+DATA') size 100M;
Database altered.

SYS@orcl1>alter system set thread=3 scope=spfile sid='orcl3';
System altered.

SYS@orcl1>alter database enable thread 3;
Database altered.

SYS@orcl1>create undo tablespace undotbs3 datafile '+DATA' size 200M;
Tablespace created.

SYS@orcl1>alter system set undo_tablespace=undotbs3 scope=spfile sid='orcl3';
System altered.

SYS@orcl1>alter system set instance_number=3 scope=spfile sid='orcl3';
System altered.

SYS@orcl1>alter system set cluster_database_instances=3 scope=spfile sid='*';
System altered.
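
Optionally, confirm that the new thread and its groups exist before the instance is started (a simple check against v$log, using the thread number created above):

SYS@orcl1>select thread#, group#, status from v$log where thread#=3;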


Update Oracle Cluster Registry (OCR) for rac3


[oracle@rac3 dbs]$ db_env

[oracle@rac3 dbs]$ srvctl add instance -d orcl -i orcl3 -n rac3

[oracle@rac3]$ srvctl start instance -d orcl -i orcl3

[oracle@rac1 ~]$ srvctl status database -d orcl
Instance orcl1 is running on node rac1
Instance orcl2 is running on node rac2
Instance orcl3 is running on node rac3
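
You can also confirm from SQL*Plus that all three instances are up (run this from any instance):

SYS@orcl1>select inst_id, instance_name, host_name, status from gv$instance;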

[oracle@rac3 dbs]$ srvctl config database -d orcl
Database unique name: orcl
Database name: orcl
Oracle home: /u01/app/oracle/product/11.2.0/db_1
Oracle user: oracle
Spfile: +DATA/orcl/spfileorcl.ora
Domain: dba.com
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: orcl
Database instances: orcl1,orcl2,orcl3
Disk Groups: DATA
Services: j_srvice
Database is administrator managed


Happy Learning !!
