
Saturday, March 4, 2017

Oracle Database - RAC - Cluster Node Addition - 12c Cluster (Leaf Node and RDBMS Home)

 

I am going to add a node to my current installation.
If you have followed the earlier blogs, I currently have a 2-node Flex Cluster configuration with node names rac1 and rac2.
I have a clone of the rac1 machine, taken during the installation (VirtualBox).

This is what we are going to do:


  1. Start the machine that was cloned earlier
  2. Change the hostname
  3. Change the IP addresses
  4. Run cluvfy to verify the node addition
  5. Run addnode.sh
  6. Run the root configuration scripts to complete the addition process



I will now be adding this new machine to the cluster.
Current configuration:
[root@rac2 ~]# olsnodes -t
rac1 Unpinned
rac2 Unpinned

[root@rac2 ~]# crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
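
Since this is a Flex Cluster, it is also worth confirming the cluster mode and the current node roles before adding anything. A quick sketch (the role command is the same one I use again at the end; the output shown is indicative):

[grid@rac1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode

[grid@rac1 ~]$ crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'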

Make the changes below and reboot (note: this machine is a clone of rac1).
For ease of understanding, this is my cluster configuration, with node rac3 now being added.
(The prompt below still shows rac1 because the clone has not yet been rebooted with its new hostname.)


[root@rac1 network-scripts]# grep IPADDR ifcfg*
ifcfg-enp0s10:IPADDR=192.168.10.3
ifcfg-enp0s3:IPADDR=10.10.10.3
ifcfg-enp0s8:IPADDR=192.168.0.3
ifcfg-enp0s9:IPADDR=192.168.1.3

[root@rac1 ~]# cat /etc/hostname
rac3.localdomain

[root@rac1 network-scripts]# cat /etc/resolv.conf 
# Generated by NetworkManager
search localdomain
domain localdomain
nameserver 10.10.10.1
options attempts:1
options timeout:1
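
On OL7 the hostname can also be changed with hostnamectl instead of editing /etc/hostname directly, and the interfaces need a network restart (or a reboot, as below) to pick up the new IPs. A minimal sketch, assuming an OL7/systemd host:

# as root on the cloned machine
hostnamectl set-hostname rac3.localdomain
systemctl restart network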

Reboot the node

Once node rac3 is up and the hostname/IP addresses are set up,
we configure passwordless SSH connectivity between the nodes for cloning.

[grid@rac1 deinstall]$ pwd
/u01/app/12.1.0.2/grid/deinstall
[grid@rac1 deinstall]$ ./sshUserSetup.sh -user grid -hosts "rac1 rac2 rac3" -noPromptPassphrase -confirm -advanced

[grid@rac1 deinstall]$ ./sshUserSetup.sh -user oracle -hosts "rac1 rac2 rac3" -noPromptPassphrase -confirm -advanced

The node is now ready to be cloned. I have set up passwordless SSH for the oracle user as well because I am going to clone the database home too.
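
A quick way to confirm that user equivalence actually works before running cluvfy (any remote command will do; the point is simply that no password prompt appears):

[grid@rac1 ~]$ ssh rac3 date
[oracle@rac1 ~]$ ssh rac3 date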

Cluster Verification Utility for pre-node-addition checks

The general syntax for cluvfy is below.

cluvfy stage -pre nodeadd -n <node_list> [-vip <vip_list>]|-flex [-hub <hub_list> [-vip <vip_list>]] [-leaf <leaf_list>]
[-fixup] [-fixupnoexec] [-method sudo -user <user_name> [-location <dir_path>]|-method root] [-verbose]
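
For comparison, a pre-check for adding rac3 as a regular hub node would use the first form of the syntax above (rac3-vip is simply the VIP name that would be assigned to it):

./cluvfy stage -pre nodeadd -n rac3 -vip rac3-vip -verbose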

Now, since this is a Flex Cluster and I am going to add a leaf node, the command is as follows.
(Note: the only cluvfy failures were memory-related, as shown in the truncated output below.)

[grid@rac1 bin]$ pwd
/u01/app/12.1.0.2/grid/bin
[grid@rac1 bin]$ ./cluvfy stage -pre nodeadd -flex -leaf rac3 -verbose

Performing pre-checks for node addition (Truncated output)

Checking node reachability...

Check: Node reachability from node "rac1"
  Destination Node                      Reachable?              
  ------------------------------------  ------------------------
  rac3                                  yes                     
Result: Node reachability check passed from node "rac1"

Check: Total memory
  Node Name     Available                 Required                  Status
  ------------  ------------------------  ------------------------  ----------
  rac1          1.9543GB (2049240.0KB)    4GB (4194304.0KB)         failed
  rac3          1.9543GB (2049240.0KB)    4GB (4194304.0KB)         failed
Result: Total memory check failed

GNS VIP resource configuration check passed.
GNS integrity check passed
Checking Flex Cluster node role configuration...
Flex Cluster node role configuration check passed

Pre-check for node addition was unsuccessful on all the nodes. 

Although the node addition pre-check was unsuccessful, the only failure was the total memory check.
I am going to ignore that pre-check and go ahead with adding the node.

[grid@rac1 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={rac3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac3-vip}" "CLUSTER_NEW_NODE_ROLES={leaf}" -ignorePrereq
Starting Oracle Universal Installer...

Checking Temp space: must be greater than 120 MB.   Actual 13056 MB    Passed
Checking swap space: must be greater than 150 MB.   Actual 1703 MB    Passed

Prepare Configuration in progress.

Prepare Configuration successful.
..................................................   8% Done.
You can find the log of this install session at:
 /u01/app/oraInventory/logs/addNodeActions2017-03-04_02-24-04PM.log

Instantiate files in progress.

Instantiate files successful.
..................................................   14% Done.

Copying files to node in progress.

Copying files to node successful.
..................................................   73% Done.

Saving cluster inventory in progress.
..................................................   80% Done.

Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/12.1.0.2/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

Setup Oracle Base in progress.

Setup Oracle Base successful.
..................................................   88% Done.

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/12.1.0.2/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes: 
[rac3]
Execute /u01/app/12.1.0.2/grid/root.sh on the following nodes: 
[rac3]

The scripts can be executed in parallel on all the nodes.

..........
Update Inventory in progress.

..................................................   100% Done.



Update Inventory successful.

Successfully Setup Software.

The final step is to execute the root scripts listed above to complete the cluster configuration.

[root@rac3 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac3 ~]# /u01/app/12.1.0.2/grid/root.sh
Check /u01/app/12.1.0.2/grid/install/root_rac3.localdomain_2017-03-04_14-29-18.log for the output of root script

Contents of the logfile

[grid@rac3 trace]$ cat /u01/app/12.1.0.2/grid/install/root_rac3.localdomain_2017-03-04_14-29-18.log 
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0.2/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/03/04 14:29:36 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.

2017/03/04 14:30:01 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.

2017/03/04 14:30:03 CLSRSC-363: User ignored prerequisites during installation

OLR initialization - successful
2017/03/04 14:30:44 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac3'
CRS-2672: Attempting to start 'ora.evmd' on 'rac3'
CRS-2676: Start of 'ora.mdnsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac3'
CRS-2676: Start of 'ora.gpnpd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac3'
CRS-2676: Start of 'ora.gipcd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac3'
CRS-2676: Start of 'ora.diskmon' on 'rac3' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac3'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac3'
CRS-2676: Start of 'ora.storage' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac3'
CRS-2676: Start of 'ora.crf' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac3'
CRS-2676: Start of 'ora.crsd' on 'rac3' succeeded
CRS-6017: Processing resource auto-start for servers: rac3
CRS-6016: Resource auto-start has completed for server rac3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/04 14:32:38 CLSRSC-343: Successfully started Oracle Clusterware stack

Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Preparing packages...
cvuqdisk-1.0.9-1.x86_64
2017/03/04 14:32:42 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded


Below is the final check with 2 hub nodes and 1 leaf node in the Flex Cluster 

[grid@rac3 bin]$ ./crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
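
olsnodes should now list the new node as active as well (output indicative):

[grid@rac3 bin]$ ./olsnodes -s -t
rac1    Active  Unpinned
rac2    Active  Unpinned
rac3    Active  Unpinned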

./cluvfy stage -post nodeadd -n rac3

Performing post-checks for node addition 

Checking node reachability...
Node reachability check passed from node "rac1"


Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "10.10.10.0"
Node connectivity passed for subnet "10.10.10.0" with node(s) rac1,rac3
TCP connectivity check passed for subnet "10.10.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed.

Node connectivity check passed


Checking cluster integrity...


Cluster integrity check passed


Checking CRS integrity...

CRS integrity check passed

Clusterware version consistency passed.

Checking shared resources...

Checking CRS home location...
"/u01/app/12.1.0.2/grid" is not shared
Shared resources check for node addition passed


Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity using interfaces on subnet "192.168.10.0"
Node connectivity passed for subnet "192.168.10.0" with node(s) rac3,rac1
TCP connectivity check passed for subnet "192.168.10.0"


Check: Node connectivity using interfaces on subnet "192.168.0.0"
Node connectivity passed for subnet "192.168.0.0" with node(s) rac3,rac1
TCP connectivity check passed for subnet "192.168.0.0"


Check: Node connectivity using interfaces on subnet "192.168.1.0"
Node connectivity passed for subnet "192.168.1.0" with node(s) rac1,rac3
TCP connectivity check passed for subnet "192.168.1.0"


Check: Node connectivity using interfaces on subnet "10.10.10.0"
Node connectivity passed for subnet "10.10.10.0" with node(s) rac1,rac3
TCP connectivity check passed for subnet "10.10.10.0"

Checking subnet mask consistency...
Subnet mask consistency check passed for subnet "10.10.10.0".
Subnet mask consistency check passed for subnet "192.168.0.0".
Subnet mask consistency check passed for subnet "192.168.1.0".
Subnet mask consistency check passed for subnet "192.168.10.0".
Subnet mask consistency check passed.

Node connectivity check passed

Checking multicast communication...

Checking subnet "192.168.0.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.0.0" for multicast communication with multicast group "224.0.0.251" passed.

Checking subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.1.0" for multicast communication with multicast group "224.0.0.251" passed.

Checking subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251"...
Check of subnet "192.168.10.0" for multicast communication with multicast group "224.0.0.251" passed.

Check of multicast communication passed.

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of ONS node application (optional)
ONS node application check passed


User "grid" is not part of "root" group. Check passed
Oracle Clusterware is installed on all nodes.
CTSS resource check passed
Query of CTSS for time offset passed

CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed


Oracle Cluster Time Synchronization Services check passed


Post-check for node addition was successful. 
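
With the post-check passing, the Grid Infrastructure side is complete. As a last sanity check, the cluster check from the beginning can be repeated; it should now report the stack on rac3 as well (leaf-node output may differ slightly from the hub nodes):

[grid@rac3 bin]$ ./crsctl check cluster -all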


This is the second part, where we clone the RDBMS home to node 3.
Once the GI home is cloned, the RDBMS home follows; the process is pretty similar and straightforward.

It is good practice to run a pre-check for the database install against the new node, as shown below.
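
A sketch of that pre-check, run with cluvfy from the grid home (exact options may vary by version):

[grid@rac1 bin]$ ./cluvfy stage -pre dbinst -n rac3 -verbose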


Finally, clone from the DB home:
[As oracle user]
/u01/app/oracle/product/12.1.0.2/dbhome_1/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={rac3}"

This clones the DB home on Node rac3
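
When the database-home addnode.sh completes, it prompts for a root script to be run on the new node as root (path as per my home layout above):

[root@rac3 ~]# /u01/app/oracle/product/12.1.0.2/dbhome_1/root.sh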
