In this blog I am going to cover the steps to add a node to an existing 2-node cluster. If you have followed my earlier blogs on the cluster installation, I will be using that same cluster and configuring another node into it. You can find the details on cloning the machine and attaching the shared disks in my earlier blog. In this blog I will add a node with the below configuration:
1. Hostname - rac18c03
2. IP - 192.168.10.13
3. VIP - 192.168.10.23
4. Private IPs - 192.168.20.13, 192.168.30.13
Step 1. Update DNS Entries on Node 1 (192.168.10.11)
Update the DNS entries in the DNS server running on Node 1.
File - /var/named/novalocal.zone
# Contents of the file can be found at the end of the blog
Restart the DNS server
systemctl restart named
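For reference, the zone file entries added for the new node are the following (the complete file is in the last section of the blog):
rac18c03.novalocal. IN A 192.168.10.13
rac18c03-vip.novalocal. IN A 192.168.10.23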
Step 2. Update /etc/hosts entries on all nodes
# Contents of the file can be found in the last section of the blog.
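The lines added for the new node are the following:
192.168.10.13 rac18c03.novalocal rac18c03
192.168.10.23 rac18c03-vip.novalocal rac18c03-vip
192.168.20.13 rac18c03-priv01.novalocal rac18c03-priv01
192.168.30.13 rac18c03-priv02.novalocal rac18c03-priv02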
Step 3. Configure the new node with the required OS configurations
(Skip the DNS server configuration part; the DNS server runs only on Node 1)
Step 4. Make sure Shared disks are attached
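One quick way to confirm the shared disks are visible on the new node (an illustrative check, not part of the original steps) is to list the block devices and compare against an existing node:
lsblk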
Step 5. Update /etc/resolv.conf
nameserver 192.168.10.11
options attempts:1
options timeout:1
search novalocal
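To confirm name resolution works through the DNS server on Node 1, you can test a few lookups from the new node (these checks are not part of the original steps), for example:
nslookup rac18c01
nslookup rac18c03-vip
nslookup rac-scan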
Step 6. Configure oracleasm and Discover Disks
[As root]
oracleasm configure -i
oracleasm init
oracleasm scandisks
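Once the scan completes, the discovered ASM disks can be listed (an optional check) with:
oracleasm listdisks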
Step 7. Create the required directories
[As root]
mkdir /u01
mkdir /u01/app
chown root:oinstall /u01 /u01/app
chmod 755 /u01 /u01/app
mkdir /u01/app/180
chown grid:oinstall /u01/app/180
chmod 755 /u01/app/180
mkdir /u01/app/grid
chown grid:oinstall /u01/app/grid
chmod 755 /u01/app/grid
mkdir /u01/app/oraInventory
chown grid:oinstall /u01/app/oraInventory
chmod 755 /u01/app/oraInventory
[As grid user]
mkdir -p /u01/app/180/grid
Step 8. Set up Passwordless SSH Connectivity
cd /u01/app/180/grid/deinstall
[As grid]
./sshUserSetup.sh -user grid -hosts "rac18c01 rac18c02 rac18c03" -noPromptPassphrase -confirm -advanced
[As oracle]
./sshUserSetup.sh -user oracle -hosts "rac18c01 rac18c02 rac18c03" -noPromptPassphrase -confirm -advanced
[As root]
./sshUserSetup.sh -user root -hosts "rac18c01 rac18c02 rac18c03" -noPromptPassphrase -confirm -advanced
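To verify that passwordless SSH works for each user, you can run a simple remote command from each node, for example:
ssh rac18c03 hostname
ssh rac18c01 hostname
ssh rac18c02 hostname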
Step 9. Run the Cluster Pre-Add Check
[As grid]
cd /u01/app/180/grid/bin
./cluvfy stage -pre nodeadd -flex -hub rac18c03 -verbose
Pre-check for node addition was successful.
CVU operation performed: stage -pre nodeadd
Date: Aug 17, 2018 5:09:31 AM
CVU home: /u01/app/180/grid/
User: grid
Step 10. Run Add Node Script
cd /u01/app/180/grid/addnode
[grid@rac18c01 addnode]$ ./addnode.sh -silent "CLUSTER_NEW_NODES={rac18c03}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac18c03-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
Prepare Configuration in progress.
Prepare Configuration successful.
.................................................. 7% Done.
Copy Files to Remote Nodes in progress.
.................................................. 12% Done.
.................................................. 17% Done.
..............................
Copy Files to Remote Nodes successful.
You can find the log of this install session at:
/u01/app/oraInventory/logs/addNodeActions2018-08-17_05-23-28AM.log
Instantiate files in progress.
Instantiate files successful.
.................................................. 49% Done.
Saving cluster inventory in progress.
.................................................. 83% Done.
Saving cluster inventory successful.
The Cluster Node Addition of /u01/app/180/grid was successful.
Please check '/u01/app/180/grid/inventory/silentInstall2018-08-17_05-23-28AM.log' for more details.
Setup Oracle Base in progress.
Setup Oracle Base successful.
.................................................. 90% Done.
Update Inventory in progress.
Update Inventory successful.
.................................................. 97% Done.
As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/180/grid/root.sh
Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[rac18c03]
Execute /u01/app/180/grid/root.sh on the following nodes:
[rac18c03]
The scripts can be executed in parallel on all the nodes.
Successfully Setup Software.
.................................................. 100% Done.
(Note: if you feel your session is stuck, look in the /u01/app/oraInventory/logs directory for the most recent log file with the addNode prefix; that will be the log file for your session.)
Step 11. Run Root Scripts
Complete the addition by running the root scripts on the new node (rac18c03).
[root@rac18c03 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac18c03 ~]# /u01/app/180/grid/root.sh
Check /u01/app/180/grid/install/root_rac18c03.novalocal_2018-08-17_05-32-32-456358325.log for the output of root script
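Once root.sh completes, you can optionally confirm the clusterware stack is up on the new node (this check is not part of the original output):
[root@rac18c03 ~]# /u01/app/180/grid/bin/crsctl check crs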
Verification
1. Verify Node roles
[root@rac18c01 ~]# crsctl get node role config -all
Node 'rac18c01' configured role is 'hub'
Node 'rac18c02' configured role is 'hub'
Node 'rac18c03' configured role is 'hub'
2. Verify Using Cluvfy
[grid@rac18c03 ~]$ /u01/app/180/grid/bin/cluvfy stage -post nodeadd -n rac18c03
Post-check for node addition was successful.
CVU operation performed: stage -post nodeadd
Date: Aug 17, 2018 5:57:32 AM
CVU home: /u01/app/180/grid/
User: grid
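3. Optionally, verify the new node is active in the cluster with olsnodes (an extra check beyond the original post):
[grid@rac18c01 ~]$ /u01/app/180/grid/bin/olsnodes -n -s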
New Hosts (/etc/hosts) file
192.168.10.11 rac18c01.novalocal rac18c01
192.168.10.12 rac18c02.novalocal rac18c02
192.168.10.13 rac18c03.novalocal rac18c03
192.168.10.21 rac18c01-vip.novalocal rac18c01-vip
192.168.10.22 rac18c02-vip.novalocal rac18c02-vip
192.168.10.23 rac18c03-vip.novalocal rac18c03-vip
192.168.20.11 rac18c01-priv01.novalocal rac18c01-priv01
192.168.20.12 rac18c02-priv01.novalocal rac18c02-priv01
192.168.20.13 rac18c03-priv01.novalocal rac18c03-priv01
192.168.30.11 rac18c01-priv02.novalocal rac18c01-priv02
192.168.30.12 rac18c02-priv02.novalocal rac18c02-priv02
192.168.30.13 rac18c03-priv02.novalocal rac18c03-priv02
New DNS Configuration File
$TTL 86400
@ IN SOA novalocal. novalocal. (
42 ; serial (d. adams)
3H ; refresh
15M ; retry
1W ; expiry
1D ) ; minimum
IN NS rac18c01.novalocal.
localhost IN A 127.0.0.1
rac18c01.novalocal. IN A 192.168.10.11
rac18c02.novalocal. IN A 192.168.10.12
rac18c03.novalocal. IN A 192.168.10.13
rac18c01-vip.novalocal. IN A 192.168.10.21
rac18c02-vip.novalocal. IN A 192.168.10.22
rac18c03-vip.novalocal. IN A 192.168.10.23
rac-scan.novalocal. IN A 192.168.10.31
rac-scan.novalocal. IN A 192.168.10.32
rac-scan.novalocal. IN A 192.168.10.33