
Wednesday, February 27, 2019

Oracle Database 19c : GI Upgrade (from 18c) - 2 Nodes Cluster - Part 1/2

In this blog I am going to upgrade my Grid Infrastructure (GI) from 18c to 19c on my 3-node cluster running on OEL 7.3.
(You can download the latest binaries from Oracle eDelivery.)

If you want to jump directly to Part 2, click here.

Here is a brief overview of my cluster:


1. Version 
/u01/app/180/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

2. Disk Groups

[As grid]$ /u01/app/180/grid/bin/asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304    368628   360856           122876          118990              0             N  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304    307188   253828           102396           75716              0             Y  OCR_VOTE

3. Databases
$ /u01/app/180/grid/bin/srvctl config database
orpl

4. Patches (Grid and Database)
$ /u01/app/180/grid/OPatch/opatch lspatches
28656071;OCW RELEASE UPDATE 18.4.0.0.0 (28656071)
28655963;DBWLM RELEASE UPDATE 18.4.0.0.0 (28655963)
28655916;ACFS RELEASE UPDATE 18.4.0.0.0 (28655916)
28655784;Database Release Update : 18.4.0.0.181016 (28655784)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

$ /u01/app/oracle/product/180/db/OPatch/opatch lspatches
28502229;OJVM RELEASE UPDATE: 18.4.0.0.181016 (28502229)
28656071;OCW RELEASE UPDATE 18.4.0.0.0 (28656071)
28655784;Database Release Update : 18.4.0.0.181016 (28655784)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171


Before any cluster upgrade, I always keep a backup of the cluster registry, plus a snapshot of the cluster resources, listeners and instances running on each node, for a final comparison afterwards (just in case).

If the configuration is correct, everything will come up automatically; however, a few things might not, for example a listener running from the DB home that is not registered with Clusterware.
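To spot such a listener up front, one approach (a sketch; the helper function and file names are mine) is to compare the listener processes actually running against the listeners Clusterware manages, each saved to a file beforehand:

```shell
# Sketch: report listeners that are running but not managed by Clusterware.
# running.txt : saved output of  ps -ef | grep tns
# managed.txt : listener names from  srvctl config listener  (one per line)
unmanaged_listeners() {
  running=$1; managed=$2
  # pull the listener name (the argument after the tnslsnr binary)
  awk '{for (i=1;i<=NF;i++) if ($i ~ /tnslsnr$/) print $(i+1)}' "$running" \
    | sort -u > /tmp/_run.$$
  sort -u "$managed" > /tmp/_mgd.$$
  # names running but absent from the Clusterware-managed list
  comm -23 /tmp/_run.$$ /tmp/_mgd.$$
  rm -f /tmp/_run.$$ /tmp/_mgd.$$
}
```

Anything this prints has to be stopped and started by hand (or registered) around the upgrade.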

In general, below is my advice for an 18c to 19c upgrade.

1. Back up the OCR (local and global, logical too).
2. Take a snapshot of resources (in case you want to compare afterwards).
3. Have 33 GB+ free space in the disk group holding the OCR; this is a mandatory requirement for the upgrade, otherwise you will get error INS-43100 asking for space.
4. Have /etc/resolv.conf matching across all nodes.
5. Have the mandatory patch installed for the GI upgrade; I always patch before upgrades.
6. Have at least 15 GB free space on your installation mount point.
7. Make sure Clusterware is up and running on all servers (the installer has an option to skip the upgrade on unreachable nodes, which I doubt you want).
8. Ensure all packages are up to date as listed here (I had issues with kmod and kmod-libs).
9. Install the mandatory patch 28553832 before proceeding.
Steps to install are as follows:
[As root - on each node, rolling]
$ unzip -qq /tmp/p28553832_184000OCWRU_Linux-x86-64.zip
$ export ORACLE_HOME=/u01/app/180/grid/

$ export PATH=$PATH:$ORACLE_HOME/OPatch
$ opatchauto apply /tmp/install/28553832/
Note - It might take 30-40 minutes, depending on how fast your system is, to apply this patch on one node.

10. Finally run the Upgrade Steps
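Two of these checks, the disk-group free space in step 3 and the mandatory patch in step 9, are worth scripting so they are not eyeballed. A sketch, parsing `asmcmd lsdg` and `opatch lspatches` output saved to files (the function and file names are mine):

```shell
# Sketch: verify the two prerequisites that most often fail the upgrade.
# lsdg.txt      : saved output of  asmcmd lsdg
# lspatches.txt : saved output of  opatch lspatches
ocr_dg_free_ok() {
  # In lsdg output, column 9 is Free_MB and the second-to-last column is
  # Voting_files (Y/N); the upgrade needs >= 33 GB free in that disk group.
  awk '$1=="MOUNTED" && $(NF-1)=="Y" { found=1; if ($9 < 33*1024) bad=1 }
       END { exit (found && !bad) ? 0 : 1 }' "$1"
}
patch_applied() {
  # lspatches lines start with "<patch_id>;", e.g. "28553832;OCW ..."
  grep -q "^$2;" "$1"
}
```

Usage would be `ocr_dg_free_ok lsdg.txt && patch_applied lspatches.txt 28553832` on each node before touching the installer.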

On Nodes 1, 2 and 3 (and any others)
[As grid]
$ mkdir -p /home/grid/org/Upgrade19cGI

On Node 1
[As root:]
Set the environment to the 18c Grid Home (+ASM1).

$ ocrconfig -export /home/grid/org/Upgrade19cGI/OCRLogicalBackupCluster.bak

$ ocrconfig -showbackuploc
Note down your backup location here (it can be changed with "ocrconfig -backuploc <location>" if needed).
$ ocrconfig -manualbackup


$ ocrconfig -local -export /home/grid/org/Upgrade19cGI/OCRLogicalBackup_Local.bak


[As root:]
Set the environment to 18c Grid Home / (+ASM1)
$ crsctl stat res -t > /home/grid/org/Upgrade19cGI/crsctl_stat_res_t.log

On Nodes 2, 3 and any others
[As root:]
Set the environment to the 18c Grid Home (+ASM2, +ASM3).
$ ocrconfig -local -export /home/grid/org/Upgrade19cGI/OCRLogicalBackup_Local.bak

On Each Node 
$ ps -ef | grep pmon > /home/grid/org/Upgrade19cGI/pmon_snapshot.log

$  ps -ef | grep tns > /home/grid/org/Upgrade19cGI/tns_snapshot.log
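After the upgrade, the same snapshots can be retaken and diffed against the files captured above. A sketch (the function name is mine; snapshot files are assumed to hold one process name per line):

```shell
# Sketch: report anything that was running before the upgrade but is
# not running afterwards, so it can be started or investigated.
missing_after_upgrade() {
  before=$1; after=$2
  sort -u "$before" > /tmp/_b.$$
  sort -u "$after"  > /tmp/_a.$$
  # lines present only in the "before" snapshot
  comm -23 /tmp/_b.$$ /tmp/_a.$$
  rm -f /tmp/_b.$$ /tmp/_a.$$
}
```

An empty result means every instance and listener came back.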

Create a blackout in OEM.
Disable any jobs/cron (better, disable the cron daemon entirely if the server runs only Oracle).

Create the Grid Home directory on each node.


[As root on All Nodes]

$ mkdir -p /u01/app/190/grid



Unzip the Oracle media on the first node.


[As root on Node 1]
$ cd /u01/app/190/grid
$ unzip -qq V981627-01.zip
[As root on all nodes]
$ chown -R grid:oinstall /u01/app/190/grid
(Note that 19c is an image-based installation: the media is unzipped directly into the Grid Home.)
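Since the unzip is done as root, it is worth confirming the chown left nothing behind; a stray root-owned file under the new Grid Home can trip up the grid-user install. A small sketch (the function name is mine):

```shell
# Sketch: print anything under a directory tree that is NOT owned by
# the given user; empty output means the chown was complete.
not_owned_by() {
  user=$1; dir=$2
  find "$dir" ! -user "$user" -print
}
```

For example, `not_owned_by grid /u01/app/190/grid` run on each node should print nothing.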


[As grid] - Run cluster verification in the pre-CRS install stage

$ cd /u01/app/190/grid
$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome  /u01/app/180/grid -dest_crshome /u01/app/190/grid -dest_version 19.2.0.0 -fixup -verbose  | tee /tmp/cluvfy_upgd.out

Pre-check for cluster services setup was successful.
Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.


CVU operation performed:      stage -pre crsinst
Date:                         Feb 27, 2019 7:20:38 AM
CVU home:                     /u01/app/190/grid/
User:                         grid



I generally redirect the output through the tee command so I can watch it as it comes in and also keep a copy.

Fix anything that is shown as FAILED.
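Since the cluvfy output is long, a quick filter over the saved log helps make sure no finding gets overlooked. A sketch (the function name is mine; the log path is the one used with tee above):

```shell
# Sketch: pull FAILED checks and PRVG-/PRVF- findings out of a saved
# cluvfy log, e.g.  cluvfy_findings /tmp/cluvfy_upgd.out
cluvfy_findings() {
  grep -E 'FAILED|PRVG-|PRVF-' "$1"
}
```

Informational PRVG messages (like the root-only RPM database check above) can be reviewed and ignored; FAILED lines must be fixed before proceeding.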

Finally, unset any Oracle-related environment variables:
$ unset ORACLE_HOME ORACLE_SID ORACLE_BASE
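To double-check the shell really is clean before launching the installer, one option (a sketch; the function name is mine) is:

```shell
# Sketch: fail (and print the offenders) if any ORACLE_* variable is
# still set in the current environment.
oracle_env_clean() {
  leftovers=$(env | grep '^ORACLE_')
  [ -z "$leftovers" ] || { echo "$leftovers"; return 1; }
}
```

Run `oracle_env_clean` right before gridSetup.sh; if it prints anything, unset those variables first.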

Follow the next blog, which provides screenshots of the upgrade process.
Make sure you log in as the grid user directly and do not use "su", otherwise the installer UI will not work.

$ ./gridSetup.sh
