
Wednesday, August 29, 2018

Oracle Database 18c: RAC Upgrade 12c to 18c (12.2.0.1 to 18.3) - Part 2/2

In this blog, I pick up where my last blog left off, with all the pre-requisites completed successfully.

Here we will perform the actual upgrade.

Connect as the grid user (make sure X11 forwarding is enabled and you have an X server running - I prefer MobaXterm).
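
Before launching the installer, a quick check that X forwarding is actually working (a minimal sanity check, assuming you connected with X11 forwarding enabled):

$ echo $DISPLAY

This should print something like localhost:10.0; if it is empty, the GUI will not start.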

cd /u01/app/180/grid
./gridSetup.sh

Step 1 - Select Upgrade 

Step 2 - Select the nodes - do not select "Skip unreachable nodes"

Step 3 - Include EM registration if you are using EM in your environment 

Step 4 - Select your Grid Base

Step 5 - We will run the root scripts manually

Step 6 - If all checks pass, you should reach the summary screen. Save the response file if you want, and click "Submit"


Before the next step: if you have any patches, you can apply them right away.
When you apply patches before the root script execution, your cluster comes up with the latest patch already in place.

Though in the case of 18c, the base release already ships with the latest RU of the quarter :)
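
As a sketch, applying a one-off patch to the new (not yet active) 18c home as grid would look something like the following - the staging path and patch number are placeholders for whatever patch you have downloaded and unzipped:

$ export ORACLE_HOME=/u01/app/180/grid
$ cd /tmp/<patch_number>
$ $ORACLE_HOME/OPatch/opatch apply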

Step 7 - Run the rootupgrade.sh script on each node, one node at a time.
Refer to the end of this blog for the full execution log.



Step 8 - Click OK after the rootupgrade.sh execution is complete on all nodes.





Step 9 - The upgrade is complete. Click Close and verify your cluster now.
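
A quick post-upgrade check could look like this (a sketch - the database name prbrm is the one in this cluster, so replace it with yours):

# /u01/app/180/grid/bin/crsctl check cluster -all
# /u01/app/180/grid/bin/crsctl query crs activeversion
# /u01/app/180/grid/bin/srvctl status database -d prbrm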




Note - if you had made any entries in /etc/oratab, correct them to point to the new home for ASM, APX and the MGMT DB respectively.
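
For example, the corrected entries would look along these lines (the SID names are illustrative - use whatever is already in your oratab):

+ASM1:/u01/app/180/grid:N
+APX1:/u01/app/180/grid:N
-MGMTDB:/u01/app/180/grid:N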



[root@prodrac01 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[grid@prodrac01 OPatch]$ pwd
/u01/app/180/grid/OPatch
[grid@prodrac01 OPatch]$ ./opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

rootupgrade.sh execution logs
Node 1 - prodrac01

[root@prodrac01 ~]# /u01/app/180/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/180/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/180/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/prodrac01/crsconfig/rootcrs_prodrac01_2018-08-29_05-05-11AM.log
2018/08/29 05:05:26 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/08/29 05:05:26 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:06:02 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:06:02 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/08/29 05:06:06 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/08/29 05:06:06 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/08/29 05:06:14 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minute
2018/08/29 05:07:48 CLSRSC-693: CRS entities validation completed successfully.
2018/08/29 05:07:53 CLSRSC-515: Starting OCR manual backup.
2018/08/29 05:08:02 CLSRSC-516: OCR manual backup successful.
2018/08/29 05:08:09 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/08/29 05:08:09 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2018/08/29 05:08:09 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2018/08/29 05:08:09 CLSRSC-615:
3. The last node to downgrade cannot be a Leaf node.
2018/08/29 05:08:15 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/08/29 05:08:15 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/08/29 05:08:16 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/08/29 05:08:29 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/08/29 05:08:37 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/08/29 05:08:37 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2018/08/29 05:10:01 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2018/08/29 05:10:01 CLSRSC-482: Running command: '/u01/app/12201/grid/bin/crsctl start rollingupgrade 18.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2018/08/29 05:10:06 CLSRSC-482: Running command: '/u01/app/180/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -tNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2018/08/29 05:10:15 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2018/08/29 05:10:19 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/08/29 05:10:55 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/08/29 05:10:58 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/08/29 05:10:59 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/08/29 05:11:09 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/08/29 05:11:09 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/08/29 05:11:16 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/08/29 05:11:22 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/08/29 05:11:22 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/08/29 05:12:00 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac01'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:12:44 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/08/29 05:12:50 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac01'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'prodrac01'
CRS-2672: Attempting to start 'ora.evmd' on 'prodrac01'
CRS-2676: Start of 'ora.mdnsd' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.evmd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'prodrac01'
CRS-2676: Start of 'ora.gpnpd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'prodrac01'
CRS-2676: Start of 'ora.gipcd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'prodrac01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'prodrac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'prodrac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'prodrac01'
CRS-2676: Start of 'ora.diskmon' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.crf' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'prodrac01'
CRS-2672: Attempting to start 'ora.ctssd' on 'prodrac01'
CRS-2676: Start of 'ora.ctssd' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac01'
CRS-2676: Start of 'ora.asm' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'prodrac01'
CRS-2676: Start of 'ora.storage' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'prodrac01'
CRS-2676: Start of 'ora.crsd' on 'prodrac01' succeeded
CRS-6017: Processing resource auto-start for servers: prodrac01
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02'
CRS-2673: Attempting to stop 'ora.prodrac01.vip' on 'prodrac02'
CRS-2672: Attempting to start 'ora.ons' on 'prodrac01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'prodrac02'
CRS-2677: Stop of 'ora.prodrac01.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.prodrac01.vip' on 'prodrac01'
CRS-2677: Stop of 'ora.scan1.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'prodrac01'
CRS-2676: Start of 'ora.prodrac01.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'prodrac01'
CRS-2676: Start of 'ora.scan1.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac01'
CRS-2676: Start of 'ora.ons' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.asm' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.RECO.dg' on 'prodrac01'
CRS-2676: Start of 'ora.RECO.dg' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'prodrac01'
CRS-2676: Start of 'ora.DATA.dg' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.prbrm.db' on 'prodrac01'
CRS-2676: Start of 'ora.prbrm.db' on 'prodrac01' succeeded
CRS-6016: Resource auto-start has completed for server prodrac01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:14:00 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/08/29 05:14:20 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2018/08/29 05:14:23 CLSRSC-474: Initiating upgrade of resource types
2018/08/29 05:14:58 CLSRSC-475: Upgrade of resource types successfully initiated.
2018/08/29 05:15:10 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/08/29 05:15:16 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@prodrac01 ~]#

Node 2 - prodrac02
[root@prodrac02 ~]# /u01/app/180/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/180/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/180/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/prodrac02/crsconfig/rootcrs_prodrac02_2018-08-29_05-21-05AM.log
2018/08/29 05:21:14 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/08/29 05:21:14 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:21:55 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:21:55 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/08/29 05:21:56 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/08/29 05:21:56 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/08/29 05:22:19 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/08/29 05:22:19 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/08/29 05:22:19 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/08/29 05:22:21 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/08/29 05:22:25 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/08/29 05:22:25 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
ASM configuration upgraded in local node successfully.
2018/08/29 05:22:37 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/08/29 05:23:08 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/08/29 05:23:25 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/08/29 05:23:25 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/08/29 05:23:30 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/08/29 05:23:30 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/08/29 05:23:33 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/08/29 05:23:34 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/08/29 05:23:34 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/08/29 05:24:05 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:24:46 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/08/29 05:24:48 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'prodrac02'
CRS-2672: Attempting to start 'ora.evmd' on 'prodrac02'
CRS-2676: Start of 'ora.mdnsd' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.evmd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'prodrac02'
CRS-2676: Start of 'ora.gpnpd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'prodrac02'
CRS-2676: Start of 'ora.gipcd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'prodrac02'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'prodrac02'
CRS-2676: Start of 'ora.cssdmonitor' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'prodrac02'
CRS-2672: Attempting to start 'ora.diskmon' on 'prodrac02'
CRS-2676: Start of 'ora.diskmon' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.crf' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.cssd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'prodrac02'
CRS-2672: Attempting to start 'ora.ctssd' on 'prodrac02'
CRS-2676: Start of 'ora.ctssd' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac02'
CRS-2676: Start of 'ora.asm' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'prodrac02'
CRS-2676: Start of 'ora.storage' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'prodrac02'
CRS-2676: Start of 'ora.crsd' on 'prodrac02' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: prodrac02
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01'
CRS-2673: Attempting to stop 'ora.prodrac02.vip' on 'prodrac01'
CRS-2672: Attempting to start 'ora.ons' on 'prodrac02'
CRS-2677: Stop of 'ora.prodrac02.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.prodrac02.vip' on 'prodrac02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'prodrac01'
CRS-2677: Stop of 'ora.scan1.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'prodrac02'
CRS-2676: Start of 'ora.prodrac02.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'prodrac02'
CRS-2676: Start of 'ora.scan1.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac02'
CRS-2676: Start of 'ora.ons' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.asm' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.RECO.dg' on 'prodrac02'
CRS-2676: Start of 'ora.RECO.dg' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'prodrac02'
CRS-2676: Start of 'ora.DATA.dg' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.prbrm.db' on 'prodrac02'
CRS-2676: Start of 'ora.prbrm.db' on 'prodrac02' succeeded
CRS-6016: Resource auto-start has completed for server prodrac02
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:26:27 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/08/29 05:27:01 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
Start upgrade invoked..
2018/08/29 05:27:06 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2018/08/29 05:27:06 CLSRSC-482: Running command: '/u01/app/180/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 18.0.0.0.0.
2018/08/29 05:28:13 CLSRSC-479: Successfully set Oracle Clusterware active version
2018/08/29 05:28:13 CLSRSC-476: Finishing upgrade of resource types
2018/08/29 05:28:14 CLSRSC-477: Successfully completed upgrade of resource types
2018/08/29 05:29:39 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/08/29 05:29:52 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@prodrac02 ~]#

Oracle Database 18c: RAC Upgrade 12c to 18c (12.2.0.1 to 18.3) - Part 1/2

In this two-part blog series, I am going to upgrade my 12c cluster, which has one database, to an 18c cluster.

If you want to jump to the next part, click here for Part 2.

If you want to see how to upgrade 12cR1 to 12cR2, you can check one of my previous blogs.
A few points from my last blog are repeated here for your information.

Here is a brief overview of my cluster.

1. Version
/u01/app/12201/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.2.0.1.0]

2. Disk Groups

[As grid]
/u01/app/12201/grid/bin/asmcmd lsdg
State Type Rebal Sector Logical_Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED NORMAL N 512 512 4096 4194304 307188 238296 102396 67950 0 Y DATA/
MOUNTED NORMAL N 512 512 4096 4194304 307188 304148 102396 100876 0 N RECO/

3. Databases 
/u01/app/12201/grid/bin/srvctl config database

PRBRM


4. Patch Versions
Grid
./opatch lspatches
28163235;ACFS JUL 2018 RELEASE UPDATE 12.2.0.1.180717 (28163235)
28163190;OCW JUL 2018 RELEASE UPDATE 12.2.0.1.180717 (28163190)
28163133;Database Jul 2018 Release Update : 12.2.0.1.180717 (28163133)
27144050;Tomcat Release Update 12.2.0.1.0(ID:171023.0830) (27144050)

26839277;DBWLM RELEASE UPDATE 12.2.0.1.0(ID:170913) (26839277)

Database
./opatch lspatches
27923353;OJVM RELEASE UPDATE: 12.2.0.1.180717 (27923353)
28163133;Database Jul 2018 Release Update : 12.2.0.1.180717 (28163133)



For my cluster upgrades, I always keep a backup of the cluster registry, plus a snapshot of the cluster resources, listeners and instances running on each node, for a final comparison afterwards (just in case).

If the configuration is correct, everything will come up automatically; however, a few things might not - for example, a listener running from the DB home that is not registered with Clusterware.

In general, below is my advice for an 18c upgrade.

1. Backup OCR (Local & Global, Logical too)
2. Take Snapshot of resources (just in case you want to compare)
3. Have 33 GB+ of free space in the disk group holding the OCR, as this is a mandatory requirement for the upgrade; otherwise you will get error INS-43100 asking for space
4. Have /etc/resolv.conf files matching across all nodes
5. Have the mandatory patches installed for the GI upgrade - I generally always patch before upgrades.
6. Have at least 15 GB of free space on your installation mount point.
7. And lastly, make sure Clusterware is up and running on all servers (there is an option in the installer to skip the upgrade on unreachable nodes, which I doubt you want)


8. Finally run the Upgrade

On Nodes 1 and 2 (and more, if any)
[As grid:]
mkdir -p /home/grid/org/Upgrade18cGI

On  Node 1
[As root:]
Set the environment to 12.2.0.1.x Grid Home / (+ASM1)
#ocrconfig  -export /home/grid/org/Upgrade18cGI/OCRLogicalBackupCluster.bak

#ocrconfig -showbackuploc
Note down your backup location here. (If you need to change it, use: #ocrconfig -backuploc <new location>)
#ocrconfig -manualbackup

#ocrconfig -local -export /home/grid/org/Upgrade18cGI/OCRLogicalBackup_Local.bak

[As root:]
Set the environment to 12.2.0.1.x Grid Home / (+ASM1)
#crsctl stat res -t > /home/grid/org/Upgrade18cGI/crsctl_stat_res_t.log

On Nodes 2, 3 and others (if any)
[As root:]
Set the environment to 12.2.0.1.x Grid Home / (+ASM2) (+ASM3)
#ocrconfig -local -export /home/grid/org/Upgrade18cGI/OCRLogicalBackup_Local.bak

On Each Node 
#ps -ef | grep pmon > /home/grid/org/Upgrade18cGI/pmon_snapshot.log

# ps -ef | grep tns > /home/grid/org/Upgrade18cGI/tns_snapshot.log
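
After the upgrade, the same snapshots can be taken again and compared - a minimal sketch (the *_post.log file names are just my suggestion):

# ps -ef | grep pmon > /home/grid/org/Upgrade18cGI/pmon_snapshot_post.log
# diff /home/grid/org/Upgrade18cGI/pmon_snapshot.log /home/grid/org/Upgrade18cGI/pmon_snapshot_post.log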


Create a blackout in OEM.
Disable any jobs / cron entries (better to stop the cron daemon entirely if the server runs nothing but Oracle).
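
On Oracle Linux 7 with systemd, stopping the cron daemon for the duration of the upgrade would look like this (a sketch - only do this if the server runs nothing but Oracle, and remember to start it again afterwards):

# systemctl stop crond
# systemctl start crond    (after the upgrade is complete)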

Create Grid Home Directory on each node 

[As root:]
#mkdir -p /u01/app/180/grid

Unzip Oracle Media on first node. 

[As root on Node 1]
cd /u01/app/180/grid
#unzip -qq LINUX.X64_180000_grid_home.zip 
[As root on all Nodes]
#chown -R grid:oinstall /u01/app/180/grid
(Note that 18c Grid Infrastructure uses an image-based installation - the software is simply unzipped into the Grid home.)
[As grid] - Run the Cluster Verification Utility in the pre-CRS-install stage

#cd /u01/app/180/grid
#./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome  /u01/app/12201/grid -dest_crshome /u01/app/180/grid -dest_version 18.3.0.0 -fixup -verbose  | tee /tmp/cluvfy_upgd.out

Pre-check for cluster services setup was successful.

CVU operation performed:      stage -pre crsinst
Date:                         Aug 29, 2018 4:48:11 AM
CVU home:                     /u01/app/180/grid/
User:                         grid

I generally redirect the output through the tee command so I can watch it as it comes in and also keep a copy.

Fix anything which is shown as failed.

Finally, unset any Oracle-related environment variables
# unset ORACLE_HOME ORACLE_SID ORACLE_BASE

Follow the next blog, which walks through the upgrade process with screenshots, starting from:
[grid@rac1 grid]$ ./gridSetup.sh

Tuesday, August 28, 2018

Oracle Cloud (OCI) - VM RAC Database Creation - Part 2/2

This blog continues from my previous blog on VM RAC DB creation.
There, we had created a VM RAC DB system and left it in the provisioning state.

In this blog we look into how to see details of the instances and access them. 

Wait for the system to move from the Provisioning state to Available - the wait was about an hour for me.
Step 1 - Click on "View DB System Details" to view details of the system. 


Step 2 - Note down the DB system details, such as the SCAN name, IP addresses, etc., for later use.



Step 3 - At the bottom left, click on Nodes to see the nodes of the DB system


Step 4 - Note down the Private IP address and DNS name for the instances which have been launched. 




You can see that, as in any RAC cluster, there are floating IPs (VIPs), SCAN addresses and public IPs.
There are also private interfaces, which we will look at shortly after logging on to the system.

To log in to the system, you need to provide the RSA key. I will use the Windows VM that was launched earlier in this series and log in from there. For details on logging in, see my previous blog.

We will use either of the private IP addresses that were assigned - 10.10.11.4 and 10.10.11.5 - to log in.

ssh -i id_rsa opc@10.10.11.4

[opc@rac1 ~]$ ps -ef | grep pmon
grid      2206     1  0 05:46 ?        00:00:00 asm_pmon_+ASM1
grid     11892     1  0 05:48 ?        00:00:00 apx_pmon_+APX1

oracle   50457     1  0 06:19 ?        00:00:00 ora_pmon_db181

sudo su - 
[root@rac1 ~]# /u01/app/18.0/grid/bin/olsnodes -t
rac1    Unpinned

rac2    Unpinned

[root@rac1 ~]# /u01/app/18.0/grid/bin/oifcfg getif
eth0  10.10.11.0  global  public
eth1  192.168.16.0  global  cluster_interconnect,asm


[root@rac1 ~]#  /u01/app/18.0/grid/bin/crsctl stat res -t  | less
[root@rac1 ~]# /u01/app/18.0/grid/bin/srvctl config database
db18_iad2rj
[root@rac1 ~]# /u01/app/18.0/grid/bin/srvctl status database -d db18_iad2rj
Instance db181 is running on node rac1
Instance db182 is running on node rac2

[root@rac1 ~]# /u01/app/18.0/grid/bin/srvctl config scan
SCAN name: rac-scan.privatesubnet1.dbvcn.oraclevcn.com, Network: 1
Subnet IPv4: 10.10.11.0/255.255.255.0/eth0, static
Subnet IPv6:
SCAN 1 IPv4 VIP: 10.10.11.8
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 2 IPv4 VIP: 10.10.11.9
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
SCAN 3 IPv4 VIP: 10.10.11.10
SCAN VIP is enabled.
SCAN VIP is individually enabled on nodes:
SCAN VIP is individually disabled on nodes:
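
From a client with the Oracle client software installed, a quick connectivity test through the SCAN could look like this (a sketch - the service name below is a placeholder, so check the actual service registered for the database first):

sqlplus system@//rac-scan.privatesubnet1.dbvcn.oraclevcn.com:1521/<service_name>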


ssh -i id_rsa opc@10.10.11.5

[opc@rac2 ~]$ ps -ef | grep pmon
oracle   12376     1  0 06:19 ?        00:00:00 ora_pmon_db182
opc      33405 33365  0 06:55 pts/0    00:00:00 grep pmon
grid     79123     1  0 05:45 ?        00:00:00 asm_pmon_+ASM2
grid     93356     1  0 05:48 ?        00:00:00 apx_pmon_+APX2



Oracle Cloud (OCI) - VM RAC Database Creation - Part 1/2

In this blog I talk about VM RAC Database Creation. 
Go to DB system Launch Page and click on Launch DB system. 

In this part we are going to launch a VM.Standard2.2, two-node cluster.
This is in continuation with my blog series of manual launch of OCI services.
You can track the complete series and my other blogs in my KB Blog.

Step 1 - Click on Launch DB system and provide the details.
In case you are using a trial account, you will have to raise an SR with Oracle to increase your service limits to 4 OCPUs, to allow the launch of a 2 OCPU x 2 node cluster.

Keep the shape size small because of the service limits imposed by OCI.



Step 2 - Name your cluster anything you want

Step 3 - Provide further details. 


Step 4 - Provide the DB password and finally click on Launch




Step 5 - The DB system will go into the Provisioning state - it can take 2-4 hours for the service to be launched and for the servers to reach the Available state.

In the next blog, I discuss how to access the servers and view the DB system details.

Friday, August 24, 2018

Oracle Cloud (OCI) - Part 8 - DB Instance Verification and Access

In this blog we are going to access the newly created DB VM.

In order to do so, SSH to the server:

ssh -i .ssh/id_rsa opc@10.10.11.3

where 10.10.11.3 is the private IP of the machine. You can carry out your normal Linux operations from here.

The opc user has sudo access.

Verify Running processes. 

[opc@ocdb ~]$ sudo su -
[root@ocdb ~]# su - oracle
[oracle@ocdb ~]$ ps -ef | grep pmon
oracle   25963 25883  0 08:24 pts/0    00:00:00 grep pmon
grid     70954     1  0 07:16 ?        00:00:00 asm_pmon_+ASM1
oracle   85219     1  0 07:39 ?        00:00:00 ora_pmon_db12

grid     86676     1  0 07:18 ?        00:00:00 apx_pmon_+APX1


Verify connectivity using SQL Developer.
Connect as sysdba and make sure to enter the details as provided during the BM / VM creation.
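
If you prefer the command line over SQL Developer, an equivalent test from a machine with the Oracle client installed would be something like this (the service name is a placeholder - use the values shown during creation):

sqlplus sys@//10.10.11.3:1521/<service_name> as sysdba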


Oracle Cloud (OCI) - Part 7 - Database VM Creation

This is in continuation with the last blog on Windows system creation and access.

In this blog we will create a Database VM. 

Step 1 - Click on "Bare Metal, VM, and Exadata"


Step 2 - Click on "Launch DB System"


Step 3 - Provide the DB system information. Select the AD and the subnet with the private IP addresses.


Step 4 - Provide details of storage, licensing and network. 


Step 5 - Provide hostname prefix, DB name and other details.




Step 6 - Click on Launch DB system.

Step 7 - Click on Launch DB system.
 Step 8 - Wait till the status turns green. 




Oracle Cloud (OCI) - Part 6 - Configure Windows System

In this blog, I am going to access the newly created Windows system.


Step 1 - Open mstsc (Remote Desktop) and enter the public IP found on the last blog's detail page.

Note down the public IP and the initial password; you must change the password after the first login.



Step 2 - Verify IP address using cmd
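
For example, from a Command Prompt on the instance:

C:\> ipconfig

The IPv4 address shown is the instance's private address; the public IP is handled by OCI's NAT layer, so it will not appear here.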

Step 3 - Download Firefox, MobaXterm and PuTTYgen.
(The technical issue I hit while downloading Firefox, and its resolution, is covered separately.)


Oracle Cloud (OCI) - Part 5 - Create Compute Instance (Windows)

This blog is in continuation of my last blog on adding route rules and security list (ACL) entries.

In this blog we create a Windows instance on the public network; this instance will be used to access the database VM that we create in the later blogs.


Step 1 - Click on Compute --> Instances

Step 2 - Click on "Create Instance" 

Step 3 - Provide the details of the instance (Windows Server)



Step 4 - Choose the public subnet, assign a public IP address, select the security list and finally click on Create Instance.

 Step 5 - Provisioning Screen appears


Step 6 - Wait till the status becomes Available, then note the password and public IP from the highlighted boxes in the screenshot.




Oracle Cloud (OCI) - Part 4 - Create Route Table and ACL Manually

This blog is in continuation with my last blog on the creation of the IGW.

We continue our journey of creating and configuring the VCN. In this blog we update the route table so that machines in the public subnet can access the internet.

Step 1 - Click on Route Tables on the Left

Step 2 - Click Edit Route Table


Step 3 - Create a new route rule for machines to access the internet via the IGW, as shown below.
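
The rule is typically: Destination CIDR 0.0.0.0/0, Target Type Internet Gateway, Target the IGW created in the previous blog - this routes all non-local traffic out through the gateway.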

Step 4 - Click on "Security Lists"

Step 5 - Click Edit and add a new ingress rule, as shown below, for RDP access.
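
A typical ingress rule for RDP would be: Source CIDR 0.0.0.0/0 (or, better, your own IP range), IP Protocol TCP, Destination Port Range 3389.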


Step 6 - In the next blog we discuss how to create a Windows compute instance.

Oracle Cloud (OCI) - Part 3 - Creating Internet Gateway (IGW) Manually

This is in continuation with the last blog on creating a subnet manually.
In this blog we continue and create an Internet Gateway for internet access.

Step 1 - Click on "Internet Gateways" on the left.

Step 2 - Click on "Create Internet Gateway"


Step 3 - Provide a name for the Internet Gateway



Step 4 - Click Create Internet Gateway and wait for completion. 


Step 5 - In the next blog we create the route table for the newly added gateway.