
Showing posts with label Upgrade.

Thursday, June 13, 2019

Oracle Database - 19c Important Documentation and Patch Information

In this blog I discuss the important documentation for Oracle Database 19c and list the key links you will need when working with this release.


This list is quite handy for 19c Database Administrators when planning a new installation or an upgrade, or when exploring the new features of 19c.

The patch information entry at the end of the list (the patch download reference) is a quick link to find and download the patches for this release.


  1. Oracle Database 19c Documentation 
  2. Oracle Database 19c tutorials
  3. Oracle 19c - Complete Checklist for Manual Upgrades to Non-CDB Oracle Database 19c - Doc ID 2539778.1
  4. Oracle 19c - Complete Checklist for upgrading Oracle 12c, 18c Container Database (CDB) to Oracle 19c Release using DBUA - Doc ID 2543981.1
  5. Oracle 19c - Complete Checklist for Upgrading to Oracle Database 19c (19.x) using DBUA - Doc ID 2545064.1
  6. 18c & 19c Physical Standby Switchover Best Practices using SQL*Plus -  Doc ID 2485237.1
  7. DBCA Silent Mode New features in Database 19C - Doc ID 2477805.1
  8. Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases - Doc ID 2118136.2
  9. Desupport of Oracle Real Application Clusters (RAC) with Oracle Database Standard Edition 19c - Doc ID 2504078.1



Friday, April 5, 2019

Oracle Database: Archive Log Repository

The archive log repository is one of the seldom-used and little-known features of Oracle Database.

An Oracle Database can be configured to send archive logs to a remote destination without a full database being present at that site.

To explain, at the repository site:
1. A database instance is running
2. A control file is present
3. The control file must be a standby control file (ensure this)
4. The database is in MOUNT state
5. No datafiles are present

If you configure your source database as you would for a Data Guard configuration, your primary will start sending archive logs to the archive log repository site.
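As a rough sketch (the service name, DB_UNIQUE_NAME values and destination number below are hypothetical - adapt them to your environment), the primary gets a remote archive destination pointing at the repository instance, and the repository site simply mounts its standby control file:

[On the primary, as sysdba]
SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prim,arcrepo)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=arcrepo ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=arcrepo' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_state_2=ENABLE SCOPE=BOTH;

[On the repository site, as sysdba - instance with a standby control file and no datafiles]
SQL> STARTUP MOUNT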

What are the use cases?
1. Backup of archive logs at a remote site
2. The remote site can be used for tape backups, etc.
3. During a Data Guard setup - the time spent backing up and transferring the archive logs can be saved by shipping the logs through the archive log repository.
So when you have your database restored, the archives are already there.

For More Information : https://docs.oracle.com/cd/E11882_01/server.112/e41134/log_transport.htm#SBYDB4745

Friday, March 15, 2019

Oracle Database 11g/18c: Installing 11gR2 on 18c (Things to Keep in Mind)

In this blog I list the key things to keep in mind when installing 11gR2 on an 18c cluster.
This is a useful scenario when you are planning to migrate and upgrade your infrastructure.


1. Keep note of all the RPMs and create a superset of the 18c and 11g lists.
elfutils-libelf-devel.x86_64 is the RPM that is missing from the 18c install.
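For example, you can quickly check whether the package is present and install it if it is missing (assuming your yum repositories are configured):

$ rpm -q elfutils-libelf-devel
[As root]
$ yum install -y elfutils-libelf-devel.x86_64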

2. Prefer ASMLib for the installation.
For RHEL, the kmod-oracleasm package is available from RHN or on the product DVD.
For the other libraries use https://www.oracle.com/technetwork/server-storage/linux/asmlib/rhel7-2773795.html
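Once the packages are in place, the usual ASMLib flow looks like this (a sketch only - the device path and disk name are examples, not from this install):

[As root]
$ /usr/sbin/oracleasm configure -i
$ /usr/sbin/oracleasm init
$ /usr/sbin/oracleasm createdisk DATA01 /dev/sdb1
$ /usr/sbin/oracleasm listdisks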

3. If you want to share a file system with your previous servers, remember to create the users/groups with the same UID/GID.
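A minimal sketch (the numeric IDs below are examples only - take the real values from the output of the id command on your existing servers):

[On an existing server]
$ id oracle

[As root on the new server - reuse the same numeric IDs]
$ groupadd -g 54321 oinstall
$ groupadd -g 54322 dba
$ useradd -u 54321 -g oinstall -G dba oracle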

4. In the Grid Installation 


5. Before installing the DB binaries, review the documents below -
  • RAC RDBMS Installation fails with Error:"PRVF-4037 : CRS is not installed on any of the nodes" (Doc ID 2315020.1)
  • Error PRVF-4037 On Install Of Oracle Database 11.2 Binaries With 12.2.0.1 Grid Infrastructure (Doc ID 2302700.1)
  • PRVF-4037 : CRS is not installed on any of the nodes (Doc ID 1316815.1)
  • error in invoking target 'agent nmhs' of make file ins_emagent.mk while installing Oracle 11.2.0.4 on Linux (Doc ID 2299494.1)     
  • Linux:CVU NTP Prerequisite check fails with PRVF-7590, PRVG-1024 and PRVF-5415 (Doc ID 2126223.1)          

And, as always, refer to MOS for any other errors.

Thursday, February 28, 2019

Oracle Database: 19c - Knowledge Base

Wednesday, February 27, 2019

Oracle Database 19c : GI Upgrade (from 18c) - 2 Nodes Cluster - Part 2/2

In this blog, I pick up from my last blog, where I had completed all the prerequisites successfully.

In this blog we will do the actual upgrade.

Connect as user grid (make sure X forwarding is enabled and you use an X server - I prefer MobaXterm)



$ cd /u01/app/190/grid
$ ./gridSetup.sh
Launching Oracle Grid Infrastructure Setup Wizard...

Step 1  - Select "Upgrade Oracle Grid Infrastructure" and Click "Next"



Step 2 - Select the nodes. Do not select "Skip upgrade on unreachable nodes" and click "Next"


Step 3 - Skip the EM registration check - do this after the installation. Click "Next"

Step 4 - Select the Oracle base - it is generally populated correctly


Step 5 - Un-check automatic root script execution - we will do this manually. Click "Next"



Step 6 - The RPM DB checks can be ignored (this is probably because this GI version is specific to Exadata - I am still not sure, but confirmation will come when the official Linux release is made by Oracle). Click "Next" and "Yes" when prompted



Step 7 -  Click Submit to continue


Step 8 - Wait for the operations to complete






Step 8.1 - Execute the root scripts (one by one, on both nodes)
First on Node 1 (where the installer is running) and then on Node 2
Node 1 -
$ /u01/app/190/grid/rootupgrade.sh

Node 2 
$ /u01/app/190/grid/rootupgrade.sh

Step 8.2 - Click "OK" after root script execution is complete
The complete output of the root scripts is at the end of this post.


Step 9 - Wait for the config tools and post-upgrade steps to complete.



Step 10 - Click "Close" to complete the installation.




Verify the Cluster Version
[As root]
$ /u01/app/190/grid/bin/crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [19.0.0.0.0]
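Optionally, you can also confirm the software version and the overall cluster health on each node with the standard crsctl commands (output not shown here):

$ /u01/app/190/grid/bin/crsctl query crs softwareversion
$ /u01/app/190/grid/bin/crsctl check cluster -all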


Note - if you had made any entries in /etc/oratab, correct them to point to the new homes for ASM, APX and the MGMT DB.
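For example, entries like these (the SIDs are placeholders from my setup - use whatever entries you had added) should now point to the 19c home:

+ASM1:/u01/app/190/grid:N
-MGMTDB:/u01/app/190/grid:N
+APX1:/u01/app/190/grid:N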


Root Scripts Output
Node 1 

$ /u01/app/190/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/190/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/190/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac18c01/crsconfig/rootcrs_rac18c01_2019-02-27_09-28-36PM.log
2019/02/27 21:28:53 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2019/02/27 21:28:53 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2019/02/27 21:28:53 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2019/02/27 21:28:58 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2019/02/27 21:28:58 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/02/27 21:29:04 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minutes.
2019/02/27 21:30:38 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2019/02/27 21:30:45 CLSRSC-693: CRS entities validation completed successfully.
2019/02/27 21:30:50 CLSRSC-515: Starting OCR manual backup.
2019/02/27 21:31:06 CLSRSC-516: OCR manual backup successful.
2019/02/27 21:33:08 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2019/02/27 21:33:08 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.
2019/02/27 21:33:08 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2019/02/27 21:33:16 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2019/02/27 21:33:16 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2019/02/27 21:33:18 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2019/02/27 21:33:21 CLSRSC-363: User ignored prerequisites during installation
2019/02/27 21:33:34 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2019/02/27 21:33:34 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2019/02/27 21:36:32 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2019/02/27 21:36:32 CLSRSC-482: Running command: '/u01/app/180/grid/bin/crsctl start rollingupgrade 19.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2019/02/27 21:36:37 CLSRSC-482: Running command: '/u01/app/190/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/180/grid -oldCRSVersion 18.0.0.0.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2019/02/27 21:36:41 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2019/02/27 21:36:46 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2019/02/27 21:37:08 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2019/02/27 21:37:09 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2019/02/27 21:37:11 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2019/02/27 21:37:20 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2019/02/27 21:37:20 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2019/02/27 21:37:27 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2019/02/27 21:37:34 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2019/02/27 21:37:34 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2019/02/27 21:39:34 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2019/02/27 21:42:03 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2019/02/27 21:42:09 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2019/02/27 21:43:30 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2019/02/27 21:43:50 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
2019/02/27 21:43:54 CLSRSC-474: Initiating upgrade of resource types
2019/02/27 21:44:47 CLSRSC-475: Upgrade of resource types successfully initiated.
2019/02/27 21:44:57 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
2019/02/27 21:45:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded


Node 2 
$ /u01/app/190/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/190/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/190/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac18c02/crsconfig/rootcrs_rac18c02_2019-02-27_09-46-43PM.log
2019/02/27 21:46:52 CLSRSC-595: Executing upgrade step 1 of 18: 'UpgradeTFA'.
2019/02/27 21:46:52 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2019/02/27 21:46:52 CLSRSC-595: Executing upgrade step 2 of 18: 'ValidateEnv'.
2019/02/27 21:46:53 CLSRSC-595: Executing upgrade step 3 of 18: 'GetOldConfig'.
2019/02/27 21:46:53 CLSRSC-464: Starting retrieval of the cluster configuration data
2019/02/27 21:48:23 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2019/02/27 21:48:23 CLSRSC-595: Executing upgrade step 4 of 18: 'GenSiteGUIDs'.
2019/02/27 21:48:23 CLSRSC-595: Executing upgrade step 5 of 18: 'UpgPrechecks'.
2019/02/27 21:48:24 CLSRSC-363: User ignored prerequisites during installation
2019/02/27 21:48:26 CLSRSC-595: Executing upgrade step 6 of 18: 'SetupOSD'.
2019/02/27 21:48:26 CLSRSC-595: Executing upgrade step 7 of 18: 'PreUpgrade'.
2019/02/27 21:48:49 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

ASM configuration upgraded in local node successfully.

2019/02/27 21:49:56 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2019/02/27 21:50:27 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2019/02/27 21:51:48 CLSRSC-595: Executing upgrade step 8 of 18: 'CheckCRSConfig'.
2019/02/27 21:51:49 CLSRSC-595: Executing upgrade step 9 of 18: 'UpgradeOLR'.
2019/02/27 21:51:54 CLSRSC-595: Executing upgrade step 10 of 18: 'ConfigCHMOS'.
2019/02/27 21:51:54 CLSRSC-595: Executing upgrade step 11 of 18: 'UpgradeAFD'.
2019/02/27 21:51:56 CLSRSC-595: Executing upgrade step 12 of 18: 'createOHASD'.
2019/02/27 21:51:57 CLSRSC-595: Executing upgrade step 13 of 18: 'ConfigOHASD'.
2019/02/27 21:51:57 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2019/02/27 21:53:39 CLSRSC-595: Executing upgrade step 14 of 18: 'InstallACFS'.
2019/02/27 21:55:52 CLSRSC-595: Executing upgrade step 15 of 18: 'InstallKA'.
2019/02/27 21:55:53 CLSRSC-595: Executing upgrade step 16 of 18: 'UpgradeCluster'.
2019/02/27 21:57:17 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 19 detected.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2019/02/27 21:58:52 CLSRSC-595: Executing upgrade step 17 of 18: 'UpgradeNode'.
Start upgrade invoked..
2019/02/27 21:58:57 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2019/02/27 21:58:57 CLSRSC-482: Running command: '/u01/app/190/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Started to upgrade Oracle ACFS.
Oracle ACFS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 19.0.0.0.0.
2019/02/27 22:00:09 CLSRSC-479: Successfully set Oracle Clusterware active version
2019/02/27 22:00:12 CLSRSC-476: Finishing upgrade of resource types
2019/02/27 22:00:41 CLSRSC-477: Successfully completed upgrade of resource types
2019/02/27 22:01:04 CLSRSC-595: Executing upgrade step 18 of 18: 'PostUpgrade'.
Successfully updated XAG resources.
2019/02/27 22:01:26 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Oracle Database 19c : GI Upgrade (from 18c) - 2 Nodes Cluster - Part 1/2

In this blog I am going to work on the upgrade of my GI from 18c to 19c for my 2-node cluster running on OEL 7.3.
(You can download the latest binaries from Oracle eDelivery)

If you want to jump directly to Part 2, click here.

Here is a brief overview of my cluster


1. Version 
/u01/app/180/grid/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]

2. Disk Groups

[As grid]$ /u01/app/180/grid/bin/asmcmd lsdg
State    Type    Rebal  Sector  Logical_Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  NORMAL  N         512             512   4096  4194304    368628   360856           122876          118990              0             N  DATA/
MOUNTED  NORMAL  N         512             512   4096  4194304    307188   253828           102396           75716              0             Y  OCR_VOTE

3. Databases
$ /u01/app/180/grid/bin/srvctl config database
orpl

4. Patches (Grid and Database)
$ /u01/app/180/grid/OPatch/opatch lspatches
28656071;OCW RELEASE UPDATE 18.4.0.0.0 (28656071)
28655963;DBWLM RELEASE UPDATE 18.4.0.0.0 (28655963)
28655916;ACFS RELEASE UPDATE 18.4.0.0.0 (28655916)
28655784;Database Release Update : 18.4.0.0.181016 (28655784)
28547619;TOMCAT RELEASE UPDATE 18.0.0.0.0 (28547619)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)

$ /u01/app/oracle/product/180/db/OPatch/opatch lspatches
28502229;OJVM RELEASE UPDATE: 18.4.0.0.181016 (28502229)
28656071;OCW RELEASE UPDATE 18.4.0.0.0 (28656071)
28655784;Database Release Update : 18.4.0.0.181016 (28655784)
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171


For my cluster upgrades, I always keep a backup of the cluster registry and a snapshot of the cluster resources, listeners and instances running on each node, for a final comparison (just in case).

If the configuration is correct, everything will come up automatically; however, a few things might not - for example, a listener running from the DB home that is not registered with Clusterware.

In general, below is my advice for an 18c to 19c upgrade.

1. Backup the OCR (local and global, logical too)
2. Take a snapshot of the resources (in case you want to compare afterwards)
3. Have 33 GB+ of free space in the disk group holding the OCR - this is a mandatory requirement for the upgrade, otherwise you will get error INS-43100 asking for space
4. Have the /etc/resolv.conf files matching across all nodes
5. Have the mandatory patch installed for the GI upgrade - I always patch before upgrades.
6. Have at least 15 GB of free space on your installation mount point.
7. And lastly, make sure the clusterware is up and running on all servers (there is an option in the installer to skip the upgrade on unreachable nodes, which I doubt you want)
8. Ensure all packages are up to date as listed here (I had issues with kmod and kmod-libs)
9. Install the mandatory patch 28553832 before proceeding.
Steps to Install are as follows
[As root - on each node rolling]
$ unzip -qq /tmp/p28553832_184000OCWRU_Linux-x86-64.zip
$ export ORACLE_HOME=/u01/app/180/grid/

$ export PATH=$PATH:$ORACLE_HOME/OPatch
$ opatchauto apply /tmp/install/28553832/
Note - It might take 30-40 minutes, depending on how fast your system is, to apply this patch on one node.
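Once opatchauto completes on a node, a quick sanity check (not part of the official steps) is to confirm the patch shows up in the inventory before moving to the next node:

$ /u01/app/180/grid/OPatch/opatch lspatches | grep 28553832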

10. Finally, run the upgrade steps

On Node 1, 2, 3 (and more if present)
[As grid]
$ mkdir -p /home/grid/org/Upgrade19cGI

On  Node 1
[As root:]
Set the environment to 18c Grid Home / (+ASM1)

$ ocrconfig  -export /home/grid/org/Upgrade19cGI/OCRLogicalBackupCluster.bak

$ ocrconfig -showbackuploc
Note down your backup loc here. 
$ ocrconfig -backuploc  
$ ocrconfig  -manualbackup 


$ ocrconfig -local -export /home/grid/org/Upgrade19cGI/OCRLogicalBackup_Local.bak


[As root:]
Set the environment to 18c Grid Home / (+ASM1)
$ crsctl stat res -t > /home/grid/org/Upgrade19cGI/crsctl_stat_res_t.log

On Node 2, 3 and others (if present)
[As root:]
Set the environment to 18c Grid Home / (+ASM2) (+ASM3)
$ ocrconfig -local -export /home/grid/org/Upgrade19cGI/OCRLogicalBackup_Local.bak

On Each Node 
$ ps -ef | grep pmon > /home/grid/org/Upgrade19cGI/pmon_snapshot.log

$  ps -ef | grep tns > /home/grid/org/Upgrade19cGI/tns_snapshot.log

Create a blackout in OEM.
Disable any jobs / cron (better still, disable the cron daemon if the server runs only Oracle).
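For example, on OEL/RHEL 7 you can simply stop the cron daemon for the duration of the upgrade and start it again afterwards:

[As root on each node]
$ systemctl stop crond
(after the upgrade)
$ systemctl start crond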

Create Grid Home Directory on each node 


[As root on All Nodes]

$ mkdir -p /u01/app/190/grid



Unzip the Oracle media on the first node.


[As root on Node 1]
$ cd /u01/app/190/grid
$ unzip -qq  V981627-01.zip
[As root on all Nodes]
$ chown -R grid:oinstall /u01/app/190/grid
(Note that this is an image-based installation - the extracted zip is the grid home itself.)


[As grid] - Run cluster verification in the pre-CRS-install stage

$ cd /u01/app/190/grid
$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome  /u01/app/180/grid -dest_crshome /u01/app/190/grid -dest_version 19.2.0.0 -fixup -verbose  | tee /tmp/cluvfy_upgd.out

Pre-check for cluster services setup was successful.
Verifying RPM Package Manager database ...INFORMATION
PRVG-11250 : The check "RPM Package Manager database" was not performed because
it needs 'root' user privileges.


CVU operation performed:      stage -pre crsinst
Date:                         Feb 27, 2019 7:20:38 AM
CVU home:                     /u01/app/190/grid/
User:                         grid



I generally redirect the output through the tee command so I can watch it as it arrives and also keep a copy.

Fix anything which is shown as failed.

Finally, unset any Oracle-related environment variables
$ unset ORACLE_HOME ORACLE_SID ORACLE_BASE

Follow the next blog, which provides screenshots and the upgrade process.
Make sure you connect directly as the grid user and do not use "su", so that the UI works.

$ ./gridSetup.sh

Wednesday, August 29, 2018

Oracle Database 18c: RAC Upgrade 12c to 18c (12.2.0.1 to 18.3) - Part 2/2

In this blog, I pick up from my last blog, where I had completed all the prerequisites successfully.

In this blog we will do the actual upgrade.

Connect as user grid (make sure X forwarding is enabled and you use an X server - I prefer MobaXterm)

$ cd /u01/app/180/grid
$ ./gridSetup.sh

Step 1 - Select Upgrade 

Step 2 - Select the nodes - do not select "Skip upgrade on unreachable nodes"

Step 3 - Include EM registration if you are using EM in your environment 

Step 4 - Select your Grid Base

Step 5 - We will run the root scripts manually

Step 6 - If all is okay, you should reach the summary screen; save the response file if you want and click "Submit"


Before the next step: if you have any patches, you can install them right away,
because when you install the patches before the root script execution, your cluster starts out with the latest patch.

Though in the case of 18c we always have the latest RU of the quarter :)

Step 7 - Run the rootupgrade.sh script.
Refer to the end of the blog to see the log of the execution.



Step 8 - Click OK after rootupgrade.sh execution is complete on all nodes.





Step 9 - The upgrade is complete. Click Close and verify your cluster now.




Note - if you had made any entries in /etc/oratab, correct them to point to the new homes for ASM, APX and the MGMT DB.



[root@prodrac01 ~]# crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [18.0.0.0.0]
[grid@prodrac01 OPatch]$ pwd
/u01/app/180/grid/OPatch
[grid@prodrac01 OPatch]$ ./opatch lspatches
27908644;UPDATE 18.3 DATABASE CLIENT JDK IN ORACLE HOME TO JDK8U171
27923415;OJVM RELEASE UPDATE: 18.3.0.0.180717 (27923415)
28256701;TOMCAT RELEASE UPDATE 18.3.0.0.0 (28256701)
28090564;DBWLM RELEASE UPDATE 18.3.0.0.0 (28090564)
28090557;ACFS RELEASE UPDATE 18.3.0.0.0 (28090557)
28090553;OCW RELEASE UPDATE 18.3.0.0.0 (28090553)
28090523;Database Release Update : 18.3.0.0.180717 (28090523)

Root Scripts Output
Node 1 - prodrac01

[root@prodrac01 ~]# /u01/app/180/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/180/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/180/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/prodrac01/crsconfig/rootcrs_prodrac01_2018-08-29_05-05-11AM.log
2018/08/29 05:05:26 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/08/29 05:05:26 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:06:02 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:06:02 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/08/29 05:06:06 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/08/29 05:06:06 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/08/29 05:06:14 CLSRSC-692: Checking whether CRS entities are ready for upgrade. This operation may take a few minute
2018/08/29 05:07:48 CLSRSC-693: CRS entities validation completed successfully.
2018/08/29 05:07:53 CLSRSC-515: Starting OCR manual backup.
2018/08/29 05:08:02 CLSRSC-516: OCR manual backup successful.
2018/08/29 05:08:09 CLSRSC-486:
At this stage of upgrade, the OCR has changed.
Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2018/08/29 05:08:09 CLSRSC-541:
To downgrade the cluster:
1. All nodes that have been upgraded must be downgraded.
2018/08/29 05:08:09 CLSRSC-542:
2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2018/08/29 05:08:09 CLSRSC-615:
3. The last node to downgrade cannot be a Leaf node.
2018/08/29 05:08:15 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/08/29 05:08:15 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/08/29 05:08:16 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/08/29 05:08:29 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/08/29 05:08:37 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/08/29 05:08:37 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2018/08/29 05:10:01 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2018/08/29 05:10:01 CLSRSC-482: Running command: '/u01/app/12201/grid/bin/crsctl start rollingupgrade 18.0.0.0.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2018/08/29 05:10:06 CLSRSC-482: Running command: '/u01/app/180/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -tNode true -startRolling false '
ASM configuration upgraded in local node successfully.
2018/08/29 05:10:15 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2018/08/29 05:10:19 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/08/29 05:10:55 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/08/29 05:10:58 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/08/29 05:10:59 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/08/29 05:11:09 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/08/29 05:11:09 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/08/29 05:11:16 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/08/29 05:11:22 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/08/29 05:11:22 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/08/29 05:12:00 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac01'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:12:44 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/08/29 05:12:50 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac01'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'prodrac01'
CRS-2672: Attempting to start 'ora.evmd' on 'prodrac01'
CRS-2676: Start of 'ora.mdnsd' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.evmd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'prodrac01'
CRS-2676: Start of 'ora.gpnpd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'prodrac01'
CRS-2676: Start of 'ora.gipcd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'prodrac01'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'prodrac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'prodrac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'prodrac01'
CRS-2676: Start of 'ora.diskmon' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.crf' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'prodrac01'
CRS-2672: Attempting to start 'ora.ctssd' on 'prodrac01'
CRS-2676: Start of 'ora.ctssd' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac01'
CRS-2676: Start of 'ora.asm' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'prodrac01'
CRS-2676: Start of 'ora.storage' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'prodrac01'
CRS-2676: Start of 'ora.crsd' on 'prodrac01' succeeded
CRS-6017: Processing resource auto-start for servers: prodrac01
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02'
CRS-2673: Attempting to stop 'ora.prodrac01.vip' on 'prodrac02'
CRS-2672: Attempting to start 'ora.ons' on 'prodrac01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'prodrac02'
CRS-2677: Stop of 'ora.prodrac01.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.prodrac01.vip' on 'prodrac01'
CRS-2677: Stop of 'ora.scan1.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'prodrac01'
CRS-2676: Start of 'ora.prodrac01.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'prodrac01'
CRS-2676: Start of 'ora.scan1.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac01'
CRS-2676: Start of 'ora.ons' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01' succeeded
CRS-2676: Start of 'ora.asm' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.RECO.dg' on 'prodrac01'
CRS-2676: Start of 'ora.RECO.dg' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'prodrac01'
CRS-2676: Start of 'ora.DATA.dg' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.prbrm.db' on 'prodrac01'
CRS-2676: Start of 'ora.prbrm.db' on 'prodrac01' succeeded
CRS-6016: Resource auto-start has completed for server prodrac01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:14:00 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/08/29 05:14:20 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2018/08/29 05:14:23 CLSRSC-474: Initiating upgrade of resource types
2018/08/29 05:14:58 CLSRSC-475: Upgrade of resource types successfully initiated.
2018/08/29 05:15:10 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/08/29 05:15:16 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@prodrac01 ~]#

Node 2 - prodrac02
[root@prodrac02 ~]# /u01/app/180/grid/rootupgrade.sh
Performing root user operation.
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/180/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n)
[n]: y
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/180/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/prodrac02/crsconfig/rootcrs_prodrac02_2018-08-29_05-21-05AM.log
2018/08/29 05:21:14 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2018/08/29 05:21:14 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:21:55 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2018/08/29 05:21:55 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2018/08/29 05:21:56 CLSRSC-595: Executing upgrade step 3 of 19: 'GetOldConfig'.
2018/08/29 05:21:56 CLSRSC-464: Starting retrieval of the cluster configuration data
2018/08/29 05:22:19 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2018/08/29 05:22:19 CLSRSC-595: Executing upgrade step 4 of 19: 'GenSiteGUIDs'.
2018/08/29 05:22:19 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2018/08/29 05:22:21 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2018/08/29 05:22:25 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2018/08/29 05:22:25 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
ASM configuration upgraded in local node successfully.
2018/08/29 05:22:37 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2018/08/29 05:23:08 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2018/08/29 05:23:25 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2018/08/29 05:23:25 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2018/08/29 05:23:30 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2018/08/29 05:23:30 CLSRSC-595: Executing upgrade step 12 of 19: 'UpgradeAFD'.
2018/08/29 05:23:33 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2018/08/29 05:23:34 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2018/08/29 05:23:34 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2018/08/29 05:24:05 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:24:46 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2018/08/29 05:24:48 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'prodrac02'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'prodrac02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'prodrac02'
CRS-2672: Attempting to start 'ora.evmd' on 'prodrac02'
CRS-2676: Start of 'ora.mdnsd' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.evmd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'prodrac02'
CRS-2676: Start of 'ora.gpnpd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'prodrac02'
CRS-2676: Start of 'ora.gipcd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'prodrac02'
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'prodrac02'
CRS-2676: Start of 'ora.cssdmonitor' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'prodrac02'
CRS-2672: Attempting to start 'ora.diskmon' on 'prodrac02'
CRS-2676: Start of 'ora.diskmon' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.crf' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.cssd' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'prodrac02'
CRS-2672: Attempting to start 'ora.ctssd' on 'prodrac02'
CRS-2676: Start of 'ora.ctssd' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac02'
CRS-2676: Start of 'ora.asm' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'prodrac02'
CRS-2676: Start of 'ora.storage' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'prodrac02'
CRS-2676: Start of 'ora.crsd' on 'prodrac02' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: prodrac02
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01'
CRS-2673: Attempting to stop 'ora.prodrac02.vip' on 'prodrac01'
CRS-2672: Attempting to start 'ora.ons' on 'prodrac02'
CRS-2677: Stop of 'ora.prodrac02.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.prodrac02.vip' on 'prodrac02'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'prodrac01'
CRS-2677: Stop of 'ora.scan1.vip' on 'prodrac01' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'prodrac02'
CRS-2676: Start of 'ora.prodrac02.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'prodrac02'
CRS-2676: Start of 'ora.scan1.vip' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02'
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'prodrac02'
CRS-2676: Start of 'ora.ons' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'prodrac02' succeeded
CRS-2676: Start of 'ora.asm' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.RECO.dg' on 'prodrac02'
CRS-2676: Start of 'ora.RECO.dg' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'prodrac02'
CRS-2676: Start of 'ora.DATA.dg' on 'prodrac02' succeeded
CRS-2672: Attempting to start 'ora.prbrm.db' on 'prodrac02'
CRS-2676: Start of 'ora.prbrm.db' on 'prodrac02' succeeded
CRS-6016: Resource auto-start has completed for server prodrac02
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2018/08/29 05:26:27 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2018/08/29 05:27:01 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
Start upgrade invoked..
2018/08/29 05:27:06 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2018/08/29 05:27:06 CLSRSC-482: Running command: '/u01/app/180/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 18.0.0.0.0.
2018/08/29 05:28:13 CLSRSC-479: Successfully set Oracle Clusterware active version
2018/08/29 05:28:13 CLSRSC-476: Finishing upgrade of resource types
2018/08/29 05:28:14 CLSRSC-477: Successfully completed upgrade of resource types
2018/08/29 05:29:39 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2018/08/29 05:29:52 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@prodrac02 ~]#