
Friday, March 31, 2017

Oracle Database - Queries doing Full Table Scan - Last 15 minutes

 
In this blog, I am writing down a query to find queries that did a full table scan in the last 15 minutes in an Oracle Database.

It can easily be adapted to look back 30, 60 or x minutes instead.

Below is the query.

I am pulling candidate SQL IDs from V$ACTIVE_SESSION_HISTORY (hoping the queries were sampled while doing a full table scan)
and then joining them to V$SQL_PLAN.
The join to V$SQL_PLAN double-checks that the plan really contains a full table scan operation.


set pages 500 lines 500
with sql_queries as
(
select distinct sql_id from v$active_session_history
where sample_time > sysdate - 1/96   -- 1/96 of a day = 15 minutes; use 1/24 for 1 hour
and sql_plan_options = 'FULL'
)
select distinct sp.sql_id
from v$sql_plan sp, sql_queries sq
where sp.sql_id = sq.sql_id
and   sp.operation = 'TABLE ACCESS'
and   sp.options = 'FULL'
and   sp.object_owner not in ('SYS','XDB','SYSTEM','DBSNMP')
;

I have also added a predicate to exclude objects owned by SYS, SYSTEM and XDB; DBSNMP generally does not own any tables, but I exclude it as well.

This gives the distinct SQL IDs that performed a TABLE ACCESS FULL operation in the last 15 minutes.
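
If you also want to see which objects are being scanned and the SQL text, the query can be extended along the lines below (a sketch joining to V$SQL; it assumes the cursor is still in the shared pool). Change 1/96 to 1/48, 1/24 and so on for a 30-minute or 1-hour window.

with sql_queries as
(
select distinct sql_id from v$active_session_history
where sample_time > sysdate - 1/96
and sql_plan_options = 'FULL'
)
select distinct sp.sql_id, sp.object_owner, sp.object_name, s.sql_text
from v$sql_plan sp, v$sql s, sql_queries sq
where sp.sql_id = sq.sql_id
and   sp.sql_id = s.sql_id
and   sp.child_number = s.child_number
and   sp.operation = 'TABLE ACCESS'
and   sp.options = 'FULL'
and   sp.object_owner not in ('SYS','XDB','SYSTEM','DBSNMP');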

Sunday, March 26, 2017

Oracle Database - Query Slowness on Active DataGuard - High number of Consistent Gets

 
Recently I faced a peculiar issue of high consistent gets for a query on the Active Data Guard side.

The query had exactly the same plan on both sides - primary and standby.

I verified the consistent gets on both sides using autotrace.
The peculiar thing I noticed was that the consistent gets on the ADG side kept fluctuating - they were always clearly higher than on primary, but the number changed from run to run.

Active Data Guard  site  - 

set autotrace traceonly explain statistics
 


Execution Plan
----------------------------------------------------------
Plan hash value: 4229718575




Statistics
----------------------------------------------------------
  1  recursive calls
  0  db block gets
1596127  consistent gets
  0  physical reads
  0  redo size
14215  bytes sent via SQL*Net to client
535  bytes received via SQL*Net from client
  3  SQL*Net roundtrips to/from client
  0  sorts (memory)
  0  sorts (disk)
  1  rows processed


Primary site  -

Execution Plan
----------------------------------------------------------
Plan hash value: 4229718575


Statistics
----------------------------------------------------------
  1  recursive calls
  0  db block gets
946319  consistent gets
  0  physical reads
  0  redo size
14215  bytes sent via SQL*Net to client
535  bytes received via SQL*Net from client
  3  SQL*Net roundtrips to/from client
  0  sorts (memory)
  0  sorts (disk)
  1  rows processed
Elapsed: 00:00:02.19

I also took a SQL trace of the sessions on both sides and formatted it with tkprof.
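
The tracing was done along these lines (a sketch using DBMS_MONITOR; the serial# and output file name are placeholders - substitute your own):

-- enable SQL trace with wait events for the session running the query
-- (3605 is the SID used in the v$sesstat query further down; get serial# from v$session)
exec dbms_monitor.session_trace_enable(session_id => 3605, serial_num => 1234, waits => true, binds => false);
-- ... run the query in the traced session ...
exec dbms_monitor.session_trace_disable(session_id => 3605, serial_num => 1234);

and then, at the OS level, the raw trace file was formatted with tkprof:

tkprof <trace_file>.trc adg_query.out sys=no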

Active Data Guard  site  -

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      9.84       9.86          0    1585661          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      9.85       9.87          0    1585661          0           1


Primary site  - 

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.01          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      2.43       2.45          0     946318          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      2.44       2.46          0     946318          0           1



Finally, I had to look at the session-level statistics to understand what the key contributor to the consistent gets was...

I trimmed my analysis down to the undo-related statistics.

Active Data Guard  site  -

SELECT a.username, c.name, b.value
FROM   v$session a, v$sesstat b, v$statname c
WHERE  a.sid        = b.sid
AND    b.statistic# = c.statistic#
AND    b.value      != 0
AND    a.sid        = 3605
AND    c.name in ('session logical reads','consistent gets','consistent gets from cache',
                  'consistent gets - examination','consistent changes','CR blocks created',
                  'data blocks consistent reads - undo records applied','IMU CR rollbacks');



Before Running Query


USERNAME                       NAME          VALUE
------------ ------------------------------- ----------
SYS          session logical reads           49
SYS          consistent gets                 49
SYS          consistent gets from cache      49
SYS          consistent gets - examination   19

Post Running Query
USERNAME                       NAME                                VALUE
------------- ---------------------------------------------------- ----------
SYS           session logical reads                                1869022
SYS           consistent gets                                      1869022
SYS           consistent gets from cache                           1869022
SYS           consistent gets - examination                        922675
SYS           consistent changes                                   556329
SYS           CR blocks created                                    556139
SYS           data blocks consistent reads - undo records applied  43


Primary site  -

Before Running Query

USERNAME                       NAME          VALUE
------------ ------------------------------- ----------
SYS          session logical reads           3
SYS          consistent gets                 3
SYS          consistent gets from cache      3


Post Running Query
USERNAME                       NAME                                VALUE
------------- ---------------------------------------------------- ----------
SYS           session logical reads                                946382
SYS           consistent gets                                      946382
SYS           consistent gets from cache                           946382
SYS           consistent gets - examination                        24
SYS           consistent changes                                   5
SYS           CR blocks created                                    4
SYS           data blocks consistent reads - undo records applied  5



As you can see, on the standby site there is a huge amount of consistent-read work - 'CR blocks created', 'consistent changes' and 'consistent gets - examination' are all very high - which is where the extra consistent gets are coming from.


Finally, with due diligence and working with Oracle, we set the parameter _serial_direct_read=false on the Active Data Guard site.

Remember, this parameter is still at its default value on the primary site.

This parameter is mostly discussed in the context of Exadata, where it is used to disable smart scans, but it has its uses in non-Exadata environments as well.
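
For reference, the current value can be checked and the change applied along these lines (a sketch; change underscore parameters only after confirming with Oracle Support):

-- check the current value (as SYS; hidden parameters are exposed via x$ksppi / x$ksppcv)
select p.ksppinm name, v.ksppstvl value
from   x$ksppi p, x$ksppcv v
where  p.indx = v.indx
and    p.ksppinm = '_serial_direct_read';

-- try it at session level first, then set it instance-wide on the ADG side
alter session set "_serial_direct_read" = false;
alter system set "_serial_direct_read" = false scope=both sid='*';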






Session-level statistic - 'table scans (direct read)':
this increments by 1 for each execution of the query, confirming the scan is being done via direct path reads.

USERNAME               NAME                                     VALUE
------------------------------ ---------------------------------------------------------------- ----------
SYS                   table scans (direct read)                         1
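
The value above can be obtained with a query along these lines (a sketch against v$mystat for the session that just ran the query):

select sn.name, ms.value
from   v$mystat ms, v$statname sn
where  ms.statistic# = sn.statistic#
and    sn.name = 'table scans (direct read)';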


The explanation is as below -

Before a serial direct path read of an object, Oracle first issues a fast object checkpoint to write all dirty buffers of that object to disk, and then performs direct path reads of the object straight into the PGA of the shadow process.

From 11g onwards this is default behaviour: whether a full scan is done via serial direct path reads is decided adaptively, based on the size of the object relative to the buffer cache and the overall SGA sizing.

Prior to 11g, serial full table scans always went through the buffer cache; from 11g onwards they can bypass the buffer cache and be read directly into the PGA.

You can see from the trace of the slow session below what was happening -
a fast object checkpoint followed by direct path reads.

The two 'enq: KO - fast object checkpoint' waits alone account for around 2.7 ms before a single block is read, and on top of that comes the time spent on the direct path reads themselves.

On the primary, the data is read entirely from the buffer cache and nothing goes via direct path reads.

*** Below output is from the slow side - the ADG site ***

PARSE 
WAIT #140527153926352: nam='reliable message' ela= 1452 channel context=322148229376 channel handle=323221996856 broadcast message=322692222808 obj#=-1 tim=1490721285371785
WAIT #140527153926352: nam='enq: KO - fast object checkpoint' ela= 2581 name|mode=1263468550 2=65610 0=1 obj#=-1 tim=1490721285374623
WAIT #140527153926352: nam='enq: KO - fast object checkpoint' ela= 134 name|mode=1263468545 2=65610 0=2 obj#=-1 tim=1490721285374858
WAIT #140527153926352: nam='Disk file operations I/O' ela= 345 FileOperation=2 fileno=11 filetype=2 obj#=40842 tim=1490721285375461
WAIT #140527153926352: nam='Disk file operations I/O' ela= 171 FileOperation=2 fileno=0 filetype=15 obj#=40842 tim=1490721285375749
WAIT #140527153926352: nam='Disk file operations I/O' ela= 20 FileOperation=2 fileno=0 filetype=15 obj#=40842 tim=1490721285376553
WAIT #140527153926352: nam='direct path read' ela= 58 file number=11 first dba=3184329 block cnt=15 obj#=40842 tim=1490721285377218




 

Saturday, March 25, 2017

Oracle Database - RAC/Grid Infrastructure upgrade - Flex Cluster 12.1.0.2 (.x) to 12.2.0.1 - 3 Nodes

 
In this blog, I am going to discuss upgrading the clusterware from 12.1.0.2.x to the recently released (Feb 2017) 12cR2 clusterware, 12.2.0.1.

My setup is a 3-node cluster with 2 hub nodes and 1 leaf node.

[root@rac1 app]# crsctl get node role config -all
Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'leaf'
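
The role each node is currently running with (as opposed to its configured role) can be cross-checked with the status variant of the same command:

[root@rac1 app]# crsctl get node role status -all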

I will be upgrading my 12.1.0.2 Grid Infrastructure to Release 2 of the 12c clusterware.
For any cluster upgrade, I always keep a backup of the cluster registry plus a snapshot of the cluster resources, listeners and instances running on each node, for a final comparison (just in case).

If the configuration is correct, everything should come back up automatically; however, a few things might not - for example, a listener running from the DB home that is not registered with the clusterware.

In general, below is my advice for a 12cR2 upgrade.

1. Back up the OCR (local and global, plus a logical export).
2. Take a snapshot of the resources (in case you want to compare later).
3. Have 33GB+ of free space in the disk group holding the OCR; this is a mandatory requirement for the upgrade, otherwise you will get error INS-43100 asking for more space (quick checks for items 3, 4 and 7 are shown right after this list).
4. Get your memory requirements right: 8GB+ (ignored in my test lab environment).
5. Have the /etc/resolv.conf files matching across all nodes.
6. Have the mandatory patch for the GI upgrade installed - Patch 21255373.
7. Have at least 15GB of free space on your installation mount point.
8. Lastly, make sure the clusterware is up and running on all servers (the installer has an option to skip the upgrade on unreachable nodes, which I doubt you want).

9. Finally, run the upgrade.
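
The space and memory prerequisites above can be verified quickly with something like the following (a sketch, run as the grid owner with the ASM environment set; paths are from my lab, adjust to yours):

asmcmd lsdg                    (check the Free_MB column of the disk group holding the OCR - 33GB+ needed)
df -h /u01                     (free space on the installation mount point - 15GB+)
grep MemTotal /proc/meminfo    (physical memory - 8GB+)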

Note - in my lab setup (if you have followed my installation blogs)
I had a 15GB disk group with normal redundancy, so I created a new external-redundancy disk group of 45GB and relocated my OCR and voting disks to this new OCR disk group (see the sketch below).
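
For completeness, the relocation was done along these lines (a sketch run as root from the existing 12.1.0.2 grid home; +OCR is my new disk group, <OLD_DG> stands for whichever disk group previously held the OCR and voting files):

#ocrconfig -add +OCR
#ocrconfig -delete +<OLD_DG>
#crsctl replace votedisk +OCR
#crsctl query css votedisk
#ocrcheck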

On  Node 1
[As root:]
Set the environment to 12.1.0.2.x Grid Home / (+ASM1)
#ocrconfig  -export /home/oracle/org/Upgrade12cGI/OCRLogicalBackupCluster.bak

#ocrconfig  -manualbackup -backuploc /home/oracle/org/Upgrade12cGI/OCRManualBackupCluster.bak

#ocrconfig -local -export /home/oracle/org/Upgrade12cGI/OCRLogicalBackup_Local.bak

[As root:]
Set the environment to 12.1.0.2.x Grid Home / (+ASM1)
#crsctl stat res -t > /home/oracle/org/Upgrade12cGI/crsctl_stat_res_t.log

On  Node 2, 3 and others (if more) 
[As root:]
Set the environment to 12.1.0.2.x Grid Home / (+ASM2) (+ASM3)
#ocrconfig -local -export /home/oracle/org/Upgrade12cGI/OCRLogicalBackup_Local.bak

On Each Node 
#ps -ef | grep pmon > /home/oracle/org/Upgrade12cGI/pmon_snapshot.log

# ps -ef | grep tns > /home/oracle/org/Upgrade12cGI/tns_snapshot.log

Create a blackout in OEM

Create Grid Home Directory on each node 

[As root:]
#mkdir -p /u01/app/12.2.0.1/grid
# chown -R grid:oinstall /u01/app/12.2.0.1/

There is a minimum patch requirement on the existing 12.1.0.2 home, verified as part of the pre-requisite checks - Patch 21255373.
Make sure this patch is installed; a quick check is shown below.
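
A quick way to confirm the patch on each node (a sketch, run as the grid owner against the existing 12.1.0.2 home):

/u01/app/12.1.0.2/grid/OPatch/opatch lspatches | grep 21255373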

Unzip Oracle Media on first node. 

cd /u01/app/12.2.0.1/grid
#unzip -qq /media/sf_RAC12c/Grid/12.2.0.1/V840012-01.zip
#chown -R grid:oinstall /u01/app/12.2.0.1/grid

[As Grid]  - Run cluster verification for the pre-CRS-install stage

#cd /u01/app/12.2.0.1/grid
#./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome  /u01/app/12.1.0.2/grid -dest_crshome /u01/app/12.2.0.1/grid -dest_version 12.2.0.1 -fixup -verbose  | tee /tmp/cluvfy_upgd.out

I generally redirect the output through tee so I can watch it as it arrives and also keep a copy.

Fix anything that is reported as failed. I will be ignoring a few pre-requisites, such as the memory requirement.

Finally, unset any Oracle-related environment variables:
# unset ORACLE_HOME ORACLE_SID ORACLE_BASE

In this version Oracle has finally come up with a new installer name, gridSetup.sh, which we are going to run from our unzipped directory.
Note - Oracle has moved to an image-based installation: the zip is essentially a pre-built image of the Grid home, and the installer mainly does the configuration and re-linking.
This is really good for standalone installations too, and is much faster.

[grid@rac1 grid]$ ./gridSetup.sh 



In case SSH connectivity is not already set up, it can be configured using the button provided on the screen.
I prefer setting it up with the sshUserSetup.sh script, which is shipped inside the unzipped installation media (see below).
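
A typical invocation looks like this (a sketch; locate the script first, as its exact path within the unzipped home can vary):

find /u01/app/12.2.0.1/grid -name sshUserSetup.sh
./sshUserSetup.sh -user grid -hosts "rac1 rac2 rac3" -advanced -noPromptPassphrase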





The groups chosen by default are the ones from your existing installed configuration; however, it is still a good idea to verify them at installation time, just to be doubly sure.



One highlight here is that I am not specifying the Oracle home during installation in this release.
The Oracle home is taken to be the location where I unzipped the software.




I prefer having the installer execute the root scripts automatically; it can also run them across multiple nodes for you when there are many nodes.







At this point in the upgrade, run the rootupgrade.sh script.
The output of rootupgrade.sh in this release is a bit different from previous releases.

It consists of 19 steps, which include the upgrade of the OCR and the resources.
I will run rootupgrade.sh on rac1, then rac2, and finally rac3 (the Flex Cluster leaf node).
I have pasted the full rootupgrade.sh output at the end of the blog to keep the installation walkthrough tidy; you can find it there.








Note - I have ignored a few cluvfy checks due to the restrictions of my lab machines; all other checks passed.

[grid@rac2 ~]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [12.2.0.1.0]

This completes the upgrade of the RAC cluster to 12.2.0.1.
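
As a final sanity check, I compare the cluster state with the snapshots taken before the upgrade (a sketch; the *_after.log name is just my choice for the post-upgrade snapshot):

crsctl check cluster -all
crsctl query crs softwareversion
crsctl stat res -t > /home/oracle/org/Upgrade12cGI/crsctl_stat_res_t_after.log
diff /home/oracle/org/Upgrade12cGI/crsctl_stat_res_t.log /home/oracle/org/Upgrade12cGI/crsctl_stat_res_t_after.log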

Root Upgrade execution.


******root Upgrade Node 1*****
[root@rac1 trace]# /u01/app/12.2.0.1/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: n
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac1/crsconfig/rootcrs_rac1_2017-03-24_10-56-45PM.log
2017/03/24 22:56:48 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/03/24 22:56:48 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/03/24 22:57:50 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/03/24 22:57:51 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/03/24 22:58:17 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/03/24 22:58:19 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/03/24 22:58:19 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/03/24 22:58:29 CLSRSC-515: Starting OCR manual backup.
2017/03/24 22:58:39 CLSRSC-516: OCR manual backup successful.
2017/03/24 22:58:56 CLSRSC-486: 
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.
2017/03/24 22:58:56 CLSRSC-541: 
 To downgrade the cluster: 
 1. All nodes that have been upgraded must be downgraded.
2017/03/24 22:58:57 CLSRSC-542: 
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.
2017/03/24 22:58:57 CLSRSC-615: 
 3. The last node to downgrade cannot be a Leaf node.
2017/03/24 22:59:06 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/03/24 22:59:06 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/03/24 22:59:10 CLSRSC-363: User ignored prerequisites during installation
2017/03/24 22:59:22 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/03/24 22:59:31 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/03/24 22:59:44 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2017/03/24 22:59:49 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode
2017/03/24 22:59:49 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl start rollingupgrade 12.2.0.1.0'
CRS-1131: The cluster was successfully set to rolling upgrade mode.
2017/03/24 23:00:06 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/12.1.0.2/grid -oldCRSVersion 12.1.0.2.0 -firstNode true -startRolling false '

ASM configuration upgraded in local node successfully.

2017/03/24 23:00:09 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode
2017/03/24 23:00:24 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/03/24 23:00:54 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/03/24 23:00:57 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/03/24 23:00:57 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/03/24 23:01:07 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/03/24 23:01:07 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/03/24 23:01:14 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/03/24 23:01:21 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/03/24 23:01:37 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2017/03/24 23:02:00 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/24 23:02:33 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/03/24 23:02:40 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.evmd' on 'rac1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.evmd' on 'rac1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac1'
CRS-2676: Start of 'ora.storage' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac1'
CRS-2676: Start of 'ora.crf' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac1'
CRS-2676: Start of 'ora.crsd' on 'rac1' succeeded
CRS-6017: Processing resource auto-start for servers: rac1
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac2'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2672: Attempting to start 'ora.ons' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac1'
CRS-2676: Start of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac1'
CRS-2676: Start of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2676: Start of 'ora.ons' on 'rac1' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.DATA2.dg' on 'rac1'
CRS-2672: Attempting to start 'ora.OCR.dg' on 'rac1'
CRS-2676: Start of 'ora.DATA2.dg' on 'rac1' succeeded
CRS-2676: Start of 'ora.OCR.dg' on 'rac1' succeeded
CRS-6016: Resource auto-start has completed for server rac1
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/24 23:06:35 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/03/24 23:08:32 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2017/03/24 23:09:04 CLSRSC-474: Initiating upgrade of resource types
2017/03/24 23:13:46 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p first'
2017/03/24 23:13:48 CLSRSC-475: Upgrade of resource types successfully initiated.
2017/03/24 23:15:22 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/03/24 23:15:45 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded



****Root Upgrade Node 2******
[root@rac2 trace]# /u01/app/12.2.0.1/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: 
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac2/crsconfig/rootcrs_rac2_2017-03-24_11-22-35PM.log
2017/03/24 23:22:43 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/03/24 23:22:43 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/03/24 23:26:46 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/03/24 23:26:47 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/03/24 23:27:10 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/03/24 23:27:10 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/03/24 23:27:10 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/03/24 23:27:46 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/03/24 23:27:47 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/03/24 23:27:51 CLSRSC-363: User ignored prerequisites during installation
2017/03/24 23:27:56 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/03/24 23:28:10 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/03/24 23:28:16 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.

ASM configuration upgraded in local node successfully.

2017/03/24 23:28:35 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/03/24 23:29:21 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/03/24 23:29:26 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/03/24 23:29:27 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/03/24 23:29:34 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/03/24 23:29:35 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/03/24 23:29:36 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/03/24 23:29:38 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/03/24 23:29:54 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2017/03/24 23:30:26 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/24 23:30:55 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/03/24 23:30:57 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac2'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac2' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac2'
CRS-2672: Attempting to start 'ora.evmd' on 'rac2'
CRS-2676: Start of 'ora.mdnsd' on 'rac2' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac2'
CRS-2676: Start of 'ora.gpnpd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac2'
CRS-2676: Start of 'ora.gipcd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac2'
CRS-2676: Start of 'ora.diskmon' on 'rac2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac2'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac2'
CRS-2676: Start of 'ora.ctssd' on 'rac2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac2'
CRS-2676: Start of 'ora.asm' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac2'
CRS-2676: Start of 'ora.storage' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac2'
CRS-2676: Start of 'ora.crf' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac2'
CRS-2676: Start of 'ora.crsd' on 'rac2' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: rac2
CRS-2673: Attempting to stop 'ora.rac2.vip' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'rac1'
CRS-2672: Attempting to start 'ora.ons' on 'rac2'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'rac1'
CRS-2677: Stop of 'ora.rac2.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac2.vip' on 'rac2'
CRS-2677: Stop of 'ora.scan1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'rac2'
CRS-2676: Start of 'ora.rac2.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac2'
CRS-2676: Start of 'ora.scan1.vip' on 'rac2' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'rac2'
CRS-2676: Start of 'ora.ons' on 'rac2' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac2' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'rac2' succeeded
CRS-6016: Resource auto-start has completed for server rac2
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/24 23:32:29 CLSRSC-343: Successfully started Oracle Clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 2.
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/03/24 23:32:45 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
2017/03/24 23:33:06 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/03/24 23:33:11 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded


****Root Upgrade Node 3********
[root@rac3 ~]# /u01/app/12.2.0.1/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.2.0.1/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n) 
[n]: 
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.2.0.1/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac3/crsconfig/rootcrs_rac3_2017-03-24_11-37-53PM.log
2017/03/24 23:38:01 CLSRSC-595: Executing upgrade step 1 of 19: 'UpgradeTFA'.
2017/03/24 23:38:01 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.
2017/03/24 23:40:24 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.
2017/03/24 23:40:24 CLSRSC-595: Executing upgrade step 2 of 19: 'ValidateEnv'.
2017/03/24 23:40:38 CLSRSC-595: Executing upgrade step 3 of 19: 'GenSiteGUIDs'.
2017/03/24 23:40:38 CLSRSC-595: Executing upgrade step 4 of 19: 'GetOldConfig'.
2017/03/24 23:40:39 CLSRSC-464: Starting retrieval of the cluster configuration data
2017/03/24 23:44:15 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.
2017/03/24 23:44:16 CLSRSC-595: Executing upgrade step 5 of 19: 'UpgPrechecks'.
2017/03/24 23:44:20 CLSRSC-363: User ignored prerequisites during installation
2017/03/24 23:44:24 CLSRSC-595: Executing upgrade step 6 of 19: 'SaveParamFile'.
2017/03/24 23:44:37 CLSRSC-595: Executing upgrade step 7 of 19: 'SetupOSD'.
2017/03/24 23:44:50 CLSRSC-595: Executing upgrade step 8 of 19: 'PreUpgrade'.
2017/03/24 23:45:00 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack
2017/03/24 23:45:10 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.
2017/03/24 23:45:48 CLSRSC-595: Executing upgrade step 9 of 19: 'CheckCRSConfig'.
2017/03/24 23:45:50 CLSRSC-595: Executing upgrade step 10 of 19: 'UpgradeOLR'.
2017/03/24 23:45:58 CLSRSC-595: Executing upgrade step 11 of 19: 'ConfigCHMOS'.
2017/03/24 23:45:59 CLSRSC-595: Executing upgrade step 12 of 19: 'InstallAFD'.
2017/03/24 23:46:01 CLSRSC-595: Executing upgrade step 13 of 19: 'createOHASD'.
2017/03/24 23:46:04 CLSRSC-595: Executing upgrade step 14 of 19: 'ConfigOHASD'.
2017/03/24 23:46:20 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.service'
2017/03/24 23:46:44 CLSRSC-595: Executing upgrade step 15 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/03/24 23:47:21 CLSRSC-595: Executing upgrade step 16 of 19: 'InstallKA'.
2017/03/24 23:47:23 CLSRSC-595: Executing upgrade step 17 of 19: 'UpgradeCluster'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac3'
CRS-2672: Attempting to start 'ora.evmd' on 'rac3'
CRS-2676: Start of 'ora.evmd' on 'rac3' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac3'
CRS-2676: Start of 'ora.gpnpd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac3'
CRS-2676: Start of 'ora.gipcd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac3'
CRS-2676: Start of 'ora.diskmon' on 'rac3' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac3'
CRS-2676: Start of 'ora.ctssd' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac3'
CRS-2676: Start of 'ora.storage' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'rac3'
CRS-2676: Start of 'ora.crf' on 'rac3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac3'
CRS-2676: Start of 'ora.crsd' on 'rac3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
CRS-6017: Processing resource auto-start for servers: rac3
CRS-6016: Resource auto-start has completed for server rac3
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2017/03/24 23:48:07 CLSRSC-343: Successfully started Oracle Clusterware stack
Successfully taken the backup of node specific configuration in OCR. 
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/03/24 23:48:28 CLSRSC-595: Executing upgrade step 18 of 19: 'UpgradeNode'.
Start upgrade invoked..
2017/03/24 23:48:38 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded
2017/03/24 23:48:38 CLSRSC-482: Running command: '/u01/app/12.2.0.1/grid/bin/crsctl set crs activeversion'
Started to upgrade the active version of Oracle Clusterware. This operation may take a few minutes.
Started to upgrade CSS.
CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade CRS.
CRS was successfully upgraded.
Successfully upgraded the active version of Oracle Clusterware.
Oracle Clusterware active version was successfully set to 12.2.0.1.0.
2017/03/24 23:49:50 CLSRSC-479: Successfully set Oracle Clusterware active version
2017/03/24 23:49:50 CLSRSC-476: Finishing upgrade of resource types
2017/03/24 23:50:33 CLSRSC-482: Running command: 'srvctl upgrade model -s 12.1.0.2.0 -d 12.2.0.1.0 -p last'
2017/03/24 23:50:34 CLSRSC-477: Successfully completed upgrade of resource types
2017/03/24 23:51:19 CLSRSC-595: Executing upgrade step 19 of 19: 'PostUpgrade'.
2017/03/24 23:51:52 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded