
Friday, May 31, 2019

Oracle Database: Tools for Monitoring (I/O, Network)

In this blog I discuss a few tools which can be used for database or, more precisely, system monitoring.

If you want to know which RPMs you should install beyond the ones required for the Oracle Database installation, refer to my post on recommended RPMs further down this page.

Now, assuming you have these installed:

1. IOTOP
One of the really good tools for debugging I/O performance.
It can tell you:
1. Total disk reads and writes happening
2. Disk reads and writes by process
3. I/O wait % for a process
4. Swap-in activity for a process

Just run iotop as root


Total DISK READ :     574.79 K/s | Total DISK WRITE :       2.92 M/s
Actual DISK READ:     574.79 K/s | Actual DISK WRITE:       2.95 M/s
   TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
  5174 rt/4 grid      969.66 B/s    0.00 B/s  0.00 %  1.64 % ocssd.bin
175547 be/4 oracle    106.06 K/s    0.00 B/s  0.00 %  0.87 % ora_j001_PR11
 10213 be/4 oracle    151.51 K/s  227.26 K/s  0.00 %  0.86 % ora_ckpt_PR11
 10335 be/4 oracle      0.00 B/s  674.22 K/s  0.00 %  0.74 % ora_rvwr_PR11
175549 be/4 oracle    151.51 K/s    0.00 B/s  0.00 %  0.68 % ora_j002_PR11
 17843 be/4 grid      121.21 K/s   15.15 K/s  0.00 %  0.66 % mdb_ckpt_-MGMTDB
  5170 rt/4 grid      969.66 B/s    0.00 B/s  0.00 %  0.17 % ocssd.bin
  5167 rt/4 grid      969.66 B/s    0.00 B/s  0.00 %  0.16 % ocssd.bin
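
If you want to capture samples non-interactively, iotop's batch mode is handy (a minimal sketch; check man iotop for the exact flags on your version):

# -b batch mode, -o only show processes actually doing I/O,
# -n 5 iterations, -d 2 seconds between samples
iotop -b -o -n 5 -d 2 > /tmp/iotop.log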


2. IOSTAT
iostat is complementary to iotop; it provides details at the disk level.
Run iostat -dmx 2 sdb sdc sds sdq sdm as root.
You will get information like the output below.
1. You can tell individual disk utilization
2. -dmx 2 are arguments (you can look them up in the man page)
3. To find the sd devices you want to watch, use lsblk




Linux 3.10.0-693.17.1.el7.x86_64 (CHUMR1DB001)  05/31/2019      _x86_64_        (16 CPU)
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00    4.73    0.73     0.15     0.01    59.82     0.05    9.07   10.02    2.91   4.71   2.57
sdc               0.00     0.00    5.38    5.32     2.82     1.98   918.41     1.44  134.90  140.15  129.58  12.13  12.98
sdm               0.00     0.00    2.21    1.33     0.00     0.00     2.20     0.01    2.28    1.90    2.92   2.09   0.74
sdq               0.00     0.00    2.33    1.21     0.10     0.04    79.00     0.02    6.47    8.15    3.22   5.63   1.99
sds               0.00     0.00    2.23    1.34     0.00     0.00     2.30     0.01    2.32    1.88    3.05   2.21   0.79
Device:         rrqm/s   wrqm/s     r/s     w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sdb               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00    0.00    0.00   0.00   0.00
sdc               0.00     0.00    0.00    2.50     0.00     0.76   624.00     0.01    3.60    0.00    3.60   3.60   0.90
sdm               0.00     0.00    2.00    1.00     0.00     0.00     1.00     0.00    0.67    0.50    1.00   0.67   0.20
sdq               0.00     0.00    1.50    1.00     0.02     0.05    63.60     0.00    1.40    1.00    2.00   1.40   0.35
sds               0.00     0.00    2.00    1.00     0.00     0.00     1.00     0.00    0.83    0.75    1.00   0.83   0.25
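
To find out which sd device backs which ASM disk label, one simple approach is to compare major/minor device numbers (a sketch, assuming ASMLib labels under /dev/oracleasm/disks):

# major/minor numbers of the ASM disk labels
ls -l /dev/oracleasm/disks
# compare against the raw sd devices
ls -l /dev/sd*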


3. IPTRAF-NG
The next big tool, this one for networking, is iptraf-ng.
It is a very useful network tool with a lot of features, such as interface statistics, statistical breakdowns and live traffic monitoring.
You really want this one, and it is available in the official Red Hat repository.

Just run iptraf-ng as root, and you will get all the options which you can choose from 


IP traffic monitor             
General interface statistics   
Detailed interface statistics  
Statistical breakdowns...      
LAN station monitor           

Choose the option you want - say IP traffic monitor - then select the interfaces, and you can watch the live monitoring.
You can also look into Detailed interface statistics and choose an interface to find out the bandwidth (current incoming and outgoing rates).
It supports logical interfaces as well.
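
You can also launch a specific monitor straight from the command line (a sketch; flags as per the iptraf-ng man page):

# detailed interface statistics for eth0, run in the background, logging to a file
iptraf-ng -d eth0 -B -L /var/log/iptraf-eth0.log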

Oracle Database: How to relocate tablespaces to different Diskgroup (except SYS)

In this blog, I list out the steps for relocating tablespaces to different diskgroups on Oracle RAC, with only partial downtime.

Step 1 -
Back up the tablespaces to the new diskgroup with RMAN:

run 
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
allocate channel c4 type disk;
allocate channel c5 type disk;
allocate channel c6 type disk;
allocate channel c7 type disk;
allocate channel c8 type disk;
backup as copy tablespace TB1,TB2,TB3 format '+DATA2';
}


Step 2 - Repeat the following for each tablespace:
1. SQL> 
     alter tablespace TB1 offline;
2. rman  
     switch tablespace TB1 to copy;
     run {
    allocate channel c1 type disk;
    recover tablespace TB1;
    }
3. SQL> 
    alter tablespace TB1 online;

This completes the relocation.
If you want to do it at the datafile level (in case your tablespace is large),
you can follow the same steps, with minor modifications from tablespace to datafile, as in the sketch below.
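
A minimal sketch of the datafile variant, assuming datafile 15 belongs to TB1 (look up your own file numbers in DBA_DATA_FILES):

RMAN> backup as copy datafile 15 format '+DATA2';
SQL>  alter database datafile 15 offline;
RMAN> switch datafile 15 to copy;
RMAN> run { allocate channel c1 type disk; recover datafile 15; }
SQL>  alter database datafile 15 online;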

Oracle Database: RAC - Relocate UNDO to different Diskgroup (and Standby)

In this blog I discuss the steps to relocate the undo tablespaces on a 2-node RAC cluster.

Currently the tablespaces are:
Node 1 - undotbs1
Node 2 - undotbs2


Step 1 - Create the new undo tablespaces (sized as required).
(Note the name difference.)

 CREATE UNDO TABLESPACE "UNDOTBS11" DATAFILE '+DATA2' SIZE 209715200 AUTOEXTEND ON NEXT 52428800 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

 CREATE UNDO TABLESPACE "UNDOTBS22" DATAFILE '+DATA2' SIZE 209715200 AUTOEXTEND ON NEXT 52428800 MAXSIZE 32767M
  BLOCKSIZE 8192
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;


Next, set the undo_tablespace parameter on each instance (substitute your instance names in sid):

  alter system set undo_tablespace=UNDOTBS11 scope=spfile sid='NODE1';
  alter system set undo_tablespace=UNDOTBS22 scope=spfile sid='NODE2';


Next, restart each instance in a rolling fashion so it picks up its new undo tablespace (the parameter was set with scope=spfile), then drop the old undo tablespaces once the undo retention has expired:
Drop tablespace UNDOTBS1 including contents and datafiles;
Drop tablespace UNDOTBS2 including contents and datafiles;
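
Before running the drops, you can verify that the old tablespaces no longer hold active undo segments; a quick check:

select tablespace_name, status, count(*)
from dba_rollback_segs
group by tablespace_name, status;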

This completes the relocation.

In case you are running a standby RAC, you will need to set these parameters there as well. They are not dynamic, so you will have to set them in the spfile and restart the standby database (not just one instance).

Oracle Database: How much can you shrink tablespace / Size

One of the big questions is how much you will be able to shrink a tablespace or its datafiles.

It is easy - just run the query below and substitute the name of your tablespace (currently SYSAUX).
You will get a list of the files and how much each can be reduced.

set verify off
column file_name format a50 word_wrapped
column smallest format 999,990 heading "Smallest|Size|Poss."
column currsize format 999,990 heading "Current|Size"
column savings  format 999,990 heading "Poss.|Savings"
break on report
compute sum of savings on report
column value new_val blksize
select value from v$parameter where name = 'db_block_size';

select file_name,
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) smallest,
       ceil( blocks*&&blksize/1024/1024) currsize,
       ceil( blocks*&&blksize/1024/1024) -
       ceil( (nvl(hwm,1)*&&blksize)/1024/1024 ) savings
from dba_data_files a,
     ( select file_id, max(block_id+blocks-1) hwm
         from dba_extents
        group by file_id ) b
where a.file_id = b.file_id(+)  and
a.tablespace_name = 'SYSAUX'
order by savings desc
/

The output will be similar to the below:


                                                   Smallest
                                                       Size  Current    Poss.
FILE_NAME                                             Poss.     Size  Savings
-------------------------------------------------- -------- -------- --------
+DATA2/chmprod/datafile/sysaux.431.1009335297         9,372   30,720   21,348
+DATA2/chmprod/datafile/sysaux.421.1009334935        29,894   30,720      826
+DATA2/chmprod/datafile/sysaux.299.1009331535        32,269   32,767      498
+DATA2/chmprod/datafile/sysaux.297.1009331535        32,669   32,750       81
+DATA2/chmprod/datafile/sysaux.296.1009331535        32,717   32,750       33
+DATA2/chmprod/datafile/sysaux.422.1009334937        30,690   30,720       30
+DATA2/chmprod/datafile/sysaux.298.1009331535        32,764   32,767        3
+DATA2/chmprod/datafile/sysaux.420.1009334933        30,720   30,720        0
                                                                     --------
sum                                                                    22,819

Oracle Database 11g: SYSAUX Too large

SYSAUX has a lot of bugs when it comes to purging, and I had to face quite a lot of them. 

My SYSAUX started with this (see the end of this post for the query used to report free space):

Tablespace                 Used MB     Free MB    Total MB  Pct. Free
---------------------- ----------- ----------- ----------- ----------

SYSAUX                     230,379      23,535     253,914          9

and ended with this

Tablespace                 Used MB     Free MB    Total MB  Pct. Free
---------------------- ----------- ----------- ----------- ----------

SYSAUX                      17,084     236,830     253,914         93

So what all did I have to do?
The number one thing is to find out whether you are hitting any bugs. How do you do that?
Find out which tables are the largest.
You can either run $ORACLE_HOME/rdbms/admin/awrinfo.sql to get a report, or just run this:

set markup html on spool on
spool sysaux_segm.html
set echo on
with a as (select owner, segment_name, partition_name, segment_type, bytes/1024/1024/1024 size_gb
             from dba_segments
            where tablespace_name = 'SYSAUX'
            order by 5 desc)
select * from a where rownum < 21;
spool off
set markup html off

This will give you a list of the segments which are the top consumers of space.
In my case they were:

  1. SYS_LOB0000006339C00038$$
  2. WRH$_SQL_BIND_METADATA
  3. WRH$_SQL_BIND_METADATA_PK
  4. SYS_LOB0000006331C00004$$
  5. WRH$_ACTIVE_SESSION_HISTORY
  6. WRH$_EVENT_HISTOGRAM_PK
  7. WRH$_EVENT_HISTOGRAM


and a lot of others, but these are the top ones to start with.
So what do you do?
For a few tables there will be orphaned rows. These are:
1. WRH$_EVENT_HISTOGRAM
2. WRH$_ACTIVE_SESSION_HISTORY
3. WRH$_SQL_BIND_METADATA

How do you find out? Run these queries:

SELECT COUNT(1) Orphaned_ASH_Rows 
FROM wrh$_active_session_history a 
WHERE NOT EXISTS 
(SELECT 1 
FROM wrm$_snapshot 
WHERE snap_id = a.snap_id 
AND dbid = a.dbid 
AND instance_number = a.instance_number 
);

SELECT COUNT(1) Orphaned_EVENT_Rows 
FROM WRH$_EVENT_HISTOGRAM a 
WHERE NOT EXISTS 
(SELECT 1 
FROM wrm$_snapshot 
WHERE snap_id = a.snap_id 
AND dbid = a.dbid 
AND instance_number = a.instance_number 
); 

For WRH$_SQL_BIND_METADATA, find out the min snap_id in the table and compare it with the min snap_id in wrm$_snapshot; if the table's minimum is older, those rows are left over.
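
For example:

select min(snap_id) from WRH$_SQL_BIND_METADATA;
select min(snap_id) from wrm$_snapshot;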



Now, how to purge?
If there are too many of these rows, use a PL/SQL block like the one below; it helps you commit in batches.

The block below is okay for WRH$_EVENT_HISTOGRAM and WRH$_ACTIVE_SESSION_HISTORY:

declare
  v_count integer;
begin
  loop
    delete from WRH$_EVENT_HISTOGRAM a  -- replace with WRH$_ACTIVE_SESSION_HISTORY as needed
    where not exists
          (select 1
             from wrm$_snapshot
            where snap_id = a.snap_id
              and dbid = a.dbid
              and instance_number = a.instance_number)
      and rownum <= 100000;
    v_count := sql%rowcount;            -- rows deleted in this batch
    commit;
    exit when v_count = 0;              -- stop once no orphaned rows remain
    dbms_output.put_line('Deleted ' || v_count || ' rows');
  end loop;
end;
/



For WRH$_SQL_BIND_METADATA use this (substituting values for the :end_snap_id and :dbid bind variables):

delete from WRH$_SQL_BIND_METADATA tab
where ( tab.snap_id <= :end_snap_id and
dbid = :dbid)
and not exists (select 1 from WRM$_BASELINE b
where (tab.dbid = b.dbid) and
(tab.snap_id >= b.start_snap_id) and
(tab.snap_id <= b.end_snap_id));
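
You can pick up the bind values like this (the dbid from v$database, and the snap_id up to which you want to purge):

select dbid from v$database;
select max(snap_id) from wrm$_snapshot;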

At this stage you should do a manual purge of all the snapshots:

exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 1, high_snap_id => 10000);

(Substitute values accordingly; the high snap_id comes from dba_hist_snapshot - purge up to the last snap you do not want to keep.)
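
To pick the range, for example:

select min(snap_id), max(snap_id) from dba_hist_snapshot;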

After this point you are ready for the re-org. For your tables you can do any of the following (a sketch follows this list):
1. shrink space cascade
2. move (see Local Index UNUSABLE for Short Period in Alter Table Move Partition (Doc ID 1374199.1))
3. rebuild indexes
4. move LOB storage
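
A sketch of what those commands look like, using a hypothetical table SYS.WRH$_EXAMPLE with index WRH$_EXAMPLE_PK and LOB column BIND_DATA (check the notes above first, especially for partitioned objects):

alter table sys.wrh$_example enable row movement;
alter table sys.wrh$_example shrink space cascade;
-- or alternatively move the table, which marks its indexes UNUSABLE
alter table sys.wrh$_example move tablespace SYSAUX;
alter index sys.wrh$_example_pk rebuild;
-- relocate the LOB segment
alter table sys.wrh$_example move lob (bind_data) store as (tablespace SYSAUX);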

Finally, depending on the situation, refer to:

1. How to Reduce SYSAUX Tablespace Occupancy Due to Fragmented TABLEs and INDEXes (Doc ID 1563921.1)
2. How to Resize a Datafile (Doc ID 1029252.6)


Queries
Free Space SYSAUX

col "Tablespace" for a22
col "Used MB" for 99,999,999
col "Free MB" for 99,999,999
col "Total MB" for 99,999,999

select df.tablespace_name "Tablespace",
totalusedspace "Used MB",
(df.totalspace - tu.totalusedspace) "Free MB",
df.totalspace "Total MB",
round(100 * ( (df.totalspace - tu.totalusedspace)/ df.totalspace))
"Pct. Free"
from
(select tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from dba_data_files where tablespace_name = 'SYSAUX'
group by tablespace_name) df,
(select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name
from dba_segments
group by tablespace_name) tu
where df.tablespace_name = tu.tablespace_name ;

Friday, May 10, 2019

Oracle Database: Recommended RPMs beyond the default ones

In this blog I list the RPMs which I always recommend on a Linux installation, in addition to what Oracle mandates.

These RPMs are helpful for operational purposes and are needed by DBAs and sysadmins when they are debugging an issue.

So below is the list I use -
1. psmisc
2. bc
3. unzip
4. zip
5. bind-utils 
6. iotop
7. traceroute
8. tcpdump
9. strace
10. lsof 
11. sysstat
12. procps-ng
13. net-tools
14. iptraf-ng


To install (if you have yum configured):

yum install psmisc bc unzip zip bind-utils iotop traceroute tcpdump strace lsof sysstat procps-ng net-tools iptraf-ng -y
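
To verify afterwards, you can query the RPM database:

rpm -q psmisc bc unzip zip bind-utils iotop traceroute tcpdump strace lsof sysstat procps-ng net-tools iptraf-ng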

Tuesday, May 7, 2019

Oracle Database: Updating Grid / Oracle Inventory Manually

In this small blog, I will show you how to update the inventory file with node names manually.

This is useful in many cases:
1. The inventory does not have node names
2. RAC node addition / deletion, where the automatic update does not happen
3. During installation, when the inventory does not yet have node names


Case 1
Update Inventory RAC Node - Add Node names
(To be run as grid installation owner)
$ /opt/oracle/product/180/grid/oui/bin/runInstaller -silent -ignoreSysPrereqs -updateNodeList ORACLE_HOME=/opt/oracle/product/180/grid "CLUSTER_NODES={Node1,Node2}" CRS=true LOCAL_NODE=Node1

This will update the inventory on all cluster nodes (2 in this case)

Case 2
Update only on one node in case of cluster installation
(To be run as grid installation owner)
$ /opt/oracle/product/180/grid/oui/bin/runInstaller -updateNodeList -noClusterEnabled ORACLE_HOME=/opt/oracle/product/180/grid "CLUSTER_NODES={Node1,Node2}" CRS=true  LOCAL_NODE=Node2
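
You can confirm the result on each node by checking the inventory file (the path assumes the default oraInventory location):

$ grep "NODE NAME" /u01/app/oraInventory/ContentsXML/inventory.xml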

Oracle Database: VOTE Disk (Closed) / Not Available

In this blog I am going to discuss an issue which I faced during maintenance on a cluster.
One of the vote disks became unavailable and was dropped from my cluster.


crsctl showed me only 2 vote disks:


$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   578049fd7bd34f51bfdf9cd7d2e02205 (/dev/oracleasm/disks/OCR_VOTE1) [OCR_VOTE]
 3. ONLINE   232d262659714f84bfbede64f662f566 (/dev/oracleasm/disks/OCR_VOTE2) [OCR_VOTE]

From v$asm_disk I check the status of the disks:

SQL> select mount_status, path, header_status, mode_status from v$asm_disk where path like '%OCR_VOTE%';


MOUNT_S PATH                                     HEADER_STATU MODE_ST
------- ---------------------------------------- ------------ -------
CLOSED  /dev/oracleasm/disks/OCR_VOTE2           MEMBER       ONLINE
CACHED  /dev/oracleasm/disks/OCR_VOTE3           MEMBER       ONLINE
CACHED  /dev/oracleasm/disks/OCR_VOTE1           MEMBER       ONLINE

Solution - I add the disk back to the diskgroup:
SQL> alter diskgroup OCR_VOTE add disk '/dev/oracleasm/disks/OCR_VOTE2' force;
(Reference -
OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE) (Doc ID 428681.1))

Then I check the status again from v$asm_disk and crsctl:

SQL> select mount_status, path, header_status, mode_status from V$asm_disk  where path like '%OCR_VOTE%' ;

MOUNT_S PATH                                     HEADER_STATU MODE_ST
------- ---------------------------------------- ------------ -------
CACHED  /dev/oracleasm/disks/OCR_VOTE3           MEMBER       ONLINE
CACHED  /dev/oracleasm/disks/OCR_VOTE2           MEMBER       ONLINE
CACHED  /dev/oracleasm/disks/OCR_VOTE1           MEMBER       ONLINE

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   578049fd7bd34f51bfdf9cd7d2e02205 (/dev/oracleasm/disks/OCR_VOTE1) [OCR_VOTE]
 2. ONLINE   65cc344fc7da4fbbbf22772629b12203 (/dev/oracleasm/disks/OCR_VOTE3) [OCR_VOTE]
 3. ONLINE   232d262659714f84bfbede64f662f566 (/dev/oracleasm/disks/OCR_VOTE2) [OCR_VOTE]

Monday, May 6, 2019

Oracle Database 19c: Grid Infrastructure / RAC Silent Installation

In this blog I will show you how to install Oracle 19c RAC in silent mode.
Remember, the key thing with RAC is that it requires a lot of prerequisites. I have already shown in my previous blog series how to get those prerequisites done.
You can find the details of the prerequisite work in the 2 parts below.

Part 1
Part 2

At this stage I will be working only on the silent installation steps.

In this blog I will be using a newer version of Oracle Linux (7.5 - Kernel 4.1.12-112.16.4.el7uek.x86_64)

This assumes
1. You have setup your linux machine
2. Made a clone and changed the hostname
3. Assigned IP addresses to Public and private interfaces
4. Unzipped grid software in /u01/app/190/grid

(unzip -qq LINUX.X64_193000_grid_home.zip -d /u01/app/190/grid)



Cluster Pre-install
1. Set up passwordless ssh between the grid users

[As grid Node 1]
$ cd /u01/app/190/grid/deinstall
$ ./sshUserSetup.sh -user grid -hosts "oelrac19c01 oelrac19c02" -noPromptPassphrase -confirm -advanced

2. Do the pre-checks. Ensure they pass successfully.
[As grid Node 1]
$ cd /u01/app/190/grid
$ ./runcluvfy.sh stage -pre crsinst -n oelrac19c01,oelrac19c02 -r 19 -osdba dba  -orainv oinstall -asm -presence local -asmgrp asmadmin -asmdev /dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3  -crshome /u01/app/190/grid -fixup -verbose

Response File
The response file is the key to a silent installation. You can find a template at /u01/app/190/grid/install/response/gridsetup.rsp; below is the one I used:
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=oinstall
oracle.install.asm.OSOPER=
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.gpnp.scanName=rac-scan
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=rac-cluster
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=oelrac19c01.novalocal:oelrac19c01-vip.novalocal,oelrac19c02.novalocal:oelrac19c02-vip.novalocal
oracle.install.crs.config.networkInterfaceList=eth0:192.168.10.0:1,eth1:192.168.30.0:5
oracle.install.crs.configureGIMR=true
oracle.install.asm.configureGIMRDataDG=false
oracle.install.crs.config.storageOption=FLEX_ASM_STORAGE
oracle.install.asm.SYSASMPassword=Oracle123
oracle.install.asm.diskGroup.name=OCR
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/disks/OCR_VOTE1,,/dev/oracleasm/disks/OCR_VOTE2,,/dev/oracleasm/disks/OCR_VOTE3,
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=Oracle123
oracle.install.asm.gimrDG.AUSize=1
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=false


3. Execute the pre-checks with the response file
(Failure of RPM DB checks is fine)
$ /u01/app/190/grid/gridSetup.sh -silent -executePrereqs -responseFile /home/grid/grid_install.rsp


4. Start with Installation.
[As grid Node 1]
$ /u01/app/190/grid/gridSetup.sh -silent   -waitForCompletion  -responseFile /home/grid/grid_install.rsp

Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
   CAUSE: Some of the optional prerequisites are not met. See logs for details. gridSetupActions2019-05-06_10-10-38AM.log
   ACTION: Identify the list of failed prerequisite checks from the log: gridSetupActions2019-05-06_10-10-38AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
 /u01/app/190/grid/install/response/grid_2019-05-06_10-10-38AM.rsp

You can find the log of this install session at:
 /tmp/GridSetupActions2019-05-06_10-10-38AM/gridSetupActions2019-05-06_10-10-38AM.log
As a root user, execute the following script(s):
        1. /u01/app/oraInventory/orainstRoot.sh
        2. /u01/app/190/grid/root.sh

Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[oelrac19c01, oelrac19c02]
Execute /u01/app/190/grid/root.sh on the following nodes:
[oelrac19c01, oelrac19c02]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

Successfully Setup Software with warning(s).
As install user, execute the following command to complete the configuration.
        /u01/app/190/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_install.rsp [-silent]


Moved the install session logs to:
 /u01/app/oraInventory/logs/GridSetupActions2019-05-06_10-10-38AM

5. Root Script Execution
[Node 1 - /u01/app/oraInventory/orainstRoot.sh]

/u01/app/oraInventory/orainstRoot.sh

Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[Node 2 - /u01/app/oraInventory/orainstRoot.sh]


/u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.

The execution of the script is complete.

[Node 1 - /u01/app/190/grid/root.sh]
Check /u01/app/190/grid/install/root_oelrac19c01.novalocal_2019-05-06_10-16-38-092662119.log for the output of root script
The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/190/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/190/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oelrac19c01/crsconfig/rootcrs_oelrac19c01_2019-05-06_10-16-52AM.log
2019/05/06 10:17:04 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/05/06 10:17:04 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/05/06 10:17:04 CLSRSC-363: User ignored prerequisites during installation
2019/05/06 10:17:04 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/05/06 10:17:06 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/05/06 10:17:07 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/05/06 10:17:07 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/05/06 10:17:07 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/05/06 10:17:26 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/05/06 10:17:31 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/05/06 10:17:32 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/05/06 10:17:43 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/05/06 10:17:43 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/05/06 10:17:49 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/05/06 10:17:49 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/05/06 10:18:51 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/05/06 10:18:57 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/05/06 10:19:03 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/05/06 10:19:08 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-190506AM101940.log for details.

2019/05/06 10:20:33 CLSRSC-482: Running command: '/u01/app/190/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk 2ff9dffc35ee4fccbf62a447d521467a.
Successful addition of voting disk c4627f0308db4f2fbf72661f27dd0237.
Successful addition of voting disk 1d28aff2edd44fe6bf4ec93c523991c5.
Successfully replaced voting disk group with +OCR.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2ff9dffc35ee4fccbf62a447d521467a (/dev/oracleasm/disks/OCR_VOTE1) [OCR]
 2. ONLINE   c4627f0308db4f2fbf72661f27dd0237 (/dev/oracleasm/disks/OCR_VOTE2) [OCR]
 3. ONLINE   1d28aff2edd44fe6bf4ec93c523991c5 (/dev/oracleasm/disks/OCR_VOTE3) [OCR]
Located 3 voting disk(s).
2019/05/06 10:21:57 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/05/06 10:22:58 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/05/06 10:22:58 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/05/06 10:24:33 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2019/05/06 10:25:02 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded


[Node 2 - /u01/app/190/grid/root.sh]

Check /u01/app/190/grid/install/root_oelrac19c02.novalocal_2019-05-06_10-25-39-630791710.log for the output of root script
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/190/grid
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/190/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/oelrac19c02/crsconfig/rootcrs_oelrac19c02_2019-05-06_10-25-52AM.log
2019/05/06 10:25:59 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/05/06 10:25:59 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/05/06 10:25:59 CLSRSC-363: User ignored prerequisites during installation
2019/05/06 10:25:59 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/05/06 10:26:00 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/05/06 10:26:00 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/05/06 10:26:00 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/05/06 10:26:01 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/05/06 10:26:02 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/05/06 10:26:02 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/05/06 10:26:07 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/05/06 10:26:07 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/05/06 10:26:08 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/05/06 10:26:09 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/05/06 10:26:24 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/05/06 10:27:09 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/05/06 10:27:11 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/05/06 10:27:13 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/05/06 10:27:14 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2019/05/06 10:27:22 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/05/06 10:28:12 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/05/06 10:28:12 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/05/06 10:28:25 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.

2019/05/06 10:28:31 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded


[Node 1 - Run Post Config] 
$ /u01/app/190/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid_install.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...
You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2019-05-06_10-30-51AM

You can find the log of this install session at:
 /u01/app/oraInventory/logs/UpdateNodeList2019-05-06_10-30-51AM.log


Successfully Configured Software.


This completes the silent installation of a 2-node Oracle 19c cluster.