
Thursday, December 20, 2018

Oracle Database - 12c/18c - Enabling/Disabling Database Options (chopt)

In this blog I cover how to enable and disable database options in Oracle Database 12c/18c using the chopt utility.


You can get usage help by running the utility without any options
(set the Oracle environment first).

Ensure that all databases and listeners running from this Oracle Home are stopped before running chopt.
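For example, on an Oracle Restart / Grid Infrastructure managed system, stopping everything might look like the following (a sketch; the database name ORCL and listener name LISTENER are placeholders for your environment):

```shell
# Hypothetical names (ORCL, LISTENER): substitute your own.
srvctl stop database -d ORCL     # stop a database managed by Oracle Restart/GI
lsnrctl stop LISTENER            # stop a listener running from this Oracle Home
```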


$ORACLE_HOME/bin/chopt

usage:

chopt <enable|disable> <option>

options:
                 oaa = Oracle Advanced Analytics
                olap = Oracle OLAP
        partitioning = Oracle Partitioning
                 rat = Oracle Real Application Testing


e.g. chopt enable rat


To disable an option, use:

$ chopt disable rat

Writing to /opt/oracle/product/180/db/install/disable_rat_2018-12-20_05-15-13AM.log...
/usr/bin/make -f /opt/oracle/product/180/db/rdbms/lib/ins_rdbms.mk rat_off ORACLE_HOME=/opt/oracle/product/180/db
/usr/bin/make -f /opt/oracle/product/180/db/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/opt/oracle/product/180/db


Note: chopt can be run repeatedly without harm, but it will not tell you whether the option is currently enabled or disabled.

$ chopt disable oaa

Writing to /opt/oracle/product/180/db/install/disable_oaa_2018-12-20_05-16-24AM.log...
/usr/bin/make -f /opt/oracle/product/180/db/rdbms/lib/ins_rdbms.mk dm_off ORACLE_HOME=/opt/oracle/product/180/db

/usr/bin/make -f /opt/oracle/product/180/db/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/opt/oracle/product/180/db


Finally, start an instance in NOMOUNT or MOUNT mode to verify that your changes have been applied:
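For example, from the shell (a sketch, assuming the environment variables for the instance are already set):

```shell
# Start the instance in MOUNT mode so V$OPTION can be queried.
sqlplus / as sysdba <<'EOF'
startup mount
EOF
```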

set lines 500 pages 500
col parameter for a50
col value for a50

select * from v$option where parameter like '%Real%' or parameter like '%Analytics%';

PARAMETER                                          VALUE                                                  CON_ID
-------------------------------------------------- -------------------------------------------------- ----------
Real Application Clusters                          FALSE                                                       0
Advanced Analytics                                 FALSE                                                       0
Real Application Testing                           FALSE                                                       0
Real Application Security                          TRUE                                                        0

Similarly, to enable an option:

$ chopt enable oaa

Reference: https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ssdbi/chopt-tool.html#GUID-057E4EFC-74ED-43B3-B03B-C83C5A5D3C7F

Tuesday, December 18, 2018

Oracle Database - Exadata and Exadata Cloud - Important MoS Doc ID's

In this blog I list all the important My Oracle Support (MoS) links for Exadata that will be useful to you.

These are very helpful links that can keep you up to date with an Exadata DB Machine on-premises, or if you are servicing one for a customer.

So keep them handy.. :)

  1. Exadata Database Machine and Exadata Storage Server Supported Versions (Doc ID 888828.1)
  2. Engineered Systems Resource Center - Doc ID 1392174.1 
  3. Information Center: Oracle Exadata Database Machine - Doc ID 1306791.1  
  4. Information Center: Using Oracle Exadata Database Machine - Doc ID 1460198.2 
  5. Information Center: Upgrading Oracle Exadata Database Machine - Doc ID 1364356.2
  6. Exadata System Software Certification - Doc ID 2075007.1 
  7. Exadata Storage Software Versions Supported by the Oracle Enterprise Manager Exadata Plug-in - Doc ID 1626579.1 
  8. Exadata Software and Hardware Support Lifecycle- Doc ID 1570460.1 
  9. Oracle Exadata Best Practices – Doc ID 757552.1 
  10. Oracle Exadata Database Machine Setup/Configuration Best Practices - Doc ID 1274318.1
  11. Exadata Critical Issues - Doc ID 1270094.1
  12. How To Collect Diagpack Diagnostic Package In Exadata - Doc ID 2226173.1
  13. Oracle Exadata Database Machine exachk or HealthCheck – Doc ID 1070954.1

    Exadata Cloud
  14. Information Center: Oracle Database Exadata Cloud - Doc ID 2334729.2
  15. Information Center: Patching and Maintaining Oracle Database Exadata Cloud - Doc ID 2334779.2
  16. Known Issues for Oracle Database Exadata Cloud Service - Doc ID 2249093.1
  17. Exadata Cloud Service Software Versions - Doc ID 2333222.1
  18. Known Issues for Oracle Database Exadata Cloud Machine - Doc ID 2252305.1
  19. Technology Cloud Services (PaaS and IaaS) Maintenance Schedule - Doc ID 2131053.2

Oracle Database - 18c Important Documentation and Patch Information

In this blog I discuss important documentation on Database 18c.
It lists the key links you need when working with Database 18c.

This list is quite handy for 18c Database Administrators when planning a new installation or upgrade, or when reading up on the new features of 18c.

The patch information at the end is a quick link for finding and downloading the patches for this release.



  1. Oracle Database 18c Documentation
  2. Oracle Database 18c Tutorials
  3. Information Center: Oracle Database 18c - Doc ID 2446877.2
  4. Oracle 18c - Complete Checklist for Upgrading to Oracle Database 18c (18.x) using DBUA - Doc ID 2418576.1
  5. Oracle Database Install 18c FAQ : Changes,New Features(RPM Based Install & Read Only Oracle Home & Other Features) - Doc ID 2438532.1
  6. Oracle Warehouse Management Cloud - Documentation for Update 18C - Doc ID 2438264.1
  7. Database 18 Release Updates and Revisions Bugs Fixed Lists - Doc ID 2369471.1
  8. How To Configure Authentication For The Centrally Managed Users In An 18c Database - Doc ID 2462012.1
  9. Updated - Database 18 Proactive Patch Information - Doc ID 2369376.1
  10. Updated - Oracle Database / Grid Infrastructure / OJVM Release Update and Release Update Revision R18 Oct 2018 Known Issues - Doc ID 2433586.1
  11. Assistant: Download Reference for Oracle Database/GI Update, Revision, PSU, SPU(CPU), Bundle Patches, Patchsets and Base Releases (Doc ID 2118136.2)


Oracle Database - Release Support Summary and Alert Status

In this blog, I am writing about important MoS Doc IDs that help you navigate and better understand the support status of the different releases, starting from Oracle 9i.

It also lists a lot of other information as you browse through. 

The key document is the Release support summary which helps you to know the supported dates of different releases.

It is a good idea to check these documents from time to time, as appropriate for the release you are working on.


  1. Oracle Server (RDBMS) Releases Support Status Summary – Doc ID 161818.1
  2. ALERT: Oracle 9i Release 2 (9.2) Support Status and Alerts – Doc ID 189908.1
  3. ALERT: Oracle 10g Release1 (10.1) Support Status and Alerts - Doc ID 263719.1
  4. ALERT: Oracle 10g Release 2 (10.2) Support Status and Alerts - Doc ID 316900.1
  5. ALERT: Oracle 11g Release 1 (11.1) Support Status and Alerts - Doc ID 454507.1
  6. ALERT: Oracle 11g Release 2 (11.2) Support Status and Alerts - Doc ID 880782.1
  7. ALERT: Oracle 12c Release1 (12.1) Support Status and Alerts - Doc ID 1565065.1
  8. ALERT: Oracle 12c Release 2 (12.2) Support Status and Alerts - Doc ID 2239821.1




Thursday, November 22, 2018

Oracle Database: RAC - OCR and OLR Backup Scripts (18c / 12c / 11g)

In this blog I present OCR and OLR backup scripts for RAC.
In addition to the automatic backups, OCR and OLR backups should also be maintained manually, if possible on a shared mount.

There are two scripts below: one backs up the OCR and the other the OLR.
The key steps common to both scripts are defining the Oracle Home, the backup location, and the retention period.
If you open the scripts, they are pretty straightforward in that respect.
You can take these scripts and use them in your environment.

OCR Backup
#!/bin/bash
#

TIMESTAMP=`date +"%d.%m.%Y_%H:%M:%S"`
BACKUP_LOC=/data/OCR



echo "Activity time "${TIMESTAMP}

if [ -d $BACKUP_LOC ]; then

echo "Backup destination $BACKUP_LOC does exist on this server.........:"

else

mkdir -p $BACKUP_LOC

#chown -R root:root $BACKUP_LOC

#chmod 755 $BACKUP_LOC


fi


get_ocr()
{
#-------------------------------------------------------------------------------------------------
#Get OCR Location
#-------------------------------------------------------------------------------------------------
echo "Running get_ocr script to display ocr location :"
for i in `${ORACLE_HOME}/bin/ocrcheck |grep -i "Device/File Name"|grep -v grep|awk '{print $4}'`
do
echo "Ocr Disk location is ......................................:${i}"
done
}
bck_ocr()
{
#--------------------------------------------------------------
#Take Backup of OCR
#--------------------------------------------------------------
echo "Running bck_ocr script to backup the ocr files :"

for i in `${ORACLE_HOME}/bin/ocrcheck |grep -i "Device/File Name"|grep -v grep|awk 'NR==1{print $4}'`
do
echo "OCR backup file will be .........:$BACKUP_LOC/backup_manual_${TIMESTAMP}.ocr"
${ORACLE_HOME}/bin/ocrconfig -export $BACKUP_LOC/backup_manual_${TIMESTAMP}.ocr

if [ $? -eq 0 ]; then
echo "OCR file successfully backed up.............................:"
else
echo " Error: While backing up the OCR file"
fi
done
}
perf_ocr_back()
{
#--------------------------------------------------------------------------------------------------
#Check CRS Status
#--------------------------------------------------------------------------------------------------
crsstatus=`$ORACLE_HOME/bin/crsctl check crs|grep -i "Cluster Ready Services is online"|tr -s '\n'`
echo $crsstatus
if [ "$crsstatus" = "CRS-4537: Cluster Ready Services is online" ];
then
echo " CRS is up and running"
get_ocr
bck_ocr
else
echo " CRS is not available; it must be up and running for backup."
fi
}

remove_old_backup()
{
#--------------------------------------------------------------------------------------------------
#Remove 90 days old backup
#--------------------------------------------------------------------------------------------------
if [ -n "$BACKUP_LOC" ] ; then
RESULTS=`find $BACKUP_LOC -type f -ctime +90|wc -l`
if [ $RESULTS = 0 ] ; then
echo " There are no old backup files to delete this time."
else
for i in `find $BACKUP_LOC -type f -ctime +90`
do
echo " Remove old backup file..................................................:${i} "
rm -f $i
done
fi
else
echo "Backup destination is not set on this server .............................: "
fi
}

#----------------------------------------------------------------------------------
# Main Function - Shell Execution Starts here.
#----------------------------------------------------------------------------------
ORACLE_HOME=/opt/oracle/product/180/grid
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin
export PATH

echo " OCR disk Information and backup ocr disk ....................................:"
perf_ocr_back
echo " Running program to remove old backups of ocr disks older than three months..............:"
remove_old_backup




OLR Backup

#!/bin/bash
#

TIMESTAMP=`date +"%d.%m.%Y_%H:%M:%S"`
BACKUP_LOC=/data/OLR # Where backup happens

#--------------------------------------------------------------------------------------------------
#Locate and check BACKUP_LOC exists or not else it will create the location
#--------------------------------------------------------------------------------------------------

echo "Activity time "${TIMESTAMP}

if [ -d $BACKUP_LOC ]; then

echo "Backup destination $BACKUP_LOC does exist on this server.........:"

else

mkdir -p $BACKUP_LOC


fi


get_OLR()
{
#-------------------------------------------------------------------------------------------------
#Get OLR Location
#-------------------------------------------------------------------------------------------------
echo "Running get_OLR script to display OLR location :"
for i in `${ORACLE_HOME}/bin/ocrcheck -local |grep -i "Device/File Name"|grep -v grep|awk '{print$4}'`
do
echo "OLR Disk location is ......................................:${i}"
done
}
bck_OLR()
{
#--------------------------------------------------------------
#Take Backup of OLR
#--------------------------------------------------------------
echo "Running bck_OLR script to backup the OLR files :"

for i in `${ORACLE_HOME}/bin/ocrcheck -local |grep -i "Device/File Name"|grep -v grep|awk '{print$4}'`
do
echo "OLR backup file will be .........:$BACKUP_LOC/backup_manual_${TIMESTAMP}.olr"
${ORACLE_HOME}/bin/ocrconfig -local -export $BACKUP_LOC/backup_manual_${TIMESTAMP}.olr

if [ $? -eq 0 ]; then
echo "OLR file successfully backed up.............................:"
else
echo " Error: While backing up the OLR file"
fi
done
}
perf_OLR_back()
{
#--------------------------------------------------------------------------------------------------
#Check CRS Status
#--------------------------------------------------------------------------------------------------
crsstatus=`$ORACLE_HOME/bin/crsctl check crs|grep -i "Cluster Ready Services is online"|tr -s '\n'`
echo $crsstatus
if [ "$crsstatus" = "CRS-4537: Cluster Ready Services is online" ];
then
echo " CRS is up and running"
get_OLR
bck_OLR
else
echo " CRS is not available; it must be up and running for backup."
fi
}

remove_old_backup()
{
#--------------------------------------------------------------------------------------------------
#Remove 90 days old backup
#--------------------------------------------------------------------------------------------------
if [ -n "$BACKUP_LOC" ] ; then
RESULTS=`find $BACKUP_LOC -type f -ctime +90|wc -l`
if [ $RESULTS = 0 ] ; then
echo " There are no old backup files to delete this time."
else
for i in `find $BACKUP_LOC -type f -ctime +90`
do
echo " Remove old backup file..................................................:${i} "
rm -f $i
done
fi
else
echo "Backup destination is not set on this server .............................: "
fi
}

#----------------------------------------------------------------------------------
# Main Function - Shell Execution Starts here
#----------------------------------------------------------------------------------
ORACLE_HOME=/opt/oracle/product/180/grid
export ORACLE_HOME
PATH=$PATH:$ORACLE_HOME/bin
export PATH

echo " OLR disk Information and backup OLR disk ....................................:"
perf_OLR_back
echo " Running program to remove old backups of OLR disks older than three months..............:"
remove_old_backup
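To run these backups automatically, both scripts can be scheduled via cron as the Grid Infrastructure owner (a sketch; the script paths and the 01:00 schedule are assumptions, adjust them for your environment):

```shell
# crontab -e entries (hypothetical paths): daily OCR and OLR backups at 01:00
0 1 * * * /home/grid/scripts/ocr_backup.sh >> /data/OCR/ocr_backup.cron.log 2>&1
0 1 * * * /home/grid/scripts/olr_backup.sh >> /data/OLR/olr_backup.cron.log 2>&1
```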

Thursday, September 27, 2018

Oracle Database: 18c dbca Segmentation Fault

In this short blog I talk about the "Segmentation fault (core dumped)" error when launching dbca.

[oracle@18cbox ~]$ dbca

Segmentation fault (core dumped)

Whenever dbca was launched, in silent or UI mode, it would not start and gave the error above.


After some investigation, it turned out that the problem was with an environment variable, which was set as below:
NLS_DATE_FORMAT="Mon  DD/MM/YYYY HH24:MI:SS"

After correcting the value of this environment variable, dbca ran fine:

NLS_DATE_FORMAT="DD/MM/YYYY HH24:MI:SS"


So if dbca is not working and you get a segmentation fault, unset all unneeded environment variables and try again.
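A minimal sketch of that cleanup, assuming the usual NLS variables (adjust the list for your environment):

```shell
# Clear NLS-related environment variables before launching dbca.
unset NLS_DATE_FORMAT NLS_LANG NLS_TIMESTAMP_FORMAT
# Verify none remain set
env | grep '^NLS_' || echo "no NLS variables set"
```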

Hope it Helps :).


Wednesday, September 26, 2018

Oracle Database 18c: Oracle Restart Silent DB Creation

In this short blog, I cover how to create a database with the dbca command line using the silent method.


export ORACLE_BASE=/u01/app/oracle # Set it as per your environment
export ORACLE_HOME=/u01/app/oracle/product/180/db
export PATH=$ORACLE_HOME/bin:$PATH

Use the dbca silent method to create the database:



#####
dbca -silent -createDatabase -gdbName ORCL -sid ORCL \
-templateName /opt/oracle/product/180/db/assistants/dbca/templates/General_Purpose.dbc \
-characterSet WE8MSWIN1252 -nationalCharacterSet AL16UTF16 \
-databaseConfigType SI \
-databaseType MULTIPURPOSE \
-asmsnmpPassword Oracle123 -sysPassword Oracle123 -systemPassword Oracle123  \
-redoLogFileSize 300 \
-sampleSchema false -storageType ASM \
-datafileDestination DG_DATA  -archiveLogDest DG_ARCH \
-enableArchive false  \
-automaticMemoryManagement false \
-initParams 'undo_retention=900,db_block_size=8K,processes=450,use_large_pages=ONLY,sga_target=2048MB,pga_aggregate_target=512M,db_create_online_log_dest_1=+DG_REDO1,db_create_online_log_dest_2=+DG_REDO2'


Prepare for db operation
10% complete
Registering database with Oracle Restart
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
53% complete
56% complete
62% complete
Completing Database Creation
68% complete
70% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/base/cfgtoollogs/dbca/ORCL.
Database Information:
Global Database Name:ORCL
System Identifier(SID):ORCL

Look at the log file "/u01/app/oracle/base/cfgtoollogs/dbca/ORCL/ORCL.log" for further details.
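Since the database was registered with Oracle Restart, a quick sanity check (a sketch; the ORCL name matches the example above) is:

```shell
srvctl config database -d ORCL   # show the registered configuration
srvctl status database -d ORCL   # confirm the instance is running
```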

Monday, September 17, 2018

Oracle Cloud (OCI) - Creating Compartments

In this blog I discuss how to create compartments in Oracle Cloud.

Compartments are an essential component and one of the key differentiators of OCI compared to other cloud vendors in the market.

They act as containers for resources within the same tenancy.

They help in separating resources, and the policies on those resources, which can be really useful in a big environment.

Now let's see how to create a compartment (and how simple it is to do).

It can be created in 4 steps, as you can see below.

Step 1 - Go to Compartments (Identity --> Compartments)


Step 2 - Click Create Compartment

Step 3 - Enter Details of the compartment and Click "Create Compartment"


Step 4 - Verify the name and details of the compartment.
Note that, as of this version of OCI, a compartment cannot be deleted, so unless you have a test account, do not end up creating junk :)
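The same can also be done from the OCI command-line interface (a sketch, assuming the OCI CLI is installed and configured; the tenancy OCID, name, and description below are placeholders):

```shell
# Hypothetical values: replace the tenancy OCID, name, and description.
oci iam compartment create \
  --compartment-id ocid1.tenancy.oc1..aaaaexample \
  --name "dev-compartment" \
  --description "Compartment for development resources"
```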


Wednesday, September 12, 2018

Oracle Database 18c: Oracle Restart DeInstallation

In this blog we are going to understand how to de-install an Oracle Restart configuration.

You must de-install all Database binaries linked to it before de-installing Oracle Restart or Grid Infrastructure.

This blog assumes that you have already de-installed the DB binaries and are de-installing the Grid Infrastructure.

Step 1
Go to the deinstall directory under the Grid home
cd /u01/app/180/grid/deinstall

Step 2
Run the de-install utility (ensure the HAS stack is up and running)
./deinstall
You will be prompted for inputs:
1. Names of listeners
2. Names and configuration of disks and diskgroups
3. Whether you want to continue the deinstall

You must confirm these at the appropriate stages of the deinstall; if HAS is up and running, they will probably be auto-detected for you, and all you have to do is validate that everything is all right.
All the above points are highlighted in the output below.


[grid@oelrestart18c deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2018-09-12_10-01-24AM/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/180/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/180/grid

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2018-09-12_10-01-24AM/logs//crsdc_2018-09-12_10-01-36-AM.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/netdc_check2018-09-12_10-01-37AM.log

Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all. [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/asmcadc_check2018-09-12_10-01-37AM.log

Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/180/grid.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +DATA,+REDO
ASM diskstring : /dev/oracleasm/disks/*
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and their contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you want to modify above information (y|n) [n]:
Database Check Configuration START

Database de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/databasedc_check2018-09-12_10-01-37AM.log

Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/180/grid
Oracle Home selected for deinstall is: /u01/app/180/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2018-09-12_10-01-24AM/logs/deinstall_deconfig2018-09-12_10-01-35-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-09-12_10-01-24AM/logs/deinstall_deconfig2018-09-12_10-01-35-AM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/databasedc_clean2018-09-12_10-01-37AM.log
ASM de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/asmcadc_clean2018-09-12_10-01-37AM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/netdc_clean2018-09-12_10-01-37AM.log

De-configuring Oracle Restart enabled listener(s): LISTENER

De-configuring listener: LISTENER
Stopping listener: LISTENER
Listener stopped successfully.
Unregistering listener: LISTENER
Listener unregistered successfully.
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

Run the following command as the root user or the administrator on node "oelrestart18c".

/u01/app/180/grid/crs/install/roothas.sh -force -deconfig -paramfile "/tmp/deinstall2018-09-12_10-01-24AM/response/deinstall_OraGI18Home1.rsp"

Press Enter after you finish running the above commands

Run the de-configuration script as root user

[root@oelrestart18c ~]# /u01/app/180/grid/crs/install/roothas.sh -force -deconfig -paramfile "/tmp/deinstall2018-09-12_10-01-24AM/response/deinstall_OraGI18Home1.rsp"
Using configuration parameter file: /tmp/deinstall2018-09-12_10-01-24AM/response/deinstall_OraGI18Home1.rsp
The log of current session can be found at:
/tmp/deinstall2018-09-12_10-01-24AM/logs/hadeconfig.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c'
CRS-2673: Attempting to stop 'ora.evmd' on 'oelrestart18c'
CRS-2677: Stop of 'ora.evmd' on 'oelrestart18c' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'oelrestart18c'
CRS-2677: Stop of 'ora.cssd' on 'oelrestart18c' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/09/12 10:03:25 CLSRSC-337: Successfully deconfigured Oracle Restart stack

Press Enter in the main window.
This completes the de-installation of Oracle Restart.
You can delete the contents of /u01/app if nothing else is installed under this directory.

Oracle Database 18c: Oracle Restart Silent Installation

In this blog I am going to install Oracle Restart using the silent method.

UI installations are not always possible for Oracle installations, so knowing how to install in silent mode is important.
This blog picks up from one of my previous blogs, which covers all the prerequisites for an 18c Restart install.

Assuming you have completed all the prerequisites, let's continue from there -

cd /u01/app/180/grid
unzip -qq LINUX.X64_180000_grid_home.zip 

Run the Pre-check first to ensure everything is fine 


[grid@oelrestart18c grid]$ ./runcluvfy.sh stage -pre crsinst -n oelrestart18c

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: oelrestart18c:/usr,oelrestart18c:/var,oelrestart18c:/etc,oelrestart18c:/sbin,oelrestart18c:/tmp ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54232 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...FAILED (PRVG-10461)
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: asmdba ...FAILED (PRVG-10460)
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Run Level ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying ASMLib installation and configuration verification. ...
Verifying '/etc/init.d/oracleasm' ...PASSED
Verifying '/dev/oracleasm' ...PASSED
Verifying '/etc/sysconfig/oracleasm' ...PASSED
Verifying ASMLib installation and configuration verification. ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/chrony.conf' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Daemon 'chronyd' ...PASSED
Verifying NTP daemon or service using UDP port 123 ...PASSED
Verifying chrony daemon is synchronized with at least one external time source ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying resolv.conf Integrity ...FAILED (PRVG-13159)
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Group Existence: asmdba ...FAILED
oelrestart18c: PRVG-10461 : Group "asmdba" selected for privileges "OSDBA" does
not exist on node "oelrestart18c".

Verifying Group Membership: asmdba ...FAILED
oelrestart18c: PRVG-10460 : User "grid" does not belong to group "asmdba"
selected for privileges "OSDBA" on node "oelrestart18c".

Verifying resolv.conf Integrity ...FAILED
oelrestart18c: PRVG-13159 : On node "oelrestart18c" the file "/etc/resolv.conf"
could not be parsed because the file is empty.


CVU operation performed: stage -pre crsinst
Date: Sep 12, 2018 5:44:03 AM
CVU home: /u01/app/180/grid/
User: grid


The next step is to prepare a response file. You can see my detailed response file below.


[grid@oelrestart18c ~]$ cat grid.rsp
###############################################################################
## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ##
## ##
## Specify values for the variables listed below to customize ##
## your installation. ##
## ##
## Each variable is associated with a comment. The comment ##
## can help to populate the variables with the appropriate ##
## values. ##
## ##
## IMPORTANT NOTE: This file contains plain text passwords and ##
## should be secured to have read permission only by oracle user ##
## or db administrator who owns this installation. ##
## ##
###############################################################################

###############################################################################
## ##
## Instructions to fill this response file ##
## To register and configure 'Grid Infrastructure for Cluster' ##
## - Fill out sections A,B,C,D,E,F and G ##
## - Fill out section G if OCR and voting disk should be placed on ASM ##
## ##
## To register and configure 'Grid Infrastructure for Standalone server' ##
## - Fill out sections A,B and G ##
## ##
## To register software for 'Grid Infrastructure' ##
## - Fill out sections A,B and D ##
## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ##
## installation option in section A ##
## ##
## To upgrade clusterware and/or Automatic storage management of earlier ##
## releases ##
## - Fill out sections A,B,C,D and H ##
## ##
## To add more nodes to the cluster ##
## - Fill out sections A and D ##
## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ##
## installation option in section A ##
## ##
###############################################################################

#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0

###############################################################################
# #
# SECTION A - BASIC #
# #
###############################################################################


#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/u01/app/oraInventory

#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster
# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server
# - UPGRADE : To register home and upgrade clusterware software of earlier release
# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster
# or stand alone server later)
# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand
# alone server later. This is only supported on Windows.)
# - CRS_ADDNODE : To add more nodes to the cluster
# - CRS_DELETE_NODE : To delete nodes from the cluster
#-------------------------------------------------------------------------------
oracle.install.option=HA_CONFIG

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/u01/app/grid

################################################################################
# #
# SECTION B - GROUPS #
# #
# The following three groups need to be assigned for all GI installations. #
# OSDBA and OSOPER can be the same or different. OSASM must be different #
# than the other two. #
# The value to be specified for OSDBA, OSOPER and OSASM group is only for #
# Unix based Operating System. #
# These groups are not required for upgrades, as they will be determined #
# from the Oracle home to upgrade. #
# #
################################################################################
#-------------------------------------------------------------------------------
# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=oinstall

#-------------------------------------------------------------------------------
# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
# The value to be specified for OSOPER group is optional.
# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=

#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=asmadmin

################################################################################
# #
# SECTION C - SCAN #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the type of SCAN configuration for the cluster
# Allowed values : LOCAL_SCAN and SHARED_SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.scanType=LOCAL_SCAN

#-------------------------------------------------------------------------------
# Applicable only if SHARED_SCAN is being configured for cluster
# Specify the path to the SCAN client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.SCANClientDataFile=

#-------------------------------------------------------------------------------
# Specify a name for SCAN
# Applicable if LOCAL_SCAN is being configured for the cluster
# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=

#-------------------------------------------------------------------------------
# Specify an unused port number for SCAN service
#-------------------------------------------------------------------------------

oracle.install.crs.config.gpnp.scanPort=


################################################################################
# #
# SECTION D - CLUSTER & GNS #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure the cluster as Extended, else
# specify 'false'
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.configureAsExtendedCluster=false


#-------------------------------------------------------------------------------
# Specify the Member Cluster Manifest file
#
# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.memberClusterManifestFile=

#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for clustername is 15 characters. The name can be
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=false

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
# , else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.autoConfigureClusterNodeVIP=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS.
# Specify the type of GNS configuration for cluster
# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS

#-------------------------------------------------------------------------------
# Applicable only if SHARED_GNS is being configured for cluster
# Specify the path to the GNS client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsClientDataFile=

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=

#-------------------------------------------------------------------------------
# Specify the list of sites - only if configuring an Extended Cluster
#-------------------------------------------------------------------------------
oracle.install.crs.config.sites=

#-------------------------------------------------------------------------------
# Specify the list of nodes that have to be configured to be part of the cluster.
#
# The list should a comma-separated list of tuples. Each tuple should be a
# colon-separated string that contains
# - 1 field if you have chosen CRS_SWONLY as installation option, or
# - 1 field if configuring an Application Cluster, or
# - 3 fields if configuring a Flex Cluster
# - 3 fields if adding more nodes to the configured cluster, or
# - 4 fields if configuring an Extended Cluster
#
# The fields should be ordered as follows:
# 1. The first field should be the public node name.
# 2. The second field should be the virtual host name
# (Should be specified as AUTO if you have chosen 'auto configure for VIP'
# i.e. autoConfigureClusterNodeVIP=true)
# 3. The third field indicates the role of node (HUB,LEAF). This has to
# be provided only if Flex Cluster is being configured.
# For Extended Cluster only HUB should be specified for all nodes
# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option
# The 2nd and 3rd fields are not applicable if configuring an Application Cluster
#
# Examples
# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2
# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
# You can specify a range of nodes in the tuple using colon separated fields of format
# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=

#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetAddress:InterfaceType
# where InterfaceType can be either "1", "2", "3", "4", or "5"
# InterfaceType stand for the following values
# - 1 : PUBLIC
# - 2 : PRIVATE
# - 3 : DO NOT USE
# - 4 : ASM
# - 5 : ASM & PRIVATE
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=

#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup to store GIMR data.
# Specify 'true' if you would like to separate GIMR data with clusterware data,
# else specify 'false'
# Value should be 'true' for DOMAIN cluster configurations
# Value can be true/false for STANDALONE cluster configurations.
#------------------------------------------------------------------------------
oracle.install.asm.configureGIMRDataDG=false

################################################################################
# #
# SECTION E - STORAGE #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
# - FLEX_ASM_STORAGE
# - CLIENT_ASM_STORAGE
#
# Applicable only for MEMBERDB cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=
################################################################################
# #
# SECTION F - IPMI #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
################################################################################
# #
# SECTION G - ASM #
# #
################################################################################

#-------------------------------------------------------------------------------
# ASM Storage Type
# Allowed values are : ASM and ASM_ON_NAS
# ASM_ON_NAS applicable only if
# oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
oracle.install.asm.storageOption=ASM

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing OCR/VDSK
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store OCR/VDSK files
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.ocrLocation=
#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup on NAS to store GIMR data
# Specify 'true' if you would like to separate GIMR data with clusterware data, else
# specify 'false'
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#------------------------------------------------------------------------------
oracle.install.asmOnNAS.configureGIMRDataDG=false

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing GIMR data
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store the GIMR database
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
# and oracle.install.asmOnNAS.configureGIMRDataDG=true
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.gimrLocation=

#-------------------------------------------------------------------------------
# Password for SYS user of Oracle ASM
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=Oracle123

#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=DATA

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
# - NORMAL
# - HIGH
# - EXTERNAL
# - FLEX
# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=NORMAL

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
# - 1
# - 2
# - 4
# - 8
# - 16
# Example: oracle.install.asm.diskGroup.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.AUSize=4

#-------------------------------------------------------------------------------
# Failure Groups for the disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create a ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
# Example:
# For Unix based Operating System:
# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
# For Windows based Operating System:
# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/disks/OCR_VOTE1,,/dev/oracleasm/disks/OCR_VOTE2,,/dev/oracleasm/disks/OCR_VOTE3,

#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
# For Unix based Operating System:
# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
# For Windows based Operating System:
# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.quorumFailureGroupNames=
#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used create a ASM DiskGroup
#
# Example:
# For Unix based Operating System:
# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
# For Windows based Operating System:
# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*

#-------------------------------------------------------------------------------
# Password for ASMSNMP account
# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=Oracle123

#-------------------------------------------------------------------------------
# GIMR Storage data ASM DiskGroup
# Applicable only when
# oracle.install.asm.configureGIMRDataDG=true
# Example: oracle.install.asm.GIMRDG.name=MGMT
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.name=

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
# - NORMAL
# - HIGH
# - EXTERNAL
# - FLEX
# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.redundancy=

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
# - 1
# - 2
# - 4
# - 8
# - 16
# Example: oracle.install.asm.gimrDG.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.AUSize=1

#-------------------------------------------------------------------------------
# Failure Groups for the GIMR storage data ASM disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create GIMR data ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL
# Example:
# For Unix based Operating System:
# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
# For Windows based Operating System:
# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disksWithFailureGroupNames=

#-------------------------------------------------------------------------------
# List of disks to create GIMR data ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
# For Unix based Operating System:
# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2
# For Windows based Operating System:
# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disks=

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.quorumFailureGroupNames=

#-------------------------------------------------------------------------------
# Configure AFD - ASM Filter Driver
# Applicable only for FLEX_ASM_STORAGE option
# Specify 'true' if you want to configure AFD, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.asm.configureAFD=false
#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

################################################################################
# #
# SECTION H - UPGRADE #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify whether to ignore down nodes during upgrade operation.
# Value should be 'true' to ignore down nodes otherwise specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.ignoreDownNodes=false
################################################################################
# #
# MANAGEMENT OPTIONS #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify the management option to use for managing Oracle Grid Infrastructure
# Options are:
# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
#-------------------------------------------------------------------------------
oracle.install.config.managementOption=NONE

#-------------------------------------------------------------------------------
# Specify the OMS host to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsHost=

#-------------------------------------------------------------------------------
# Specify the OMS port to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsPort=0

#-------------------------------------------------------------------------------
# Specify the EM Admin user name to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminUser=

#-------------------------------------------------------------------------------
# Specify the EM Admin password to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminPassword=
################################################################################
# #
# Root script execution configuration #
# #
################################################################################

#-------------------------------------------------------------------------------------------------------
# Specify the root script execution mode.
#
# - true : To execute the root script automatically by using the appropriate configuration methods.
# - false : To execute the root script manually.
#
# If this option is selected, password should be specified on the console.
#-------------------------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.executeRootScript=false

#--------------------------------------------------------------------------------------
# Specify the configuration method to be used for automatic root script execution.
#
# Following are the possible choices:
# - ROOT
# - SUDO
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.configMethod=
#--------------------------------------------------------------------------------------
# Specify the absolute path of the sudo program.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoPath=

#--------------------------------------------------------------------------------------
# Specify the name of the user who is in the sudoers list.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoUserName=
#--------------------------------------------------------------------------------------
# Specify the nodes batch map.
#
# This should be a comma separated list of node:batch pairs.
# During upgrade, you can sequence the automatic execution of root scripts
# by pooling the nodes into batches.
# A maximum of three batches can be specified.
# Installer will execute the root scripts on all the nodes in one batch before
# proceeding to next batch.
# Root script execution on the local node must be in Batch 1.
# Only one type of node role can be used for each batch.
# Root script execution should be done first in all HUB nodes and then, when
# existent, in all the LEAF nodes.
#
# Examples:
# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3
# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2
# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3
#
# Applicable only for UPGRADE install option.
#--------------------------------------------------------------------------------------
oracle.install.crs.config.batchinfo=
################################################################################
# #
# APPLICATION CLUSTER OPTIONS #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify the Virtual hostname to configure virtual access for your Application
# The value to be specified for Virtual hostname is optional.
#-------------------------------------------------------------------------------
oracle.install.crs.app.applicationAddress=
#################################################################################
# #
# DELETE NODE OPTIONS #
# #
#################################################################################

#--------------------------------------------------------------------------------
# Specify the node names to delete nodes from cluster.
# Delete node will be performed only for the remote nodes from the cluster.
#--------------------------------------------------------------------------------
oracle.install.crs.deleteNode.nodes=

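Since the template is almost entirely comments, it helps to list only the active settings before going further. The grep below does that; it is demonstrated on a small throwaway sample so it runs anywhere, but in practice you would point it at /home/grid/grid.rsp.

```shell
# Build a tiny sample file (stand-in for /home/grid/grid.rsp).
cat > /tmp/sample.rsp <<'EOF'
## header comment ##
oracle.install.option=HA_CONFIG
ORACLE_BASE=/u01/app/grid

# another comment
oracle.install.asm.diskGroup.name=DATA
EOF

# Drop comment and blank lines, leaving only the effective key=value settings.
grep -Ev '^[[:space:]]*(#|$)' /tmp/sample.rsp
# prints:
#   oracle.install.option=HA_CONFIG
#   ORACLE_BASE=/u01/app/grid
#   oracle.install.asm.diskGroup.name=DATA
```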
However, out of this long template, the key parameters that actually need to be edited for this installation are the ones below.


INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=HA_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=oinstall
oracle.install.asm.OSOPER=
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.asm.SYSASMPassword=Oracle123
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.FailureGroups=
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/disks/OCR_VOTE1,,/dev/oracleasm/disks/OCR_VOTE2,,/dev/oracleasm/disks/OCR_VOTE3,
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3
oracle.install.asm.diskGroup.quorumFailureGroupNames=
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=Oracle123
oracle.install.crs.rootconfig.executeRootScript=false
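Before launching the installer, it can save a failed run to sanity-check that the key parameters above are actually set in your copy of the response file. The sketch below is illustrative only: the path /tmp/grid.rsp and the sample fragment it writes are stand-ins, so point RSP at your own copy of the full template shipped under the grid home's install/response directory.

```shell
# Minimal sketch: verify the key response-file parameters are non-empty
# before a silent install. RSP and the fragment below are assumptions.
RSP="${RSP:-/tmp/grid.rsp}"

# Sample fragment for illustration; your real file is the full template.
cat > "$RSP" <<'EOF'
INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=HA_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1
EOF

missing=0
for key in INVENTORY_LOCATION oracle.install.option ORACLE_BASE \
           oracle.install.asm.diskGroup.name oracle.install.asm.diskGroup.disks; do
  # Extract everything after the first '=' for this key.
  val=$(grep "^${key}=" "$RSP" | cut -d= -f2-)
  if [ -z "$val" ]; then
    echo "MISSING: $key"
    missing=1
  else
    echo "OK: $key"
  fi
done
```

A silent run only reports a bad or empty mandatory parameter after the wizard has already launched, so catching it up front with a quick loop like this saves a retry.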



Now run the installer in silent mode.
[grid@oelrestart18c grid]$ ./gridSetup.sh -silent -responseFile /home/grid/grid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
/u01/app/180/grid/install/response/grid_2018-09-12_05-52-12AM.rsp

You can find the log of this install session at:
/tmp/GridSetupActions2018-09-12_05-52-12AM/gridSetupActions2018-09-12_05-52-12AM.log

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/180/grid/root.sh

Execute /u01/app/180/grid/root.sh on the following nodes:
[oelrestart18c]



Successfully Setup Software.
As install user, execute the following command to complete the configuration.
/u01/app/180/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid.rsp [-silent]


Moved the install session logs to:
/u01/app/oraInventory/logs/GridSetupActions2018-09-12_05-52-12AM

Now run the root scripts listed in the installer output, as the root user.

[root@oelrestart18c rpm]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@oelrestart18c rpm]# /u01/app/180/grid/root.sh
Check /u01/app/180/grid/install/root_oelrestart18c.novalocal_2018-09-12_05-56-15-243031217.log for the output of root script

[grid@oelrestart18c grid]$ cat /u01/app/180/grid/install/root_oelrestart18c.novalocal_2018-09-12_05-56-15-243031217.log
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/180/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/180/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/oelrestart18c/crsconfig/roothas_2018-09-12_05-56-15AM.log
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oelrestart18c successfully pinned.
2018/09/12 05:56:24 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c'
CRS-2673: Attempting to stop 'ora.evmd' on 'oelrestart18c'
CRS-2677: Stop of 'ora.evmd' on 'oelrestart18c' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

oelrestart18c 2018/09/12 05:57:24 /u01/app/180/grid/cdata/oelrestart18c/backup_20180912_055724.olr 70732493
2018/09/12 05:57:24 CLSRSC-327: Successfully configured Oracle Restart for a standalone server


Finally, run the configuration tools step to finalize the installation, as the Oracle software owner (the grid user here).

[grid@oelrestart18c grid]$ /u01/app/180/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2018-09-12_09-56-00AM

Successfully Configured Software.


This completes the silent installation of Oracle Restart (Grid Infrastructure for a standalone server).
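Once the configuration tools step finishes, it is worth confirming that the Oracle Restart stack is actually up. The commands below are standard crsctl/srvctl checks; run them as the grid user, and adjust the home path if yours differs from this install. Output varies by environment, so none is shown here.

```shell
# Post-install sanity checks for Oracle Restart (run as the grid user).
export ORACLE_HOME=/u01/app/180/grid   # grid home from this install
export PATH=$ORACLE_HOME/bin:$PATH

crsctl check has        # Oracle High Availability Services status
srvctl status asm       # ASM instance status
crsctl stat res -t      # all resources managed by Oracle Restart
```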