
Thursday, September 27, 2018

Oracle Database: 18c dbca Segmentation Fault

In this short blog I talk about the "Segmentation fault (core dumped)" error when launching dbca.

[oracle@18cbox ~]$ dbca

Segmentation fault (core dumped)

Whenever dbca was launched - in silent or in UI mode - it would not start and would throw the error above.


After some diligence, the problem was traced to an environment variable, which was set as below -
NLS_DATE_FORMAT="Mon  DD/MM/YYYY HH24:MI:SS"

On fixing the value of this environment variable, dbca ran fine:

NLS_DATE_FORMAT="DD/MM/YYYY HH24:MI:SS"


So if dbca is not working and you get a segmentation fault, unset all non-required environment variables and try again.
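
A quick way to spot and clear such variables before retrying (a minimal sketch; adjust to your environment):

env | grep '^NLS_'      # list any NLS-related variables set in the session
unset NLS_DATE_FORMAT   # clear the offending variable
dbca                    # retry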

Hope it Helps :).


Wednesday, September 26, 2018

Oracle Database 18c: Oracle Restart Silent DB Creation

In this short blog, I am going to cover how to create a database from the dbca command line using the silent method.


export ORACLE_BASE=/u01/app/oracle # Set it as per your environment
export ORACLE_HOME=/u01/app/oracle/product/180/db
export PATH=$ORACLE_HOME/bin:$PATH

Use the dbca silent method to create the database:



#####
dbca -silent -createDatabase -gdbName ORCL -sid ORCL \
-templateName $ORACLE_HOME/assistants/dbca/templates/General_Purpose.dbc \
-characterSet WE8MSWIN1252 -nationalCharacterSet AL16UTF16 \
-databaseConfigType SI \
-databaseType MULTIPURPOSE \
-asmsnmpPassword Oracle123 -sysPassword Oracle123 -systemPassword Oracle123 \
-redoLogFileSize 300 \
-sampleSchema false -storageType ASM \
-datafileDestination DG_DATA -archiveLogDest DG_ARCH \
-enableArchive false \
-automaticMemoryManagement false \
-initParams 'undo_retention=900,db_block_size=8K,processes=450,use_large_pages=ONLY,sga_target=2048MB,pga_aggregate_target=512M,db_create_online_log_dest_1=+DG_REDO1,db_create_online_log_dest_2=+DG_REDO2'


Prepare for db operation
10% complete
Registering database with Oracle Restart
14% complete
Copying database files
43% complete
Creating and starting Oracle instance
45% complete
49% complete
53% complete
56% complete
62% complete
Completing Database Creation
68% complete
70% complete
71% complete
Executing Post Configuration Actions
100% complete
Database creation complete. For details check the logfiles at:
 /u01/app/oracle/base/cfgtoollogs/dbca/ORCL.
Database Information:
Global Database Name:ORCL
System Identifier(SID):ORCL

Look at the log file "/u01/app/oracle/base/cfgtoollogs/dbca/ORCL/ORCL.log" for further details.
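
Once creation completes, a couple of optional sanity checks (illustrative; assumes the environment variables set at the top of this post):

srvctl status database -db ORCL    # confirm Oracle Restart now manages the new database
sqlplus -s / as sysdba <<'EOF'
select name, open_mode from v$database;
EOF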

Monday, September 17, 2018

Oracle Cloud (OCI) - Creating Compartments

In this blog I discuss how to create compartments in Oracle Cloud.

Compartments are an essential component and one of the key differentiators of OCI compared to other cloud vendors in the market.

They act as containers for resources within the same tenancy.

They help in separating resources, and the policies on those resources, which can be a really useful feature in a big environment.

Now let's see how to create a compartment (and how simple it indeed is).

A compartment can be created in 4 simple steps, as you can see below.

Step 1 - Go to Compartments (Identity --> Compartments)


Step 2 - Click Create Compartment

Step 3 - Enter Details of the compartment and Click "Create Compartment"


Step 4 - Verify the name and details of the compartment.
Note that a compartment cannot be deleted in this version of OCI, so unless you have a test account, do not end up creating junk :)
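
The same can also be done from the OCI CLI for scripted environments. A minimal sketch, assuming the CLI is installed and configured (via 'oci setup config'); the tenancy OCID below is a placeholder:

oci iam compartment create \
  --compartment-id "ocid1.tenancy.oc1..exampleuniqueID" \
  --name "DemoCompartment" \
  --description "Compartment for demo resources"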


Wednesday, September 12, 2018

Oracle Database 18c: Oracle Restart DeInstallation

In this blog we are going to understand how to de-install an Oracle Restart configuration.

You must de-install all linked database binaries before de-installing Oracle Restart or Grid Infrastructure.

This blog assumes that you have already de-installed the DB binaries and are now de-installing the Grid Infrastructure.
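
Before starting, it is worth confirming that the HAS stack is up, since a running stack lets deinstall auto-detect the configured resources. A minimal check (illustrative):

crsctl check has    # expect: CRS-4638: Oracle High Availability Services is online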

Step 1
Go to the deinstall directory under the Grid home
cd /u01/app/180/grid/deinstall

Step 2
Run the de-install utility (ensure HAS - Oracle High Availability Services - is up and running)
./deinstall
You will be prompted for inputs:
1. Names of the listeners
2. Names and configuration of the disks and diskgroups
3. Whether you want to continue the deinstall

You must confirm these at the appropriate stages of the deinstall. If HAS is up and running, they will most likely be auto-selected for you, and all you have to do is validate that everything is all right.
All of the above points are highlighted in the output below.


[grid@oelrestart18c deinstall]$ ./deinstall
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /tmp/deinstall2018-09-12_10-01-24AM/logs/

############ ORACLE DECONFIG TOOL START ############


######################### DECONFIG CHECK OPERATION START #########################
## [START] Install check configuration ##


Checking for existence of the Oracle home location /u01/app/180/grid
Oracle Home type selected for deinstall is: Oracle Grid Infrastructure for a Standalone Server
Oracle Base selected for deinstall is: /u01/app/grid
Checking for existence of central inventory location /u01/app/oraInventory
Checking for existence of the Oracle Grid Infrastructure home /u01/app/180/grid

## [END] Install check configuration ##

Traces log file: /tmp/deinstall2018-09-12_10-01-24AM/logs//crsdc_2018-09-12_10-01-36-AM.log

Network Configuration check config START

Network de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/netdc_check2018-09-12_10-01-37AM.log

Specify all Oracle Restart enabled listeners that are to be de-configured. Enter .(dot) to deselect all. [LISTENER]:

Network Configuration check config END

Asm Check Configuration START

ASM de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/asmcadc_check2018-09-12_10-01-37AM.log

Automatic Storage Management (ASM) instance is detected in this Oracle home /u01/app/180/grid.
ASM Diagnostic Destination : /u01/app/grid
ASM Diskgroups : +DATA,+REDO
ASM diskstring : /dev/oracleasm/disks/*
Diskgroups will be dropped
De-configuring ASM will drop all the diskgroups and their contents at cleanup time. This will affect all of the databases and ACFS that use this ASM instance(s).
If you want to retain the existing diskgroups or if any of the information detected is incorrect, you can modify by entering 'y'. Do you want to modify above information (y|n) [n]:
Database Check Configuration START

Database de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/databasedc_check2018-09-12_10-01-37AM.log

Database Check Configuration END

######################### DECONFIG CHECK OPERATION END #########################


####################### DECONFIG CHECK OPERATION SUMMARY #######################
Oracle Grid Infrastructure Home is: /u01/app/180/grid
Oracle Home selected for deinstall is: /u01/app/180/grid
Inventory Location where the Oracle home registered is: /u01/app/oraInventory
Following Oracle Restart enabled listener(s) will be de-configured: LISTENER
ASM instance will be de-configured from this Oracle home
Do you want to continue (y - yes, n - no)? [n]: y
A log of this session will be written to: '/tmp/deinstall2018-09-12_10-01-24AM/logs/deinstall_deconfig2018-09-12_10-01-35-AM.out'
Any error messages from this session will be written to: '/tmp/deinstall2018-09-12_10-01-24AM/logs/deinstall_deconfig2018-09-12_10-01-35-AM.err'

######################## DECONFIG CLEAN OPERATION START ########################
Database de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/databasedc_clean2018-09-12_10-01-37AM.log
ASM de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/asmcadc_clean2018-09-12_10-01-37AM.log
ASM Clean Configuration START
ASM Clean Configuration END

Network Configuration clean config START

Network de-configuration trace file location: /tmp/deinstall2018-09-12_10-01-24AM/logs/netdc_clean2018-09-12_10-01-37AM.log

De-configuring Oracle Restart enabled listener(s): LISTENER

De-configuring listener: LISTENER
Stopping listener: LISTENER
Listener stopped successfully.
Unregistering listener: LISTENER
Listener unregistered successfully.
Deleting listener: LISTENER
Listener deleted successfully.
Listener de-configured successfully.

De-configuring Naming Methods configuration file...
Naming Methods configuration file de-configured successfully.

De-configuring backup files...
Backup files de-configured successfully.

The network configuration has been cleaned up successfully.

Network Configuration clean config END


---------------------------------------->

Run the following command as the root user or the administrator on node "oelrestart18c".

/u01/app/180/grid/crs/install/roothas.sh -force -deconfig -paramfile "/tmp/deinstall2018-09-12_10-01-24AM/response/deinstall_OraGI18Home1.rsp"

Press Enter after you finish running the above commands

Run the de-configuration script as root user

[root@oelrestart18c ~]# /u01/app/180/grid/crs/install/roothas.sh -force -deconfig -paramfile "/tmp/deinstall2018-09-12_10-01-24AM/response/deinstall_OraGI18Home1.rsp"
Using configuration parameter file: /tmp/deinstall2018-09-12_10-01-24AM/response/deinstall_OraGI18Home1.rsp
The log of current session can be found at:
/tmp/deinstall2018-09-12_10-01-24AM/logs/hadeconfig.log
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c'
CRS-2673: Attempting to stop 'ora.evmd' on 'oelrestart18c'
CRS-2677: Stop of 'ora.evmd' on 'oelrestart18c' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'oelrestart18c'
CRS-2677: Stop of 'ora.cssd' on 'oelrestart18c' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2018/09/12 10:03:25 CLSRSC-337: Successfully deconfigured Oracle Restart stack

Press Enter on the main window.
This completes the de-installation of Oracle Restart.
You can delete the contents of /u01/app if nothing else is installed under this directory.
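
If you go down that path, something like the below works (a sketch only - verify the paths on your own system before deleting anything):

ls /u01/app                          # confirm nothing else lives under this directory
rm -rf /u01/app/180 /u01/app/grid    # remove the leftover Grid home and Grid base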

Oracle Database 18c: Oracle Restart Silent Installation

In this blog I am going to install Oracle Restart using the silent method.

A UI is not always available for Oracle installations, so knowing how to install in silent mode is an important skill.
This blog picks up from one of my previous blogs, where I covered all the pre-requisites for the 18c Restart install.

Assuming you have completed all the pre-reqs, let's continue from there - 

cd /u01/app/180/grid
unzip -qq LINUX.X64_180000_grid_home.zip 

Run the Pre-check first to ensure everything is fine 


[grid@oelrestart18c grid]$ ./runcluvfy.sh stage -pre crsinst -n oelrestart18c

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: oelrestart18c:/usr,oelrestart18c:/var,oelrestart18c:/etc,oelrestart18c:/sbin,oelrestart18c:/tmp ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54232 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...FAILED (PRVG-10461)
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: asmdba ...FAILED (PRVG-10460)
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Run Level ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Package: cvuqdisk-1.0.10-1 ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying ASMLib installation and configuration verification. ...
Verifying '/etc/init.d/oracleasm' ...PASSED
Verifying '/dev/oracleasm' ...PASSED
Verifying '/etc/sysconfig/oracleasm' ...PASSED
Verifying ASMLib installation and configuration verification. ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/chrony.conf' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Daemon 'chronyd' ...PASSED
Verifying NTP daemon or service using UDP port 123 ...PASSED
Verifying chrony daemon is synchronized with at least one external time source ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying resolv.conf Integrity ...FAILED (PRVG-13159)
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Group Existence: asmdba ...FAILED
oelrestart18c: PRVG-10461 : Group "asmdba" selected for privileges "OSDBA" does
not exist on node "oelrestart18c".

Verifying Group Membership: asmdba ...FAILED
oelrestart18c: PRVG-10460 : User "grid" does not belong to group "asmdba"
selected for privileges "OSDBA" on node "oelrestart18c".

Verifying resolv.conf Integrity ...FAILED
oelrestart18c: PRVG-13159 : On node "oelrestart18c" the file "/etc/resolv.conf"
could not be parsed because the file is empty.


CVU operation performed: stage -pre crsinst
Date: Sep 12, 2018 5:44:03 AM
CVU home: /u01/app/180/grid/
User: grid
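
The asmdba failures are expected here, since the response file below maps the OSDBA group to oinstall; the empty resolv.conf warning can also be ignored for a standalone server. If you would rather clear them, a hedged sketch of the remediation, run as root (the nameserver address is a placeholder):

groupadd asmdba                                    # create the missing asmdba group
usermod -aG asmdba grid                            # add the grid user to it
echo "nameserver 192.0.2.1" >> /etc/resolv.conf    # give resolv.conf at least one entry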


The next step is to prepare a response file. You can see my detailed response file below.


[grid@oelrestart18c ~]$ cat grid.rsp
###############################################################################
## Copyright(c) Oracle Corporation 1998,2018. All rights reserved. ##
## ##
## Specify values for the variables listed below to customize ##
## your installation. ##
## ##
## Each variable is associated with a comment. The comment ##
## can help to populate the variables with the appropriate ##
## values. ##
## ##
## IMPORTANT NOTE: This file contains plain text passwords and ##
## should be secured to have read permission only by oracle user ##
## or db administrator who owns this installation. ##
## ##
###############################################################################

###############################################################################
## ##
## Instructions to fill this response file ##
## To register and configure 'Grid Infrastructure for Cluster' ##
## - Fill out sections A,B,C,D,E,F and G ##
## - Fill out section G if OCR and voting disk should be placed on ASM ##
## ##
## To register and configure 'Grid Infrastructure for Standalone server' ##
## - Fill out sections A,B and G ##
## ##
## To register software for 'Grid Infrastructure' ##
## - Fill out sections A,B and D ##
## - Provide the cluster nodes in section D when choosing CRS_SWONLY as ##
## installation option in section A ##
## ##
## To upgrade clusterware and/or Automatic storage management of earlier ##
## releases ##
## - Fill out sections A,B,C,D and H ##
## ##
## To add more nodes to the cluster ##
## - Fill out sections A and D ##
## - Provide the cluster nodes in section D when choosing CRS_ADDNODE as ##
## installation option in section A ##
## ##
###############################################################################

#------------------------------------------------------------------------------
# Do not change the following system generated value.
#------------------------------------------------------------------------------
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0

###############################################################################
# #
# SECTION A - BASIC #
# #
###############################################################################


#-------------------------------------------------------------------------------
# Specify the location which holds the inventory files.
# This is an optional parameter if installing on
# Windows based Operating System.
#-------------------------------------------------------------------------------
INVENTORY_LOCATION=/u01/app/oraInventory

#-------------------------------------------------------------------------------
# Specify the installation option.
# Allowed values: CRS_CONFIG or HA_CONFIG or UPGRADE or CRS_SWONLY or HA_SWONLY
# - CRS_CONFIG : To register home and configure Grid Infrastructure for cluster
# - HA_CONFIG : To register home and configure Grid Infrastructure for stand alone server
# - UPGRADE : To register home and upgrade clusterware software of earlier release
# - CRS_SWONLY : To register Grid Infrastructure Software home (can be configured for cluster
# or stand alone server later)
# - HA_SWONLY : To register Grid Infrastructure Software home (can be configured for stand
# alone server later. This is only supported on Windows.)
# - CRS_ADDNODE : To add more nodes to the cluster
# - CRS_DELETE_NODE : To delete nodes from the cluster
#-------------------------------------------------------------------------------
oracle.install.option=HA_CONFIG

#-------------------------------------------------------------------------------
# Specify the complete path of the Oracle Base.
#-------------------------------------------------------------------------------
ORACLE_BASE=/u01/app/grid

################################################################################
# #
# SECTION B - GROUPS #
# #
# The following three groups need to be assigned for all GI installations. #
# OSDBA and OSOPER can be the same or different. OSASM must be different #
# than the other two. #
# The value to be specified for OSDBA, OSOPER and OSASM group is only for #
# Unix based Operating System. #
# These groups are not required for upgrades, as they will be determined #
# from the Oracle home to upgrade. #
# #
################################################################################
#-------------------------------------------------------------------------------
# The OSDBA_GROUP is the OS group which is to be granted SYSDBA privileges.
#-------------------------------------------------------------------------------
oracle.install.asm.OSDBA=oinstall

#-------------------------------------------------------------------------------
# The OSOPER_GROUP is the OS group which is to be granted SYSOPER privileges.
# The value to be specified for OSOPER group is optional.
# Value should not be provided if configuring Client Cluster - i.e. storageOption=CLIENT_ASM_STORAGE.
#-------------------------------------------------------------------------------
oracle.install.asm.OSOPER=

#-------------------------------------------------------------------------------
# The OSASM_GROUP is the OS group which is to be granted SYSASM privileges. This
# must be different than the previous two.
#-------------------------------------------------------------------------------
oracle.install.asm.OSASM=asmadmin

################################################################################
# #
# SECTION C - SCAN #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the type of SCAN configuration for the cluster
# Allowed values : LOCAL_SCAN and SHARED_SCAN
#-------------------------------------------------------------------------------
oracle.install.crs.config.scanType=LOCAL_SCAN

#-------------------------------------------------------------------------------
# Applicable only if SHARED_SCAN is being configured for cluster
# Specify the path to the SCAN client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.SCANClientDataFile=

#-------------------------------------------------------------------------------
# Specify a name for SCAN
# Applicable if LOCAL_SCAN is being configured for the cluster
# If you choose to configure the cluster with GNS with Auto assigned Node VIPs(DHCP),then the scanName should be specified in the format of 'SCAN name.Cluster name.GNS sub-domain'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.scanName=

#-------------------------------------------------------------------------------
# Specify a unused port number for SCAN service
#-------------------------------------------------------------------------------

oracle.install.crs.config.gpnp.scanPort=


################################################################################
# #
# SECTION D - CLUSTER & GNS #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify the required cluster configuration
# Allowed values: STANDALONE, DOMAIN, MEMBERDB, MEMBERAPP
#-------------------------------------------------------------------------------
oracle.install.crs.config.ClusterConfiguration=STANDALONE

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure the cluster as Extended, else
# specify 'false'
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.configureAsExtendedCluster=false


#-------------------------------------------------------------------------------
# Specify the Member Cluster Manifest file
#
# Applicable only for MEMBERDB and MEMBERAPP cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.memberClusterManifestFile=

#-------------------------------------------------------------------------------
# Specify a name for the Cluster you are creating.
#
# The maximum length allowed for clustername is 15 characters. The name can be
# any combination of lower and uppercase alphabets (A - Z), (0 - 9), hyphen(-)
# and underscore(_).
#
# Applicable only for STANDALONE and DOMAIN cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterName=

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE, DOMAIN, MEMBERDB cluster configuration.
# Specify 'true' if you would like to configure Grid Naming Service(GNS), else
# specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.configureGNS=false

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to configure GNS.
# Specify 'true' if you would like to assign SCAN name VIP and Node VIPs by DHCP
# , else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.autoConfigureClusterNodeVIP=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure GNS.
# Specify the type of GNS configuration for cluster
# Allowed values are: CREATE_NEW_GNS and USE_SHARED_GNS
# Only USE_SHARED_GNS value is allowed for MEMBERDB cluster configuration.
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS

#-------------------------------------------------------------------------------
# Applicable only if SHARED_GNS is being configured for cluster
# Specify the path to the GNS client data file
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsClientDataFile=

#-------------------------------------------------------------------------------
# Applicable only for STANDALONE and DOMAIN cluster configuration if you choose to
# configure GNS for this cluster oracle.install.crs.config.gpnp.gnsOption=CREATE_NEW_GNS
# Specify the GNS subdomain and an unused virtual hostname for GNS service
#-------------------------------------------------------------------------------
oracle.install.crs.config.gpnp.gnsSubDomain=
oracle.install.crs.config.gpnp.gnsVIPAddress=

#-------------------------------------------------------------------------------
# Specify the list of sites - only if configuring an Extended Cluster
#-------------------------------------------------------------------------------
oracle.install.crs.config.sites=

#-------------------------------------------------------------------------------
# Specify the list of nodes that have to be configured to be part of the cluster.
#
# The list should be a comma-separated list of tuples. Each tuple should be a
# colon-separated string that contains
# - 1 field if you have chosen CRS_SWONLY as installation option, or
# - 1 field if configuring an Application Cluster, or
# - 3 fields if configuring a Flex Cluster
# - 3 fields if adding more nodes to the configured cluster, or
# - 4 fields if configuring an Extended Cluster
#
# The fields should be ordered as follows:
# 1. The first field should be the public node name.
# 2. The second field should be the virtual host name
# (Should be specified as AUTO if you have chosen 'auto configure for VIP'
# i.e. autoConfigureClusterNodeVIP=true)
# 3. The third field indicates the role of node (HUB,LEAF). This has to
# be provided only if Flex Cluster is being configured.
# For Extended Cluster only HUB should be specified for all nodes
# 4. The fourth field indicates the site designation for the node. To be specified only if configuring an Extended Cluster.
# The 2nd and 3rd fields are not applicable if you have chosen CRS_SWONLY as installation option
# The 2nd and 3rd fields are not applicable if configuring an Application Cluster
#
# Examples
# For registering GI for a cluster software: oracle.install.crs.config.clusterNodes=node1,node2
# For adding more nodes to the configured cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Application Cluster: oracle.install.crs.config.clusterNodes=node1,node2
# For configuring Flex Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB,node2:node2-vip:LEAF
# For configuring Extended Cluster: oracle.install.crs.config.clusterNodes=node1:node1-vip:HUB:site1,node2:node2-vip:HUB:site2
# You can specify a range of nodes in the tuple using colon separated fields of format
# hostnameprefix:lowerbound-upperbound:hostnamesuffix:vipsuffix:role of node
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.clusterNodes=

#-------------------------------------------------------------------------------
# The value should be a comma separated strings where each string is as shown below
# InterfaceName:SubnetAddress:InterfaceType
# where InterfaceType can be either "1", "2", "3", "4", or "5"
# InterfaceType stand for the following values
# - 1 : PUBLIC
# - 2 : PRIVATE
# - 3 : DO NOT USE
# - 4 : ASM
# - 5 : ASM & PRIVATE
#
# For example: eth0:140.87.24.0:1,eth1:10.2.1.0:2,eth2:140.87.52.0:3
#
#-------------------------------------------------------------------------------
oracle.install.crs.config.networkInterfaceList=

#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup to store GIMR data.
# Specify 'true' if you would like to separate GIMR data with clusterware data,
# else specify 'false'
# Value should be 'true' for DOMAIN cluster configurations
# Value can be true/false for STANDALONE cluster configurations.
#------------------------------------------------------------------------------
oracle.install.asm.configureGIMRDataDG=false

################################################################################
# #
# SECTION E - STORAGE #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify the type of storage to use for Oracle Cluster Registry(OCR) and Voting
# Disks files
# - FLEX_ASM_STORAGE
# - CLIENT_ASM_STORAGE
#
# Applicable only for MEMBERDB cluster configuration
#-------------------------------------------------------------------------------
oracle.install.crs.config.storageOption=
################################################################################
# #
# SECTION F - IPMI #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify 'true' if you would like to configure Intelligent Power Management interface
# (IPMI), else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.useIPMI=false

#-------------------------------------------------------------------------------
# Applicable only if you choose to configure IPMI
# i.e. oracle.install.crs.config.useIPMI=true
# Specify the username and password for using IPMI service
#-------------------------------------------------------------------------------
oracle.install.crs.config.ipmi.bmcUsername=
oracle.install.crs.config.ipmi.bmcPassword=
################################################################################
# #
# SECTION G - ASM #
# #
################################################################################

#-------------------------------------------------------------------------------
# ASM Storage Type
# Allowed values are : ASM and ASM_ON_NAS
# ASM_ON_NAS applicable only if
# oracle.install.crs.config.ClusterConfiguration=STANDALONE
#-------------------------------------------------------------------------------
oracle.install.asm.storageOption=ASM

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing OCR/VDSK
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store OCR/VDSK files
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.ocrLocation=
#------------------------------------------------------------------------------
# Create a separate ASM DiskGroup on NAS to store GIMR data
# Specify 'true' if you would like to separate GIMR data with clusterware data, else
# specify 'false'
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
#------------------------------------------------------------------------------
oracle.install.asmOnNAS.configureGIMRDataDG=false

#-------------------------------------------------------------------------------
# NAS location to create ASM disk group for storing GIMR data
# Specify the NAS location where you want the ASM disk group to be created
# to be used to store the GIMR database
# Applicable only if oracle.install.asm.storageOption=ASM_ON_NAS
# and oracle.install.asmOnNAS.configureGIMRDataDG=true
#-------------------------------------------------------------------------------
oracle.install.asmOnNAS.gimrLocation=

#-------------------------------------------------------------------------------
# Password for SYS user of Oracle ASM
#-------------------------------------------------------------------------------
oracle.install.asm.SYSASMPassword=Oracle123

#-------------------------------------------------------------------------------
# The ASM DiskGroup
#
# Example: oracle.install.asm.diskGroup.name=data
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.name=DATA

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
# - NORMAL
# - HIGH
# - EXTERNAL
# - FLEX
# - EXTENDED (required if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.diskGroup.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.redundancy=NORMAL

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
# - 1
# - 2
# - 4
# - 8
# - 16
# Example: oracle.install.asm.diskGroup.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.AUSize=4

#-------------------------------------------------------------------------------
# Failure Groups for the disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create a ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.diskGroup.redundancy=EXTERNAL
# Example:
# For Unix based Operating System:
# oracle.install.asm.diskGroup.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
# For Windows based Operating System:
# oracle.install.asm.diskGroup.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/disks/OCR_VOTE1,,/dev/oracleasm/disks/OCR_VOTE2,,/dev/oracleasm/disks/OCR_VOTE3,

#-------------------------------------------------------------------------------
# List of disks to create a ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
# For Unix based Operating System:
# oracle.install.asm.diskGroup.disks=/oracle/asm/disk1,/oracle/asm/disk2
# For Windows based Operating System:
# oracle.install.asm.diskGroup.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.diskGroup.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.quorumFailureGroupNames=
#-------------------------------------------------------------------------------
# The disk discovery string to be used to discover the disks used create a ASM DiskGroup
#
# Example:
# For Unix based Operating System:
# oracle.install.asm.diskGroup.diskDiscoveryString=/oracle/asm/*
# For Windows based Operating System:
# oracle.install.asm.diskGroup.diskDiscoveryString=\\.\ORCLDISK*
#
#-------------------------------------------------------------------------------
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*

#-------------------------------------------------------------------------------
# Password for ASMSNMP account
# ASMSNMP account is used by Oracle Enterprise Manager to monitor Oracle ASM instances
#-------------------------------------------------------------------------------
oracle.install.asm.monitorPassword=Oracle123

#-------------------------------------------------------------------------------
# GIMR Storage data ASM DiskGroup
# Applicable only when
# oracle.install.asm.configureGIMRDataDG=true
# Example: oracle.install.asm.GIMRDG.name=MGMT
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.name=

#-------------------------------------------------------------------------------
# Redundancy level to be used by ASM.
# It can be one of the following
# - NORMAL
# - HIGH
# - EXTERNAL
# - FLEX
# - EXTENDED (only if oracle.install.crs.config.ClusterConfiguration=EXTENDED)
# Example: oracle.install.asm.gimrDG.redundancy=NORMAL
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.redundancy=

#-------------------------------------------------------------------------------
# Allocation unit size to be used by ASM.
# It can be one of the following values
# - 1
# - 2
# - 4
# - 8
# - 16
# Example: oracle.install.asm.gimrDG.AUSize=4
# size unit is MB
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.AUSize=1

#-------------------------------------------------------------------------------
# Failure Groups for the GIMR storage data ASM disk group
# If configuring for Extended cluster specify as list of "failure group name:site"
# tuples.
# Else just specify as list of failure group names
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.FailureGroups=

#-------------------------------------------------------------------------------
# List of disks and their failure groups to create GIMR data ASM DiskGroup
# (Use this if each of the disks have an associated failure group)
# Failure Groups are not required if oracle.install.asm.gimrDG.redundancy=EXTERNAL
# Example:
# For Unix based Operating System:
# oracle.install.asm.gimrDG.disksWithFailureGroupNames=/oracle/asm/disk1,FGName,/oracle/asm/disk2,FGName
# For Windows based Operating System:
# oracle.install.asm.gimrDG.disksWithFailureGroupNames=\\.\ORCLDISKDATA0,FGName,\\.\ORCLDISKDATA1,FGName
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disksWithFailureGroupNames=

#-------------------------------------------------------------------------------
# List of disks to create GIMR data ASM DiskGroup
# (Use this variable only if failure groups configuration is not required)
# Example:
# For Unix based Operating System:
# oracle.install.asm.gimrDG.disks=/oracle/asm/disk1,/oracle/asm/disk2
# For Windows based Operating System:
# oracle.install.asm.gimrDG.disks=\\.\ORCLDISKDATA0,\\.\ORCLDISKDATA1
#
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.disks=

#-------------------------------------------------------------------------------
# List of failure groups to be marked as QUORUM.
# Quorum failure groups contain only voting disk data, no user data is stored
# Example:
# oracle.install.asm.gimrDG.quorumFailureGroupNames=FGName1,FGName2
#-------------------------------------------------------------------------------
oracle.install.asm.gimrDG.quorumFailureGroupNames=

#-------------------------------------------------------------------------------
# Configure AFD - ASM Filter Driver
# Applicable only for FLEX_ASM_STORAGE option
# Specify 'true' if you want to configure AFD, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.asm.configureAFD=false
#-------------------------------------------------------------------------------
# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.configureRHPS=false

################################################################################
# #
# SECTION H - UPGRADE #
# #
################################################################################
#-------------------------------------------------------------------------------
# Specify whether to ignore down nodes during upgrade operation.
# Value should be 'true' to ignore down nodes otherwise specify 'false'
#-------------------------------------------------------------------------------
oracle.install.crs.config.ignoreDownNodes=false
################################################################################
# #
# MANAGEMENT OPTIONS #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify the management option to use for managing Oracle Grid Infrastructure
# Options are:
# 1. CLOUD_CONTROL - If you want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
# 2. NONE -If you do not want to manage your Oracle Grid Infrastructure with Enterprise Manager Cloud Control.
#-------------------------------------------------------------------------------
oracle.install.config.managementOption=NONE

#-------------------------------------------------------------------------------
# Specify the OMS host to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsHost=

#-------------------------------------------------------------------------------
# Specify the OMS port to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.omsPort=0

#-------------------------------------------------------------------------------
# Specify the EM Admin user name to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminUser=

#-------------------------------------------------------------------------------
# Specify the EM Admin password to use to connect to Cloud Control.
# Applicable only when oracle.install.config.managementOption=CLOUD_CONTROL
#-------------------------------------------------------------------------------
oracle.install.config.emAdminPassword=
################################################################################
# #
# Root script execution configuration #
# #
################################################################################

#-------------------------------------------------------------------------------------------------------
# Specify the root script execution mode.
#
# - true : To execute the root script automatically by using the appropriate configuration methods.
# - false : To execute the root script manually.
#
# If this option is selected, password should be specified on the console.
#-------------------------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.executeRootScript=false

#--------------------------------------------------------------------------------------
# Specify the configuration method to be used for automatic root script execution.
#
# Following are the possible choices:
# - ROOT
# - SUDO
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.configMethod=
#--------------------------------------------------------------------------------------
# Specify the absolute path of the sudo program.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoPath=

#--------------------------------------------------------------------------------------
# Specify the name of the user who is in the sudoers list.
#
# Applicable only when SUDO configuration method was chosen.
#--------------------------------------------------------------------------------------
oracle.install.crs.rootconfig.sudoUserName=
#--------------------------------------------------------------------------------------
# Specify the nodes batch map.
#
# This should be a comma separated list of node:batch pairs.
# During upgrade, you can sequence the automatic execution of root scripts
# by pooling the nodes into batches.
# A maximum of three batches can be specified.
# Installer will execute the root scripts on all the nodes in one batch before
# proceeding to next batch.
# Root script execution on the local node must be in Batch 1.
# Only one type of node role can be used for each batch.
# Root script execution should be done first in all HUB nodes and then, when
# existent, in all the LEAF nodes.
#
# Examples:
# 1. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:2,HUBNode3:2,LEAFNode4:3
# 2. oracle.install.crs.config.batchinfo=HUBNode1:1,LEAFNode2:2,LEAFNode3:2,LEAFNode4:2
# 3. oracle.install.crs.config.batchinfo=HUBNode1:1,HUBNode2:1,LEAFNode3:2,LEAFNode4:3
#
# Applicable only for UPGRADE install option.
#--------------------------------------------------------------------------------------
oracle.install.crs.config.batchinfo=
################################################################################
# #
# APPLICATION CLUSTER OPTIONS #
# #
################################################################################

#-------------------------------------------------------------------------------
# Specify the Virtual hostname to configure virtual access for your Application
# The value to be specified for Virtual hostname is optional.
#-------------------------------------------------------------------------------
oracle.install.crs.app.applicationAddress=
#################################################################################
# #
# DELETE NODE OPTIONS #
# #
#################################################################################

#--------------------------------------------------------------------------------
# Specify the node names to delete nodes from cluster.
# Delete node will be performed only for the remote nodes from the cluster.
#--------------------------------------------------------------------------------
oracle.install.crs.deleteNode.nodes=

However, the key parameters that actually need to be edited are the ones below.


INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=HA_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=oinstall
oracle.install.asm.OSOPER=
oracle.install.asm.OSASM=asmadmin
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.asm.SYSASMPassword=Oracle123
oracle.install.asm.diskGroup.name=DATA
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.FailureGroups=
oracle.install.asm.diskGroup.disksWithFailureGroupNames=/dev/oracleasm/disks/OCR_VOTE1,,/dev/oracleasm/disks/OCR_VOTE2,,/dev/oracleasm/disks/OCR_VOTE3,
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1,/dev/oracleasm/disks/OCR_VOTE2,/dev/oracleasm/disks/OCR_VOTE3
oracle.install.asm.diskGroup.quorumFailureGroupNames=
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=Oracle123
oracle.install.crs.rootconfig.executeRootScript=false
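
Before the real run, you can optionally dry-run just the prerequisite checks against this response file (a sketch, assuming the -executePrereqs option of the 18c gridSetup.sh):

/u01/app/180/grid/gridSetup.sh -silent -executePrereqs -responseFile /home/grid/grid.rsp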



Now we do the installation.
[grid@oelrestart18c grid]$ ./gridSetup.sh -silent -responseFile /home/grid/grid.rsp
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:
/u01/app/180/grid/install/response/grid_2018-09-12_05-52-12AM.rsp

You can find the log of this install session at:
/tmp/GridSetupActions2018-09-12_05-52-12AM/gridSetupActions2018-09-12_05-52-12AM.log

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/180/grid/root.sh

Execute /u01/app/180/grid/root.sh on the following nodes:
[oelrestart18c]



Successfully Setup Software.
As install user, execute the following command to complete the configuration.
/u01/app/180/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid.rsp [-silent]


Moved the install session logs to:
/u01/app/oraInventory/logs/GridSetupActions2018-09-12_05-52-12AM

Now run the root scripts mentioned in the install output:

As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/180/grid/root.sh



[root@oelrestart18c rpm]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@oelrestart18c rpm]# /u01/app/180/grid/root.sh
Check /u01/app/180/grid/install/root_oelrestart18c.novalocal_2018-09-12_05-56-15-243031217.log for the output of root script

[grid@oelrestart18c grid]$ cat /u01/app/180/grid/install/root_oelrestart18c.novalocal_2018-09-12_05-56-15-243031217.log
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/180/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/180/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/oelrestart18c/crsconfig/roothas_2018-09-12_05-56-15AM.log
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node oelrestart18c successfully pinned.
2018/09/12 05:56:24 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c'
CRS-2673: Attempting to stop 'ora.evmd' on 'oelrestart18c'
CRS-2677: Stop of 'ora.evmd' on 'oelrestart18c' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'oelrestart18c' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

oelrestart18c 2018/09/12 05:57:24 /u01/app/180/grid/cdata/oelrestart18c/backup_20180912_055724.olr 70732493
2018/09/12 05:57:24 CLSRSC-327: Successfully configured Oracle Restart for a standalone server


Finally, run the configuration tools step to finalize the installation, as the Oracle software owner - the grid user here.

/u01/app/180/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid.rsp -silent
[grid@oelrestart18c grid]$ /u01/app/180/grid/gridSetup.sh -executeConfigTools -responseFile /home/grid/grid.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:
/u01/app/oraInventory/logs/GridSetupActions2018-09-12_09-56-00AM

Successfully Configured Software.


This completes the silent installation of Oracle Restart.
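
As an optional final check, verify the stack as the grid user (illustrative; both utilities live in the Grid home bin directory):

crsctl stat res -t    # should list resources such as ora.asm, ora.cssd and the listener
srvctl status asm     # confirm the ASM instance is running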

Monday, September 10, 2018

Oracle Database - Backup Status (Check) Query

This query checks the database backup history from the v$ views.
It provides the below:

1. BACKUP NAME
2. STATUS
3. START TIME
4. TIME TAKEN
5. TYPE
6. OUTPUT DEVICE
7. INPUT SIZE
8. OUTPUT SIZE
9. OUTPUT BYTES PER SECOND


SET ECHO ON
set linesize 222
column "START TIME" format a40
column "END TIME" format a40
col "BACKUP NAME" for a20
col STATUS for a21
col "START TIME" for a25
col "END TIME" for a25
col "TIME TAKEN" for a10
col "TYPE" for a15
col "OUTPUT DEVICES" for a10
col "INPUT SIZE" for a12
col "OUTPUT SIZE" for a12
col "OUTPUT BYTES PER SECOND" for a10

select command_id "BACKUP NAME",
STATUS,
to_char(start_time,'Mon DD,YYYY HH24:MI:SS') "START TIME",
time_taken_display "TIME TAKEN",
input_type "TYPE",
output_device_type "OUTPUT DEVICES",
input_bytes_display "INPUT SIZE",
output_bytes_display "OUTPUT SIZE",
output_bytes_per_sec_display "OUTPUT BYTES PER SECOND"
FROM V$RMAN_BACKUP_JOB_DETAILS where trunc(start_time) between trunc(sysdate-4) and  trunc(sysdate) ORDER BY END_TIME DESC;

Query Output

SQL> r
1 select command_id "BACKUP NAME",
2 STATUS,
3 to_char(start_time,'Mon DD,YYYY HH24:MI:SS') "START TIME",
4 time_taken_display "TIME TAKEN",
5 input_type "TYPE",
6 output_device_type "OUTPUT DEVICES",
7 input_bytes_display "INPUT SIZE",
8 output_bytes_display "OUTPUT SIZE",
9 output_bytes_per_sec_display "OUTPUT BYTES PER SECOND"
10* FROM V$RMAN_BACKUP_JOB_DETAILS where trunc(start_time) between trunc(sysdate-4) and trunc(sysdate) ORDER BY END_TIME DESC

BACKUP NAME STATUS START TIME TIME TAKEN TYPE OUTPUT DEV INPUT SIZE OUTPUT SIZE OUTPUT BYT
-------------------- --------------------- ------------------------- ---------- --------------- ---------- ------------ ------------ ----------
ARCH_BKP COMPLETED Sep 10,2018 12:00:03 00:12:40 ARCHIVELOG DISK 27.65G 7.83G 10.55M
ARCH_BKP FAILED Sep 10,2018 08:00:03 00:01:39 ARCHIVELOG DISK 3.72G 997.12M 10.07M
ARCH_BKP FAILED Sep 10,2018 04:00:02 00:01:39 ARCHIVELOG DISK 3.72G 997.12M 10.07M
ARCH_BKP FAILED Sep 10,2018 00:00:02 00:01:49 ARCHIVELOG DISK 3.72G 997.12M 9.15M
DB_INCREMENTAL_L0 FAILED Sep 09,2018 01:00:02 04:37:20 DB INCR DISK 971.06G 140.76G 8.66M
ARCH_BKP FAILED Sep 08,2018 20:00:02 00:00:40 ARCHIVELOG DISK 1015.20M 301.32M 7.53M
ARCH_BKP FAILED Sep 08,2018 16:00:03 00:02:01 ARCHIVELOG DISK 3.82G 1.19G 10.07M
ARCH_BKP FAILED Sep 08,2018 12:00:02 00:04:33 ARCHIVELOG DISK 10.04G 2.98G 11.19M
ARCH_BKP FAILED Sep 08,2018 08:00:02 00:03:43 ARCHIVELOG DISK 7.98G 2.36G 10.85M
ARCH_BKP FAILED Sep 08,2018 04:00:02 00:02:10 ARCHIVELOG DISK 4.71G 1.29G 10.18M
ARCH_BKP FAILED Sep 08,2018 00:00:03 00:01:01 ARCHIVELOG DISK 1.42G 440.86M 7.23M
ARCH_BKP FAILED Sep 07,2018 20:00:03 00:01:30 ARCHIVELOG DISK 3.04G 969.58M 10.77M
ARCH_BKP FAILED Sep 07,2018 16:00:03 00:03:03 ARCHIVELOG DISK 6.25G 1.95G 10.94M
ARCH_BKP FAILED Sep 07,2018 12:00:02 00:02:21 ARCHIVELOG DISK 5.11G 1.43G 10.38M
ARCH_BKP FAILED Sep 07,2018 08:00:03 00:04:03 ARCHIVELOG DISK 8.85G 2.59G 10.93M
ARCH_BKP COMPLETED Sep 07,2018 04:00:02 00:02:08 ARCHIVELOG DISK 4.73G 1.29G 10.36M
ARCH_BKP COMPLETED Sep 07,2018 00:00:02 00:01:03 ARCHIVELOG DISK 1.82G 618.67M 9.82M
ARCH_BKP COMPLETED Sep 06,2018 20:00:02 00:01:33 ARCHIVELOG DISK 2.74G 927.82M 9.98M
ARCH_BKP COMPLETED Sep 06,2018 16:00:02 00:02:17 ARCHIVELOG DISK 4.32G 1.39G 10.38M
ARCH_BKP COMPLETED Sep 06,2018 12:00:02 00:11:19 ARCHIVELOG DISK 24.07G 7.07G 10.66M

20 rows selected.
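
The FAILED rows above are the ones that need attention, so this check lends itself to scheduling. Below is a minimal cron-friendly sketch; the script name, SID and paths are assumptions, so adjust them to your environment:

#!/bin/bash
# check_rman_backups.sh - hypothetical wrapper to report failed RMAN jobs
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/180/db
export PATH=$ORACLE_HOME/bin:$PATH

sqlplus -s "/ as sysdba" <<'EOF'
set pagesize 100 linesize 200 feedback off
col "BACKUP NAME" for a20
col "START TIME" for a25
select command_id "BACKUP NAME", status,
       to_char(start_time,'Mon DD,YYYY HH24:MI:SS') "START TIME"
from   v$rman_backup_job_details
where  status like 'FAILED%'
and    trunc(start_time) between trunc(sysdate-4) and trunc(sysdate)
order  by start_time desc;
EOF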

Friday, September 7, 2018

Oracle Database 18c: Oracle Restart Installation Part 2/2 - Install

This blog is a continuation of my previous post, which covered the prerequisites for an Oracle Restart installation on OEL 7.3.

In this blog we perform the installation.


Unzip the software directly into the Grid home (18c ships as an image-based install, extracted in place):

unzip -qq LINUX.X64_180000_grid_home.zip

Run the prerequisite check with the Cluster Verification Utility:

cd /u01/app/180/grid

[grid@asm18cbox grid]$ ./runcluvfy.sh stage -pre crsinst -n asm18cbox

Verifying Physical Memory ...PASSED
Verifying Available Physical Memory ...PASSED
Verifying Swap Size ...PASSED
Verifying Free Space: asm18cbox:/usr,asm18cbox:/var,asm18cbox:/etc,asm18cbox:/sbin,asm18cbox:/tmp ...PASSED
Verifying User Existence: grid ...
Verifying Users With Same UID: 54232 ...PASSED
Verifying User Existence: grid ...PASSED
Verifying Group Existence: asmadmin ...PASSED
Verifying Group Existence: asmdba ...FAILED (PRVG-10461)
Verifying Group Existence: oinstall ...PASSED
Verifying Group Membership: asmdba ...FAILED (PRVG-10460)
Verifying Group Membership: asmadmin ...PASSED
Verifying Group Membership: oinstall(Primary) ...PASSED
Verifying Run Level ...PASSED
Verifying Architecture ...PASSED
Verifying OS Kernel Version ...PASSED
Verifying OS Kernel Parameter: semmsl ...PASSED
Verifying OS Kernel Parameter: semmns ...PASSED
Verifying OS Kernel Parameter: semopm ...PASSED
Verifying OS Kernel Parameter: semmni ...PASSED
Verifying OS Kernel Parameter: shmmax ...PASSED
Verifying OS Kernel Parameter: shmmni ...PASSED
Verifying OS Kernel Parameter: shmall ...PASSED
Verifying OS Kernel Parameter: file-max ...PASSED
Verifying OS Kernel Parameter: ip_local_port_range ...PASSED
Verifying OS Kernel Parameter: rmem_default ...PASSED
Verifying OS Kernel Parameter: rmem_max ...PASSED
Verifying OS Kernel Parameter: wmem_default ...PASSED
Verifying OS Kernel Parameter: wmem_max ...PASSED
Verifying OS Kernel Parameter: aio-max-nr ...PASSED
Verifying OS Kernel Parameter: panic_on_oops ...PASSED
Verifying Package: binutils-2.23.52.0.1 ...PASSED
Verifying Package: compat-libcap1-1.10 ...PASSED
Verifying Package: libgcc-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-4.8.2 (x86_64) ...PASSED
Verifying Package: libstdc++-devel-4.8.2 (x86_64) ...PASSED
Verifying Package: sysstat-10.1.5 ...PASSED
Verifying Package: ksh ...PASSED
Verifying Package: make-3.82 ...PASSED
Verifying Package: glibc-2.17 (x86_64) ...PASSED
Verifying Package: glibc-devel-2.17 (x86_64) ...PASSED
Verifying Package: libaio-0.3.109 (x86_64) ...PASSED
Verifying Package: libaio-devel-0.3.109 (x86_64) ...PASSED
Verifying Package: nfs-utils-1.2.3-15 ...PASSED
Verifying Package: smartmontools-6.2-4 ...PASSED
Verifying Package: net-tools-2.0-0.17 ...PASSED
Verifying Port Availability for component "Oracle Notification Service (ONS)" ...PASSED
Verifying Port Availability for component "Oracle Cluster Synchronization Services (CSSD)" ...PASSED
Verifying Users With Same UID: 0 ...PASSED
Verifying Current Group ID ...PASSED
Verifying Root user consistency ...PASSED
Verifying Host name ...PASSED
Verifying Node Connectivity ...
Verifying Hosts File ...PASSED
Verifying Check that maximum (MTU) size packet goes through subnet ...PASSED
Verifying Node Connectivity ...PASSED
Verifying Multicast or broadcast check ...PASSED
Verifying ASMLib installation and configuration verification. ...
Verifying '/etc/init.d/oracleasm' ...PASSED
Verifying '/dev/oracleasm' ...PASSED
Verifying '/etc/sysconfig/oracleasm' ...PASSED
Verifying ASMLib installation and configuration verification. ...PASSED
Verifying Network Time Protocol (NTP) ...
Verifying '/etc/chrony.conf' ...PASSED
Verifying '/var/run/chronyd.pid' ...PASSED
Verifying Daemon 'chronyd' ...PASSED
Verifying NTP daemon or service using UDP port 123 ...PASSED
Verifying chrony daemon is synchronized with at least one external time source ...PASSED
Verifying Network Time Protocol (NTP) ...PASSED
Verifying Same core file name pattern ...PASSED
Verifying User Mask ...PASSED
Verifying User Not In Group "root": grid ...PASSED
Verifying Time zone consistency ...PASSED
Verifying resolv.conf Integrity ...PASSED
Verifying DNS/NIS name service ...PASSED
Verifying Domain Sockets ...PASSED
Verifying /boot mount ...PASSED
Verifying Daemon "avahi-daemon" not configured and running ...PASSED
Verifying Daemon "proxyt" not configured and running ...PASSED
Verifying User Equivalence ...PASSED
Verifying /dev/shm mounted as temporary file system ...PASSED
Verifying File system mount options for path /var ...PASSED
Verifying zeroconf check ...PASSED
Verifying ASM Filter Driver configuration ...PASSED

Pre-check for cluster services setup was unsuccessful on all the nodes.


Failures were encountered during execution of CVU verification request "stage -pre crsinst".

Verifying Group Existence: asmdba ...FAILED
asm18cbox: PRVG-10461 : Group "asmdba" selected for privileges "OSDBA" does not
exist on node "asm18cbox".

Verifying Group Membership: asmdba ...FAILED
asm18cbox: PRVG-10460 : User "grid" does not belong to group "asmdba" selected
for privileges "OSDBA" on node "asm18cbox".


CVU operation performed: stage -pre crsinst
Date: Sep 7, 2018 8:41:00 AM
CVU home: /u01/app/180/grid/
User: grid

I generally ignore the osdba and asmdba failures, as the installation goes through fine even without fixing them; if you want a clean cluvfy run, see the sketch below.
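
The two failures simply mean the asmdba group is missing and the grid user is not a member of it. A minimal fix, run as root (group name taken from the cluvfy output above):

# Create the missing group and add the grid user to it
groupadd asmdba
usermod -a -G asmdba grid

# Verify the membership
id grid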

Connect as the grid user and run the installer:

[grid@asm18cbox ~]$ cd /u01/app/180/grid/

[grid@asm18cbox grid]$ ./gridSetup.sh

Step 1 - Select to install Standalone Server (Oracle Restart)

Step 2 - Select the disks; change the disk discovery path to "/dev/oracleasm/disks/*" to find the previously created disks (see the quick check below if none appear).
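
If no disks appear in the list, it is worth confirming at the OS level that the ASMLib disks created in the prerequisite post are visible. A quick check, run as root:

# Disks registered with ASMLib
oracleasm listdisks

# The same disks under the discovery path used above
ls -l /dev/oracleasm/disks/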



Step 3 - Set password for ASM users

Step 4 - Select management option if required.

Step 5 - Select OS Groups as given - 



Step 6 - Select the Oracle base as "/u01/app/grid" (a matching environment sketch follows).
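
To keep the grid user's sessions consistent with these locations, it helps to set the environment in its profile. A minimal sketch for the grid user's ~/.bash_profile, assuming the paths used in this post:

export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/180/grid
export PATH=$ORACLE_HOME/bin:$PATH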



Step 7 - Select oraInventory location - default is fine - /u01/app/oraInventory 



Step 8 - Uncheck root script execution 



Step 9 - If all prechecks pass, you will move automatically to Step 10.



Step 10 - Installation will continue 

Step 11 - Run the root scripts when prompted.



Run the inventory script /u01/app/oraInventory/orainstRoot.sh:
[root@asm18cbox ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

Now run the root script /u01/app/180/grid/root.sh:
[root@asm18cbox ~]# /u01/app/180/grid/root.sh
Performing root user operation.

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/180/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/180/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/asm18cbox/crsconfig/roothas_2018-09-07_08-55-09AM.log
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node asm18cbox successfully pinned.
2018/09/07 08:55:20 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'asm18cbox'
CRS-2673: Attempting to stop 'ora.evmd' on 'asm18cbox'
CRS-2677: Stop of 'ora.evmd' on 'asm18cbox' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'asm18cbox' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

asm18cbox 2018/09/07 08:56:31 /u01/app/180/grid/cdata/asm18cbox/backup_20180907_085631.olr 70732493
2018/09/07 08:56:32 CLSRSC-327: Successfully configured Oracle Restart for a standalone server

Step 12 - Press OK after the root scripts complete, and the installation continues.

This completes the installation of Oracle Restart.
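
As a final check, confirm that ASM is registered with and running under Oracle Restart. A minimal sketch, assuming the Grid home above is in the PATH:

srvctl status asm
srvctl config asm

The status command should report that ASM is running on asm18cbox, and the config output shows the ASM settings (spfile, diskgroups, listener) registered with Oracle Restart.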