
Thursday, February 28, 2019

Oracle Database 19c: Grid Infrastructure / RAC Installation - Part 3/3

In this part we perform the actual installation.
This is a three-part blog series, and this is Part 3 (the final part).
Other parts in this series:
Part 1
Part 2


This assumes that you have:
1. Set up your Linux machine
2. Made a clone and changed the hostname
3. Assigned IP addresses to the public and private interfaces
4. Unzipped the grid software into /u01/app/190/grid
(unzip -qq V981627-01.zip -d /u01/app/190/grid)
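Before launching anything, a quick sanity check that the grid home is staged correctly can save a failed installer start. A minimal sketch; GRID_HOME and check_unzipped are illustrative names of mine, not Oracle's:

```shell
# Sanity check: gridSetup.sh must sit directly under the grid home after
# unzipping. GRID_HOME and check_unzipped are illustrative names.
GRID_HOME=/u01/app/190/grid

check_unzipped() {
  if [ -x "$1/gridSetup.sh" ]; then
    echo "OK"
  else
    echo "MISSING"
  fi
}

echo "grid home $GRID_HOME: $(check_unzipped "$GRID_HOME")"
```

MISSING usually means the zip was extracted one level too deep (a common slip is ending up with an extra `grid/` subdirectory).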

Cluster Pre-install
1. Set up passwordless ssh for the grid user

[As grid]

$ cd /u01/app/190/grid/deinstall

$ ./sshUserSetup.sh -user grid -hosts "rac19c01 rac19c02" -noPromptPassphrase -confirm -advanced

2. Run Cluster Verification and resolve any issues

$ cd /u01/app/190/grid
$ ./runcluvfy.sh stage -pre crsinst -n rac19c01,rac19c02  -orainv oinstall  -osdba dba -verbose

Ensure Cluster Verification completes successfully.
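cluvfy's verbose output is long, so saving it and filtering for failures makes re-runs easier. A small sketch; show_failures is a hypothetical helper of mine, not part of cluvfy:

```shell
# Save a cluvfy run, then list only the failed checks with line numbers.
# Typical capture (run from the grid home):
#   ./runcluvfy.sh stage -pre crsinst -n rac19c01,rac19c02 -verbose | tee /tmp/cluvfy.out

show_failures() {
  # grep -n prefixes each hit with its line number in the saved report
  grep -n 'FAILED' "$1" || echo "no FAILED checks in $1"
}

# show_failures /tmp/cluvfy.out
```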

3. Start with Installation
[Login as the grid user]
$ cd /u01/app/190/grid/

$ ./gridSetup.sh

Step 1 - 

Step 2 - 

Step 3 - 

Step 4 - 


Step 5 - 

Step 6 - 


Step 7 - (I chose not to configure the Grid Infrastructure Management Repository here, but you should configure it in production)

Step 8 - Change the Disk Discovery Path to "/dev/oracleasm/disks/*"

Step 9 - 

Step 9.1 - Set the default password. A warning appears if the password does not meet Oracle's standards; you can ignore it and continue.

Step 10 - 

Step 11 - 

Step 12 - 

Step 13 - 

Step 14 - 

Step 15 - 

Step 16 - 


Step 16.1 - Ignore the RPM DB check and continue

Step 17 - 



Step 18 - 


Step 18.1 - 

Step 18.2
Run the root scripts in order
[Node 1 ] 
$ /u01/app/oraInventory/orainstRoot.sh
[Node 2 ]
$ /u01/app/oraInventory/orainstRoot.sh

[Node 1 ]
$ /u01/app/190/grid/root.sh

[Node 2 ]

$ /u01/app/190/grid/root.sh


The logs of each script can be found at the end of this blog post.

Click "OK" after all the root scripts are executed.

Step 18.3 


Step 19 
Click Close to complete the installation.
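Once the installer closes, it is worth confirming on both nodes that the stack really is healthy. `crsctl check cluster -all` is the standard check; the small parser below is my own sketch, not an Oracle tool:

```shell
# cluster_healthy reads `crsctl check cluster -all` output on stdin and
# succeeds only if every CRS-prefixed daemon line reports "is online".
cluster_healthy() {
  ! grep '^CRS-' | grep -qv 'is online'
}

# On a real node (grid home path as used in this series):
#   /u01/app/190/grid/bin/crsctl check cluster -all | cluster_healthy \
#     && echo "cluster healthy" || echo "check the stack"
```

`crsctl stat res -t` then gives the full resource tree (ASM, listeners, VIPs, SCAN) if anything looks off.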



Logs
[Node 1 - orainstRoot.sh]

[root@rac19c01 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[Node 2 - orainstRoot.sh]
[root@rac19c02 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.

[Node 1 - root.sh]
[root@rac19c01 ~]# /u01/app/190/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/190/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/190/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac19c01/crsconfig/rootcrs_rac19c01_2019-02-28_10-15-19AM.log
2019/02/28 10:15:31 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/02/28 10:15:31 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/02/28 10:15:31 CLSRSC-363: User ignored prerequisites during installation
2019/02/28 10:15:31 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/02/28 10:15:33 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/02/28 10:15:34 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/02/28 10:15:34 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/02/28 10:15:34 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/02/28 10:15:56 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/02/28 10:16:00 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/02/28 10:16:01 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/02/28 10:16:11 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/02/28 10:16:11 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/02/28 10:16:16 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/02/28 10:16:17 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/02/28 10:17:22 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/02/28 10:17:28 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/02/28 10:17:35 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/02/28 10:17:41 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.

ASM has been created and started successfully.

[DBT-30001] Disk groups created successfully. Check /u01/app/grid/cfgtoollogs/asmca/asmca-190228AM101813.log for details.

2019/02/28 10:19:05 CLSRSC-482: Running command: '/u01/app/190/grid/bin/ocrconfig -upgrade grid oinstall'
CRS-4256: Updating the profile
Successful addition of voting disk d7a8be4444884f37bf1445998e475b4d.
Successful addition of voting disk 5f2c49eeb35d4f71bfa64279bfaa8cb9.
Successful addition of voting disk 93dbea13c3be4f11bf14e05a0556cfac.
Successfully replaced voting disk group with +OCR_VOTE.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   d7a8be4444884f37bf1445998e475b4d (/dev/oracleasm/disks/OCR_VOTE1) [OCR_VOTE]
 2. ONLINE   5f2c49eeb35d4f71bfa64279bfaa8cb9 (/dev/oracleasm/disks/OCR_VOTE2) [OCR_VOTE]
 3. ONLINE   93dbea13c3be4f11bf14e05a0556cfac (/dev/oracleasm/disks/OCR_VOTE3) [OCR_VOTE]
Located 3 voting disk(s).
2019/02/28 10:20:25 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/02/28 10:21:27 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/02/28 10:21:27 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/02/28 10:23:05 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/02/28 10:23:38 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

[Node 2 - root.sh]
[root@rac19c02 ~]# /u01/app/190/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/190/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/190/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /u01/app/grid/crsdata/rac19c02/crsconfig/rootcrs_rac19c02_2019-02-28_10-24-21AM.log
2019/02/28 10:24:27 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2019/02/28 10:24:27 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2019/02/28 10:24:27 CLSRSC-363: User ignored prerequisites during installation
2019/02/28 10:24:27 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2019/02/28 10:24:28 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.
2019/02/28 10:24:29 CLSRSC-594: Executing installation step 5 of 19: 'SetupOSD'.
2019/02/28 10:24:29 CLSRSC-594: Executing installation step 6 of 19: 'CheckCRSConfig'.
2019/02/28 10:24:29 CLSRSC-594: Executing installation step 7 of 19: 'SetupLocalGPNP'.
2019/02/28 10:24:30 CLSRSC-594: Executing installation step 8 of 19: 'CreateRootCert'.
2019/02/28 10:24:30 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2019/02/28 10:24:33 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2019/02/28 10:24:33 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.
2019/02/28 10:24:35 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2019/02/28 10:24:35 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
2019/02/28 10:24:52 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2019/02/28 10:25:40 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2019/02/28 10:25:41 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
2019/02/28 10:25:43 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.
2019/02/28 10:25:44 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
2019/02/28 10:25:52 CLSRSC-594: Executing installation step 17 of 19: 'StartCluster'.
2019/02/28 10:26:43 CLSRSC-343: Successfully started Oracle Clusterware stack
2019/02/28 10:26:43 CLSRSC-594: Executing installation step 18 of 19: 'ConfigNode'.
2019/02/28 10:26:56 CLSRSC-594: Executing installation step 19 of 19: 'PostConfig'.
2019/02/28 10:27:02 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Oracle Database 19c: Grid Infrastructure / RAC Installation - Part 2/3

In this blog I pick up from the point where your machine is set up and you are ready to do the Linux configuration.
This is a three-part blog series, and this is Part 2.
Other parts in this series:
Part 1
Part 3

(Make sure you have internet connectivity)
Linux Configuration (on Both nodes)
1. Install rpm's via yum


  • yum install oracle-database-preinstall-19c.x86_64 -y
  • yum install oracleasm-support -y
  • yum install kmod-oracleasm.x86_64 -y
  • yum install bind -y
  • yum install kmod -y
  • yum install kmod-libs -y

2. Groups and User Addition

  • groupadd -g 54331 asmadmin
  • useradd -g oinstall -G asmadmin,dba -u 54232 grid
  • Set password for root, oracle and grid user

3. Copy secure/limits.d

  • cd /etc/security/limits.d/
  • In file oracle-database-preinstall-19c.conf duplicate all entries and change the user to grid.
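The duplication in step 3 can be scripted rather than done by hand. A sketch, where add_grid_limits is a hypothetical helper of mine (run as root on a real node; it keeps a backup first):

```shell
# Duplicate every "oracle" limits entry for the grid user in a limits.d file.
add_grid_limits() {
  cp "$1" "$1.bak"                                    # keep a backup copy
  # take each line that begins with "oracle", swap the user, append it
  grep '^oracle' "$1.bak" | sed 's/^oracle/grid/' >> "$1"
}

# On a real node (as root):
#   add_grid_limits /etc/security/limits.d/oracle-database-preinstall-19c.conf
```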

4. Add huge pages

  • vi /etc/sysctl.conf
  • vm.nr_hugepages = 10000
  • sysctl --system
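The 10000 pages above assume the default 2 MB hugepage size, i.e. roughly 20 GB reserved. If your SGA target differs, the count can be derived rather than guessed; a sketch, where sga_mb is an assumed planning input:

```shell
# Derive vm.nr_hugepages from a planned SGA size, assuming 2 MB hugepages
# (check the real size with: grep Hugepagesize /proc/meminfo).
sga_mb=20000          # combined SGA of all instances on the node, in MB
hugepage_kb=2048      # 2 MB pages, the x86-64 default
# round up so the SGA always fits
pages=$(( (sga_mb * 1024 + hugepage_kb - 1) / hugepage_kb ))
echo "vm.nr_hugepages = $pages"
```

For a 20000 MB SGA this reproduces the 10000 pages used above.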

5. Swap Creation - Refer Blog - Swap Creation

6. Update Contents of /etc/profile 

if [ $USER = "oracle" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
    ulimit -s 32768
  else
    ulimit -u 16384 -n 65536
    ulimit -s 32768
  fi
fi

if [ $USER = "grid" ]; then
  if [ $SHELL = "/bin/ksh" ]; then
    ulimit -p 16384
    ulimit -n 65536
    ulimit -s 32768
  else
    ulimit -u 16384 -n 65536
    ulimit -s 32768
  fi
fi
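The two blocks above are identical apart from the username, so they can be collapsed into a single case statement. A sketch of an equivalent form (same limits, same behavior):

```shell
# /etc/profile fragment: same limits for both oracle and grid in one block
case "$USER" in
  oracle|grid)
    if [ "$SHELL" = "/bin/ksh" ]; then
      ulimit -p 16384    # ksh spells max user processes as -p
    else
      ulimit -u 16384    # bash spells it as -u
    fi
    ulimit -n 65536      # max open file descriptors
    ulimit -s 32768      # stack size in KB
    ;;
esac
```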

Network Configuration (on both nodes)

7. Contents of the /etc/hosts file
(The key here is to configure your machines with the addresses below.
There will be three interfaces on VirtualBox: one public and two private to the cluster - one for ASM communication and one for private cluster communication.)

192.168.10.51 rac19c01.novalocal rac19c01
192.168.10.52 rac19c02.novalocal rac19c02
192.168.10.61 rac19c01-vip.novalocal rac19c01-vip
192.168.10.62 rac19c02-vip.novalocal rac19c02-vip

192.168.30.21 rac19c01-priv01.novalocal rac19c01-priv01
192.168.30.22 rac19c02-priv01.novalocal rac19c02-priv01



8. DNS Configuration (on Node 1 only)
(After you clone the machine, make sure you disable the named service on the cloned machine.)

File - /etc/named.conf - Make the changes as given below

Change 1 - Add the node's public IP to the listen-on entry
options {
        listen-on port 53 { 127.0.0.1; 192.168.10.51; };

Change 1.1
        allow-query     { 192.168.10.0/24; };

Change 2 - remove the below section
zone "." IN {
type hint;
file "named.ca";
};

Change 3 - Add this at the end of the file

zone "novalocal" IN {
type master;
file "novalocal.zone";
allow-update { none; };
};


Create the file /var/named/novalocal.zone with the contents below.

$ cat /var/named/novalocal.zone

$TTL  86400
@ IN SOA      novalocal. novalocal.(
42          ; serial (d. adams)
3H          ; refresh
15M         ; retry
1W          ; expiry
1D )        ; minimum

                                 IN NS   rac19c01.novalocal.
localhost                        IN A    127.0.0.1
rac19c01.novalocal.                IN A    192.168.10.51
rac19c02.novalocal.                IN A    192.168.10.52
rac19c01-vip.novalocal.            IN A    192.168.10.61
rac19c02-vip.novalocal.            IN A    192.168.10.62
rac-scan.novalocal.            IN A    192.168.10.71
rac-scan.novalocal.            IN A    192.168.10.72
rac-scan.novalocal.            IN A    192.168.10.73


File - /etc/resolv.conf - create this file as below (On both nodes)

$ cat /etc/resolv.conf

nameserver 192.168.10.51
search novalocal
options attempts:1
options timeout:1

Finally, enable the named service and restart it.
Note - the systemctl utility is used instead of the conventional chkconfig and service utilities.

$ systemctl enable named.service
$ systemctl restart named.service
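After named is up, every name the zone serves should resolve from both nodes. A quick sketch using getent, which queries the same libc resolver path as the database will; check_name is a hypothetical helper of mine:

```shell
# Resolve each cluster name through the system resolver and report the result.
check_name() {
  if getent hosts "$1" >/dev/null; then echo "OK: $1"; else echo "FAIL: $1"; fi
}

for h in rac19c01 rac19c02 rac19c01-vip rac19c02-vip rac-scan; do
  check_name "$h.novalocal"
done
```

Run the loop on both nodes, since /etc/resolv.conf points them at node 1; the SCAN name should come back with all three addresses.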


9.  ORACLEASM Configuration
[As root on All Nodes  ]
$ oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver.  The following questions will determine whether the driver is
loaded on boot and what permissions it will have.  The current values
will be shown in brackets ('[]').  Hitting <ENTER> without typing an
answer will keep that current value.  Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done

$ oracleasm init

[As root on Node 1]
(Ensure the disks are partitioned; no filesystem is required.)

$ oracleasm createdisk OCR_VOTE1 /dev/xvdc1
Writing disk header: done
Instantiating disk: done

$ oracleasm createdisk OCR_VOTE2 /dev/xvdd1
Writing disk header: done
Instantiating disk: done

$ oracleasm createdisk OCR_VOTE3 /dev/xvde1
Writing disk header: done
Instantiating disk: done

[As root on All nodes]
$  oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
Instantiating disk "OCR_VOTE1"
Instantiating disk "OCR_VOTE2"
Instantiating disk "OCR_VOTE3"
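Before moving on, confirm that each node sees the same three labels. `oracleasm listdisks` prints them; the comparison below is a sketch of mine around it:

```shell
# Compare the expected ASM disk labels against what this node reports.
# list_disks wraps `oracleasm listdisks`; the rest is plain POSIX shell.
expected='OCR_VOTE1
OCR_VOTE2
OCR_VOTE3'

list_disks() { oracleasm listdisks 2>/dev/null; }

printf '%s\n' "$expected" | sort > /tmp/asm_expected.txt
list_disks | sort > /tmp/asm_found.txt
missing=$(comm -23 /tmp/asm_expected.txt /tmp/asm_found.txt)

if [ -z "$missing" ]; then
  echo "all expected ASM disks present"
else
  echo "missing on this node: $missing"
fi
```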

10.  Create Directories and Unzip Software 
[As root user on both/all nodes] 

mkdir /u01 
mkdir /u01/app
chown root:oinstall /u01 /u01/app
chmod 755 /u01 /u01/app

mkdir /u01/app/190
chown grid:oinstall /u01/app/190
chmod 755 /u01/app/190

mkdir /u01/app/grid
chown grid:oinstall /u01/app/grid
chmod 755 /u01/app/grid

mkdir /u01/app/oraInventory
chown grid:oinstall /u01/app/oraInventory
chmod 755 /u01/app/oraInventory

[As grid user]
mkdir -p /u01/app/190/grid


The next part of this blog series covers the actual installation.

Oracle Database 19c: Grid Infrastructure / RAC Installation - Part 1/3


In this blog series I am going to demonstrate the installation of RAC 19c on on-premise Linux.

I have split this series into multiple parts, referring to a few parts from my previous 12c/18c installation series, while the remaining parts cover the new approach.

This is a 3 Blog Series and this is Blog 1 
Of this Blog series - 
Part 2


Part 3


For reference, the 18c installation is described here.


Part 1 - Linux Install 
This is an old reference on installing Oracle Linux.

In this version I use a newer release, 7.4; however, the steps are essentially the same. The key difference lies in getting the latest ISO from edelivery.oracle.com.

Part 2.1 - Linux and Network Configuration
This blog essentially covers OS and network configuration; in particular, I discuss how to configure the network (especially DNS).

You can still refer to my previous blog on network configuration for the 12c install to see in detail how network configuration is done for each node. (Note that the IP addresses I use this time are a bit different from earlier.)

Part 2.2 is essentially similar to my previous blog for the 12c install, where I discuss how to attach disks and clone a machine. However, I have provided some extra details on getting ORACLEASM up and running as part of Parts 2 and 3.

Part 3 - Installation
This part covers the installation of Oracle Grid Infrastructure on the servers.

For reference, here is my machine configuration:
4 vCPU / 32 GB RAM


Oracle Linux Server release 7.4
NAME="Oracle Linux Server"
VERSION="7.4"
ID="ol"
VERSION_ID="7.4"
PRETTY_NAME="Oracle Linux Server 7.4"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:4:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"

ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.4
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.4
Red Hat Enterprise Linux Server release 7.4 (Maipo)
Oracle Linux Server release 7.4


Oracle Database: 18c Grid Infra (RAC) Silent Installation

In this blog I am going to discuss the silent installation of 18c RAC.

Here the pre-reqs are already complete as described in my previous blogs 

Pre-req 1/2
Pre-req 2/2


Let's pick up now - 


1. Set up passwordless ssh for the root and grid users
cd /u01/app/180/grid

[As root]
$ ./sshUserSetup.sh -user root -hosts "rac18c01 rac18c02" -noPromptPassphrase -confirm -advanced

[As grid]
$ ./sshUserSetup.sh -user grid -hosts "rac18c01 rac18c02" -noPromptPassphrase -confirm -advanced

2. Run Cluster Verification and resolve any issues
$ ./runcluvfy.sh stage -pre crsinst -n rac18c01,rac18c02 -verbose | tee /tmp/cluvfy.out

3. Final pre-requisite check
(The response file can be found at the end.)
/u01/app/180/grid/gridSetup.sh -silent  -executePrereqs  -waitForCompletion  -responseFile /tmp/Silent_18cGrid.rsp

Ensure checks complete successfully. 

4. Start the installation
$  /u01/app/180/grid/gridSetup.sh -silent  -waitForCompletion  -responseFile /tmp/Silent_18cGrid.rsp

As a root user, execute the following script(s):
        1. /u01/app/180/grid/root.sh 

Execute /opt/oracle/product/180/grid/root.sh on the following nodes:
[rac18c01, rac18c02]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes.

Successfully Setup Software with warning(s).
As install user, execute the following command to complete the configuration.
        /u01/app/180/grid/gridSetup.sh  -executeConfigTools -responseFile /tmp/OBS_18cGrid.rsp [-silent]

Step 4.1 Run the Root Script
[As root on Node 1]
$ /u01/app/180/grid/root.sh
[As root on Node 2]
$ /u01/app/180/grid/root.sh

Step 4.2 - Run Post Script 
[As grid on Node 1]
$  /u01/app/180/grid/gridSetup.sh  -silent -executeConfigTools -responseFile /tmp/OBS_18cGrid.rsp 

Installation is complete. 


Response File for 18c RAC
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v18.0.0
INVENTORY_LOCATION=/opt/oracle/oraInventory
oracle.install.option=CRS_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=dba
oracle.install.asm.OSOPER=
oracle.install.asm.OSASM=sysasm
oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.gpnp.scanName=rac18c-scan.novalocal
oracle.install.crs.config.gpnp.scanPort=1521
oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.clusterName=prod-bss
oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.crs.config.clusterNodes=rac18c01:rac18c01-vip:HUB,rac18c02:rac18c02-vip:HUB
oracle.install.crs.config.networkInterfaceList=eth0:192.168.10.0:1,eth1:192.168.30.0:5
oracle.install.crs.config.useIPMI=false
oracle.install.asm.storageOption=ASM
oracle.install.asmOnNAS.configureGIMRDataDG=false
oracle.install.asm.SYSASMPassword=Oracle123
oracle.install.asm.diskGroup.name=OCR_VOTE
oracle.install.asm.diskGroup.redundancy=NORMAL
oracle.install.asm.diskGroup.AUSize=4
oracle.install.asm.diskGroup.disks=/dev/oracleasm/disks/OCR_VOTE1p1,/dev/oracleasm/disks/OCR_VOTE2p1,/dev/oracleasm/disks/OCR_VOTE3p1
oracle.install.asm.diskGroup.diskDiscoveryString=/dev/oracleasm/disks/*
oracle.install.asm.monitorPassword=Oracle123
oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=NONE

oracle.install.crs.rootconfig.executeRootScript=false
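Before launching gridSetup.sh -silent, it is cheap to check that the response file actually defines the keys the installer depends on; a missing key otherwise surfaces only mid-install. require_keys is a hypothetical helper of mine:

```shell
# Fail fast if a response file is missing any of the listed keys.
require_keys() {
  rsp=$1; shift
  rc=0
  for k in "$@"; do
    # each key must appear at the start of a line with an = after it
    grep -q "^$k=" "$rsp" || { echo "missing: $k"; rc=1; }
  done
  return $rc
}

# require_keys /tmp/Silent_18cGrid.rsp \
#   oracle.install.option ORACLE_BASE INVENTORY_LOCATION \
#   oracle.install.crs.config.clusterNodes
```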