
Tuesday, April 16, 2019

Oracle Database: RAC 2 Nodes - Standby and DG Broker Build - Part 2

This blog continues my previous post on building the standby and DG Broker.
In this blog I do the setup of Data Guard and the DG Broker.


Step 1 - Add standby redo log files (optional in general, but required for real-time apply; do this on both primary and standby)
The standby redo log count is n + 1 per thread, where n is the number of online redo log groups per thread. The size should match your online redo log size.
Syntax is
alter database add standby logfile thread <num> group <num> ('+DATA','+FRA') size 1024M;
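As a sketch of the n + 1 rule, the following generates the ALTER statements for a hypothetical 2-thread RAC with 4 online redo log groups per thread (so 5 standby groups per thread); the starting group number 11 is an assumption and must not clash with your existing groups:

```shell
# Generate ADD STANDBY LOGFILE statements following the n+1 rule.
# Assumptions (hypothetical): 2 threads, 4 online redo log groups
# per thread, standby group numbers starting at 11.
online_groups=4
standby_groups=$((online_groups + 1))   # n + 1 per thread
group=11
for thread in 1 2; do
  for i in $(seq 1 "$standby_groups"); do
    echo "alter database add standby logfile thread $thread group $group ('+DATA','+FRA') size 1024M;"
    group=$((group + 1))
  done
done
```

Paste the generated statements into SQL*Plus on both primary and standby, adjusting sizes and group numbers to your environment.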

Step 2 - Create Directories for DG Broker Configuration
[Node 1 - grid user - Primary Cluster]
asmcmd mkdir +FRA/MYDBPROD/DATAGUARDCONFIG
asmcmd mkdir +DATA/MYDBPROD/DATAGUARDCONFIG

[Node 1 - grid user - StandbyCluster ]

asmcmd mkdir +FRA/MYDBSTDBY/DATAGUARDCONFIG
asmcmd mkdir +DATA/MYDBSTDBY/DATAGUARDCONFIG


Step 3 - Set up the Data Guard configuration
[Node 1 - oracle user - Standby Cluster]
alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(MYDBPROD,MYDBSTDBY)' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_2='SERVICE=MYDBPROD ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=MYDBPROD' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_STATE_2='ENABLE' scope=both sid='*';
alter system set fal_server=MYDBPROD scope=both sid='*';

[Node 1 - oracle user - Primary Cluster]
alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(MYDBPROD,MYDBSTDBY)' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_2='SERVICE=MYDBSTDBY ASYNC NOAFFIRM VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=MYDBSTDBY' scope=both sid='*';
alter system set fal_server=MYDBSTDBY scope=both sid='*';

alter system set LOG_ARCHIVE_DEST_STATE_2='ENABLE' scope=both sid='*';
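To verify the transport settings took effect, a quick diagnostic sketch you can run on the primary (and equally on the standby) - this requires an Oracle environment:

```shell
sqlplus -s / as sysdba << EOF
set lines 200
show parameter log_archive_config
show parameter fal_server
select dest_id, status, destination, error
from v\$archive_dest where dest_id = 2;
EOF
```

The ERROR column should be empty once the destination is reachable.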


Step 4 - Set up the DG Broker configuration
[Node 1 - oracle user - Primary Cluster]
dgmgrl /
CREATE CONFIGURATION 'DG_Config' AS PRIMARY DATABASE IS 'MYDBPROD' CONNECT IDENTIFIER IS MYDBPROD;
show configuration;
ADD DATABASE 'MYDBSTDBY' AS CONNECT IDENTIFIER IS 'MYDBSTDBY';
show configuration;

enable configuration;


Step 5 - Verify Configuration and Status
[Node 1 - oracle user - Primary Cluster]
dgmgrl /
show configuration;
show database verbose 'MYDBPROD';
show database verbose 'MYDBSTDBY';
show instance verbose 'MYDBPR11' on database 'MYDBPROD';
show instance verbose 'MYDBPR12' on database 'MYDBPROD';
show instance verbose 'MYDBPR11' on database 'MYDBSTDBY';

show instance verbose 'MYDBPR12' on database 'MYDBSTDBY';

This completes the setup of Data Guard and the DG Broker.

Oracle Database: RAC 2 Nodes - Standby and DG Broker Build - Part 1

In this blog I show how to build a Data Guard Broker configuration from a 2-node RAC primary to a 2-node RAC standby.
This is a two-part series - you can find the second post, which covers the Data Guard and Broker configuration itself, here.

Here is my configuration
Production DB
Unique Name - MYDBPROD
 - Instance 1 - MYDBPR11
 - Instance 2 - MYDBPR12

Standby DB
Unique Name - MYDBSTDBY
 - Instance 1 - MYDBPR11
 - Instance 2 - MYDBPR12


Step 1 - Create tnsnames.ora entries in all nodes
MYDBPROD =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = MYPRODDB-PRD-SCAN)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYDBPROD)
        (UR = A)
    )
  )

MYDBSTDBY =

  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = MYSTDBYDB-PRD-SCAN)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MYDBSTDBY)
        (UR = A)
    )
  )
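A quick sanity check that the new aliases resolve and the SCAN listeners are reachable from every node (requires the Oracle client environment):

```shell
tnsping MYDBPROD
tnsping MYDBSTDBY
```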


Step 2 - Static Listener Entries for DG broker operations
[Node 1 - Grid user Standby Cluster]
File - $ORACLE_HOME/network/admin/listener.ora
SID_LIST_LISTENER =
 (SID_LIST =
  (SID_DESC =
    (GLOBAL_DBNAME = MYDBSTDBY_DGMGRL)
    (ORACLE_HOME = <OH>)
    (SID_NAME = MYDBPR11)
  )
  (SID_DESC =
    (GLOBAL_DBNAME = MYDBSTDBY)
    (ORACLE_HOME = <OH>)
    (SID_NAME = MYDBPR11)
  )
 )


lsnrctl stop

lsnrctl start

[Node 2 - Grid user - Standby Cluster]

File - $ORACLE_HOME/network/admin/listener.ora


SID_LIST_LISTENER =

 (SID_LIST =
  (SID_DESC =
    (GLOBAL_DBNAME = MYDBSTDBY_DGMGRL)
    (ORACLE_HOME = <OH>)
    (SID_NAME = MYDBPR12)
  )
 )



lsnrctl stop 

lsnrctl start

[Node 1- Grid user - Primary Cluster]

File - $ORACLE_HOME/network/admin/listener.ora
SID_LIST_LISTENER =
 (SID_LIST =
  (SID_DESC =
    (GLOBAL_DBNAME = MYDBPROD_DGMGRL)
    (ORACLE_HOME = <OH>)
    (SID_NAME = MYDBPR11)
  )
 )


lsnrctl stop

lsnrctl start

[Node 2 - Grid user - Primary Cluster]

File - $ORACLE_HOME/network/admin/listener.ora
SID_LIST_LISTENER =
 (SID_LIST =
  (SID_DESC =
    (GLOBAL_DBNAME = MYDBPROD_DGMGRL)
    (ORACLE_HOME = <OH>)
    (SID_NAME = MYDBPR12)
  )
 )

lsnrctl stop

lsnrctl start
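After restarting the listeners, you can confirm the static _DGMGRL services are registered on each node (a diagnostic sketch; requires the Grid environment):

```shell
lsnrctl status | grep -i dgmgrl
```

You should see a service line such as "MYDBPROD_DGMGRL" (or "MYDBSTDBY_DGMGRL" on the standby cluster) for the local instance.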


Step 3 - Reset the sys and system passwords, copy the password file from Node 1 of the primary to the other three nodes (into $ORACLE_HOME/dbs), and rename it to match each instance name

Step 4 - Add the database and instances to the OCR
[Node 1 - oracle user - Standby Cluster]
srvctl add database -d MYDBSTDBY -o /opt/oracle/product/112/db -c RAC \
-p +DATA/MYDBSTDBY/spfileMYDBSTDBY.ora -r PHYSICAL_STANDBY -s MOUNT -t IMMEDIATE -n <DB_NAME> -a DATA,REDO1,FRA

srvctl add instance -d MYDBSTDBY -i MYDBPR11 -n Node1

srvctl add instance -d MYDBSTDBY -i MYDBPR12 -n Node2
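To confirm the registration, a quick check as the oracle user (requires the Oracle environment):

```shell
srvctl config database -d MYDBSTDBY
```

The output should reflect the PHYSICAL_STANDBY role and MOUNT start option set above.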

Step 5 - Create init files
[Node 1 - oracle user - Standby Cluster]
cd /opt/oracle/product/112/db/dbs
cat initMYDBPR11.ora
SPFILE='+DATA/MYDBSTDBY/spfileMYDBSTDBY.ora'

[Node 2 - oracle user - Standby Cluster]

cd /opt/oracle/product/112/db/dbs
cat initMYDBPR12.ora
SPFILE='+DATA/MYDBSTDBY/spfileMYDBSTDBY.ora'


Step 6 - Create Audit Directories
[Node 1 - oracle user - Standby Cluster]
mkdir -p /opt/oracle/base/admin/MYDBSTDBY/adump

[Node 2 - oracle user - Standby Cluster]

mkdir -p /opt/oracle/base/admin/MYDBSTDBY/adump

Step 7 - Start the instance, mount it, and restore/recover the standby database.
You can use any restore/recovery method.
One method I recommend is active database duplication - it is generally faster.

rman << EOF
connect TARGET sys/<pwd>@MYDBPROD;
connect AUXILIARY sys/<pwd>@MYDBSTDBY;
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
DORECOVER
NOFILENAMECHECK;
EOF

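Once the duplicate completes, a quick check that the standby is mounted in the right role (a sketch; requires an Oracle environment on the standby node):

```shell
sqlplus -s / as sysdba << EOF
select name, db_unique_name, database_role, open_mode from v\$database;
EOF
```

DATABASE_ROLE should report PHYSICAL STANDBY and OPEN_MODE should report MOUNTED.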

In the next blog I discuss the DG Broker build.

Oracle Database: How to Know if Real Time Apply is Happening or Not (11g,12c,18c,19c)

In this blog I show how to tell whether your managed standby is using real-time apply.
Note - for real-time apply you must have standby redo logs configured (n + 1 standby log groups per thread).


Case 1 - Idle (when no recovery is happening)
On Production 

select DEST_ID,dest_name,status,type,srl,recovery_mode from v$archive_dest_status where dest_id=2;

   DEST_ID DEST_NAME                           STATUS    TYPE           SRL RECOVERY_MODE
---------- ----------------------------------- --------- -------------- --- -----------------------

         2 LOG_ARCHIVE_DEST_2                  VALID     PHYSICAL       YES IDLE

Case 2 - Recovery (no Real time apply) 
On Production 
 select DEST_ID,dest_name,status,type,srl,recovery_mode from v$archive_dest_status where dest_id=2 ;

   DEST_ID DEST_NAME                           STATUS    TYPE           SRL RECOVERY_MODE
---------- ----------------------------------- --------- -------------- --- -----------------------
         2 LOG_ARCHIVE_DEST_2                  VALID     PHYSICAL       YES MANAGED

Case 3 - Real Time Apply
On Production 
select DEST_ID,dest_name,status,type,srl,recovery_mode from v$archive_dest_status where dest_id=2;

   DEST_ID DEST_NAME                           STATUS    TYPE           SRL RECOVERY_MODE
---------- ----------------------------------- --------- -------------- --- -----------------------
         2 LOG_ARCHIVE_DEST_2                  VALID     PHYSICAL       YES MANAGED REAL TIME APPLY

Note the RECOVERY_MODE column in each case above. Check your own dest_id; I have used 2 here because my standby is configured on destination ID 2.

You can start real-time apply using:
alter database recover managed standby database using current logfile disconnect;
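On the standby itself you can also confirm that the MRP process is applying (a diagnostic sketch; requires an Oracle environment):

```shell
sqlplus -s / as sysdba << EOF
select process, status, thread#, sequence#
from v\$managed_standby where process like 'MRP%';
EOF
```

With real-time apply running, the MRP0 status typically shows APPLYING_LOG rather than WAIT_FOR_LOG.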

Wednesday, April 10, 2019

Oracle Database: kggpnpInit: failed to init gpnp (Only one Instance running) 11g,12c,18c,19c

Symptom -
1. Only one instance of the Oracle database can run at a time across the RAC nodes (node 1 or node 2)
2. The public interface is used as the interconnect interface, as seen in gv$cluster_interconnects

In the alert log you can see -
[USER(153710)]CRS-2316:Fatal error: cannot initialize GPnP, CLSGPNP_ERR (Generic GPnP error).
kggpnpInit: failed to init gpnp
  WARNING: No cluster interconnect has been specified. Depending on
           the communication driver configured Oracle cluster traffic
           may be directed to the public interface of this machine.
           Oracle recommends that RAC clustered databases be configured
           with a private interconnect for enhanced security and
           performance.


Reason - Starting with the October 2018 PSU for Oracle RAC, you must also apply the OCW component of the GI patch to the database Oracle Home.

Example - 
Current  Patch Level 
28790660;OJVM PATCH SET UPDATE 11.2.0.4.190115
28729262;Database Patch Set Update : 11.2.0.4.190115 (28729262)

Solution - Apply the OCW patch
The OCW patch ships as part of the GI PSU bundle; in general you cannot download it individually.
You can, however, apply it individually to the DB home.
This can be a real issue when the database home runs against a cluster on a higher version.

Patch Apply Process
Step 1 - Stop all Database instances running from the home

Step 2 - Create the ocm file 
$ cd $ORACLE_HOME/ccr/bin
$ ./emocmrsp
(respond <Enter> and then 'Y')
$ cp ocm.rsp /home/oracle/

Step 3 - Patch Apply 
PrePatch
<Patch Unzipped Location>/<patch_number>/custom/server/<patch_number>/custom/scripts/prepatch.sh -dbhome <oh>
Patch
<oh>/OPatch/opatch napply -oh <oh> -local <Patch Unzipped Location>/<patch_number>/custom/server/<patch_number> -ocmrf /home/oracle/ocm.rsp
Post Patch
<Patch Unzipped Location>/<patch_number>/custom/server/<patch_number>/custom/scripts/postpatch.sh -dbhome <oh>

Example for Jan 2019 Patch
$ /images/oracle/11.2.0.4/p28729234/28729234/custom/server/28729234/custom/scripts/prepatch.sh -dbhome /opt/oracle/product/112/db
$ /opt/oracle/product/112/db/OPatch/opatch napply -oh /opt/oracle/product/112/db -local /images/oracle/11.2.0.4/p28729234/28729234/custom/server/28729234 -ocmrf /home/oracle/ocm.rsp
$ /images/oracle/11.2.0.4/p28729234/28729234/custom/server/28729234/custom/scripts/postpatch.sh -dbhome /opt/oracle/product/112/db


Final Patch Level
28729234;OCW Patch Set Update : 11.2.0.4.190115 (28729234)
28790660;OJVM PATCH SET UPDATE 11.2.0.4.190115
28729262;Database Patch Set Update : 11.2.0.4.190115 (28729262)

Reference Doc ID - 
Customer Recommended: 11.2.0.4 RAC Database Instance Fails to Start with "No connectivity to other instances in the cluster during startup" After Applying 11.2.0.4 OCT 2018 DB PSU (11.2.0.4.181016 DB PSU) (Patch# 28204707) (Doc ID 2471441.1)

Saturday, April 6, 2019

Oracle Database: Lower Compatibility / Recreate Disk Groups RAC

In this blog, I talk about recreating disk groups; the reason can be anything - you may want to change the name, lower the compatibility, or something else.
(Remember that the data will be lost, so do not try this on a production cluster just for fun.)

Below are the Steps 

Step 1 - Free the Diskgroup
Shut down all the databases and close any open files on the disk group (oracle user).
You can use the lsof command at the asmcmd prompt to check for open files (grid user).

Relocate any OCR copy stored in the disk group, if present (root user).

Step 2 - Dismount the Diskgroup from Node 1 (grid user)
alter diskgroup <name> dismount;

Step 3 - Drop the Diskgroup from Node 2 (grid user)
drop diskgroup <name>;

Step 4 - Release the disks back to the system (root user)
oracleasm deletedisk <name>

Step 5 - Rescan the disk on the other node (root user)
oracleasm scandisks

Step 6 - Recreate the disk (root user)
oracleasm createdisk <name> <path>

Step 7 - Rescan the disk on the other node (root user)
oracleasm scandisks

Step 8 - Recreate the diskgroup (grid user)
asmca -silent -sysAsmPassword mypassword -asmsnmpPassword mypassword -createDiskGroup -diskGroupName DATA -diskList  /dev/oracleasm/disks/DISK_DATA -redundancy EXTERNAL -au_size 4 -compatible.asm 18.0.0.0.0 -compatible.rdbms 11.2.0.0.0
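To confirm the disk group came back with the intended attributes, a quick check as the grid user (requires the Grid environment):

```shell
asmcmd lsdg DATA
asmcmd lsattr -l -G DATA compatible.asm
```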


Step 9 - Add the OCR back if required (root user)
ocrconfig -add +<DG_NAME>


Friday, April 5, 2019

Oracle Database: Archive Log Repository

Archive log repository is one of the lesser known and seldom used features of Oracle Database.

An Oracle database can be configured to send archive logs to a remote destination without a full database being present at that site.

To explain, at the repository site:
1. A database instance is running
2. A database control file is present
3. The control file must be a standby control file (ensure this)
4. The database is in mount state
5. No datafiles are present

If you configure your source database similar to a Data Guard configuration, the primary will start sending archive logs to the archive log repository site.

What are the use cases?
1. Backup of archive logs at a remote site
2. The remote site can be used for tape backup, etc.
3. During a Data Guard build - while you back up, transfer, and restore the database, the primary can already ship archive logs to the repository.
So when the database restore completes, the archive logs are already there.

For More Information : https://docs.oracle.com/cd/E11882_01/server.112/e41134/log_transport.htm#SBYDB4745

Tuesday, April 2, 2019

Oracle Database: ASRU Utility

In this blog I detail what ASRU is and where it can be used.

ASRU is Oracle Automatic Storage Management and Thin Reclamation Utility. 

It is used with thin-provisioned storage to reclaim unused space.
Details of the utility and its stages can be found here:
https://www.oracle.com/technetwork/database/oracle-automatic-storage-management-132797.pdf
https://www.oracle.com/technetwork/database/oracle-asru-3par.pdf

ASRU is quite an old utility, dating back to the 11g days, but it is still valid for 12c/18c/19c clusters.

You can download ASRU here:
http://www.oracle.com/technetwork/database/database-technologies/cloud-storage/asru-175568.zip

ASRU can be very useful in a lot of cases.

Doc IDs for reference:
How To Validate ASM Diskgroup Consistency/State After ASM Reclamation Utility (ASRU) Execution Aborted. (Doc ID 1668673.1)

To run ASRU (as the Grid Infrastructure owner):
ASRU <DG_NAME>