
Tuesday, October 27, 2015

Oracle Database - 12c - Real-Time Database Operation Monitoring

 
In the latest release, Oracle Database 12c, Oracle has come up with another brilliant feature for performance monitoring: Real-Time Database Operation Monitoring.

Real-Time SQL Monitoring was introduced in 11g and helped monitor SQL statements in real time.
However, only individual SQL statements could be monitored. Suppose there is a batch operation or a PL/SQL block with multiple calls inside it; by default it might not trigger real-time monitoring even though it takes a lot of DB and CPU time.

This is where real time database operation monitoring comes in.

The concept and the usage are straightforward; let me demonstrate.

Execute start of operation
-- do the work --
Execute end of operation


Below is the script I will be using in this context. The begin_operation and end_operation calls mark the start and end of the operation.

I will be using the well-known SCOTT schema and a Cartesian join on dba_objects to get a long-running query.

var db_opid number;

EXEC :db_opid  := dbms_sql_monitor.begin_operation ('Scott.Batch.Select', forced_tracking => 'Y');

select * from emp a, dept d;
select * from emp;
select /*+ MONITOR */ * from dba_objects a, dba_objects  b;


EXEC dbms_sql_monitor.end_operation('Scott.Batch.Select', :db_opid );

Note -
I have purposefully added the MONITOR hint because I wanted to monitor that SQL separately in real time as well.
If you skip the MONITOR hint, the database decides (based on the 5-second and parallel-execution rules) whether the SQL should be monitored or not.
I have used forced_tracking => 'Y' as well, to force monitoring of the operation.
Scott.Batch.Select is the name given to the operation, which is what we will see on the Performance page.
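Once the operation is running (or complete), you can also check it from SQL*Plus rather than the GUI. A sketch using V$SQL_MONITOR, which gained DBOP columns in 12c:

```sql
-- Statements executed under the 'Scott.Batch.Select' database operation
SELECT dbop_name, dbop_exec_id, status, sql_id, first_refresh_time, last_refresh_time
FROM   v$sql_monitor
WHERE  dbop_name = 'Scott.Batch.Select';
```

DBMS_SQL_MONITOR.REPORT_SQL_MONITOR can then produce the same detailed report you see in EM.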

I am using EM Express, but you can find the same information in Cloud Control as well.

From Image 1 below you can see 2 elements being monitored:
one is the SQL (because of the MONITOR hint) and the other is the database operation.

From Image 2 you can see the new options added for filtering the type of operation on the Performance page.






Thursday, October 22, 2015

Oracle Database - 12c - Data Pump ORA-01031 Errors during Export of SYS Tables

 
The issue occurs when you do a full export of a 12c database (mine is 12.1.0.2) that includes a few SYS tables, and you request a consistent export using the flashback_time or flashback_scn parameter.
It happens even if you have the DATAPUMP_EXP_FULL_DATABASE role.

When you scan your log file, you will see multiple errors similar to the below -

ORA-31693: Table data object "SYS"."KU$_USER_MAPPING_VIEW" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."FGA_LOG$FOR_EXPORT" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."AUDTAB$TBS$FOR_EXPORT" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."DBA_SENSITIVE_DATA" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."DBA_TSDP_POLICY_PROTECTION" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."NACL$_ACE_EXP" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."NACL$_HOST_EXP" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges
ORA-31693: Table data object "SYS"."NACL$_WALLET_EXP" failed to load/unload and is being skipped due to error:
ORA-01031: insufficient privileges

The workaround is to grant FLASHBACK on the tables reporting errors to the exporting user (SYSTEM in my case):

SQL> grant flashback on "SYS"."KU$_USER_MAPPING_VIEW"  to system;
Grant succeeded.

SQL> grant flashback on "SYS"."FGA_LOG$FOR_EXPORT" to system;
Grant succeeded.

SQL> grant flashback on  "SYS"."AUDTAB$TBS$FOR_EXPORT" to system;
Grant succeeded.

SQL> grant flashback on "SYS"."DBA_SENSITIVE_DATA"  to system;
Grant succeeded.

SQL> grant flashback on  "SYS"."DBA_TSDP_POLICY_PROTECTION" to system;
Grant succeeded.

SQL> grant flashback on  "SYS"."NACL$_ACE_EXP" to system;
Grant succeeded.

SQL> grant flashback on "SYS"."NACL$_HOST_EXP" to system;
Grant succeeded.

SQL> grant flashback on "SYS"."NACL$_WALLET_EXP"  to system;
Grant succeeded.
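If you would rather not type the grants one by one, a small PL/SQL loop does the same job. This is just a sketch; the list is the set of tables from my log, and the grantee is the exporting user:

```sql
BEGIN
  FOR t IN (SELECT column_value AS tab
            FROM   TABLE(sys.odcivarchar2list(
                     'KU$_USER_MAPPING_VIEW', 'FGA_LOG$FOR_EXPORT',
                     'AUDTAB$TBS$FOR_EXPORT', 'DBA_SENSITIVE_DATA',
                     'DBA_TSDP_POLICY_PROTECTION', 'NACL$_ACE_EXP',
                     'NACL$_HOST_EXP', 'NACL$_WALLET_EXP')))
  LOOP
    -- Grant FLASHBACK on each SYS table that failed in the export log
    EXECUTE IMMEDIATE 'GRANT FLASHBACK ON "SYS"."' || t.tab || '" TO system';
  END LOOP;
END;
/
```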


The issue and workaround are documented in Bug 18844843.

Saturday, October 17, 2015

12c - Histogram Buckets Increased to 2048

 
Oracle significantly increased the maximum number of histogram buckets in 12c to 2048, an eight-fold increase over the earlier limit of 254.

In this blog, let's just prove that.

I will use a max_histograms table and insert 2048 rows into it

SQL> create table max_histograms (id number, v_number number);

Table created.

Let's now insert 2048 rows into it 

SQL>  begin
  for i in 1..2048
  loop
  insert into max_histograms values (1,i);
  end loop;
  commit;
  end;
/  

PL/SQL procedure successfully completed.

Now let's gather stats with 2048 as the bucket size:

SQL> exec DBMS_STATS.GATHER_TABLE_STATS('SCOTT','MAX_HISTOGRAMS', method_opt => 'FOR COLUMNS v_number size 2048');

PL/SQL procedure successfully completed.

Wow! It captured the histogram with 2048 buckets (try it in 11g and you will be capped at 254).
Now let's prove it

SQL> select count(*), max(endpoint_value), min(endpoint_value) 
from dba_histograms
where owner = 'SCOTT' and table_name = 'MAX_HISTOGRAMS';  

  COUNT(*) MAX(ENDPOINT_VALUE) MIN(ENDPOINT_VALUE)
---------- ------------------- -------------------
      2048                2048                   1


If you want to see all the buckets individually with their values, you can use the query below

select owner, table_name, column_name, endpoint_number, endpoint_value, endpoint_repeat_count
from dba_histograms
where owner = 'SCOTT' and table_name = 'MAX_HISTOGRAMS'
order by 1,5;

Note - I have removed the SQL prompts in places so that it is easy to copy/paste and try this on your machine.

Tuesday, October 13, 2015

12c - Enable Unified Auditing

 
Oracle introduced Unified Auditing in 12c.
Enabling it, however, requires you to relink the binaries (oops, downtime)..

So if you are planning to get 12c and are going to use Unified Auditing, this is how you should enable it.

[oracle@OEL lib]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Oct 13 12:38:31 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Note the banner above; I will come back to it after relinking the binaries, to show you the difference.

- Stop all the Oracle processes running from the Oracle Home and relink
- Start again

This is how you do it -

[oracle@OEL lib]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/dbhome_1
[oracle@OEL lib]$ cd $ORACLE_HOME/rdbms/lib
[oracle@OEL lib]$ make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME
/usr/bin/ar d /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/libknlopt.a kzanang.o
/usr/bin/ar cr /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/kzaiang.o 
chmod 755 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin
.
.
.
/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle

At this point Unified Auditing is enabled. I have purposefully omitted some console output (scroll below to see the full messages).
Now how do you see it? Just log on to SQL*Plus (after starting the DB instance):

[oracle@OEL lib]$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Oct 13 12:45:00 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

Now Unified Auditing is turned on. It provides quite a lot of good features in the auditing and security arena.

Execute the below query now -

SQL> select value from v$option where parameter = 'Unified Auditing';

VALUE
----------------------------------------------------------------
TRUE


Unified Auditing is available in the PDB architecture as well.
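With Unified Auditing enabled, records from the active audit policies land in the UNIFIED_AUDIT_TRAIL view. A quick sanity check (a sketch; typically ORA_SECURECONFIG and ORA_LOGON_FAILURES are the policies enabled out of the box):

```sql
-- Ten most recent audited actions
SELECT event_timestamp, dbusername, action_name, object_name
FROM   unified_audit_trail
ORDER  BY event_timestamp DESC
FETCH FIRST 10 ROWS ONLY;
```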

Full output - re-linking binaries - 

[oracle@OEL lib]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0.2/dbhome_1
[oracle@OEL lib]$ cd $ORACLE_HOME/rdbms/lib
[oracle@OEL lib]$ make -f ins_rdbms.mk uniaud_on ioracle ORACLE_HOME=$ORACLE_HOME
/usr/bin/ar d /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/libknlopt.a kzanang.o
/usr/bin/ar cr /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/kzaiang.o 
chmod 755 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin

 - Linking Oracle 
rm -f /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/oracle
/u01/app/oracle/product/12.1.0.2/dbhome_1/bin/orald  -o /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/12.1.0.2/dbhome_1/lib/ -L/u01/app/oracle/product/12.1.0.2/dbhome_1/lib/stubs/   -Wl,-E /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv12 -Wl,--no-whole-archive /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/nautab.o /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/naeet.o /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/naect.o /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/naedhs.o /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/config.o  -lserver12 -lodm12 -lcell12 -lnnet12 -lskgxp12 -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` -lskjcx12 -lslax12 -lpls12  -lrt -lplp12 -lserver12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 `if [ -f /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libavserver12.a ] ; then echo "-lavserver12" ; else echo "-lavstub12"; fi` `if [ -f /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/libavclient12.a ] ; then echo "-lavclient12" ; fi` -lknlopt -lslax12 -lpls12  -lrt -lplp12 -ljavavm12 -lserver12  -lwwg  `cat /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12 -lmm -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 
-lnls12 -lztkg12 `cat /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12   -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `if /usr/bin/ar tv /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo12 -lserver12"; fi` -L/u01/app/oracle/product/12.1.0.2/dbhome_1/ctx/lib/ -lctxc12 -lctx12 -lzx12 -lgx12 -lctx12 -lzx12 -lgx12 -lordimt12 -lclsra12 -ldbcfg12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -locr12 -locrb12 -locrutl12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12  -lgeneric12 -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lsnls12 -lunls12  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lasmclnt12 -lcommon12 -lcore12  -laio -lons    `cat /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/12.1.0.2/dbhome_1/lib -lm    `cat /u01/app/oracle/product/12.1.0.2/dbhome_1/lib/sysliblist` -ldl -lm   -L/u01/app/oracle/product/12.1.0.2/dbhome_1/lib
test ! -f /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle ||\
   mv -f /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracleO
mv /u01/app/oracle/product/12.1.0.2/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/12.1.0.2/dbhome_1/bin/oracle

Friday, October 9, 2015

RMAN vs ZFS Compression

 
Continuing from my last blog on ZFS compression, this is RMAN day.

I wanted to bring out a few performance statistics (time and space), but this time comparing RMAN compression versus ZFS compression.

RMAN compresses on the fly as well (cliche, is it?).

I ran a test backing up 2 datafiles of 30GB each: once using RMAN basic compression on a ZFS share with no compression enabled, and once using RMAN with no compression on a ZFS share with compression set to LZJB.

I have already shown that compressing above LZJB certainly requires you to monitor the controller's CPU utilization carefully.

(My setup is the same as last time: a ZFS 7330 and an Exadata quarter rack, connected over an InfiniBand network)
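For reference, the two RMAN variants I compared look like this (a sketch; the datafile numbers and the mount paths are placeholders for my actual ones):

```
-- Run 1: RMAN basic compression, writing to an uncompressed ZFS share
BACKUP AS COMPRESSED BACKUPSET DATAFILE 5, 6 FORMAT '/zfs_nocomp/df_%U';

-- Run 2: no RMAN compression, writing to a ZFS share with compression=LZJB
BACKUP AS BACKUPSET DATAFILE 5, 6 FORMAT '/zfs_lzjb/df_%U';
```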

Here are a few statistics for the 2 datafiles to get the maths running in your head;
I will give the figures for the full database next.


Raw DF size             Backup Type           Time Taken     Ratio (Time, appx)  Size (raw)  Size (ZFS)  Ratio (Size, appx)
61440 MB (2 datafiles)  Normal                55 secs        1                   15GB        4.0GB       1
61440 MB (2 datafiles)  Compressed backupset  5:45 minutes   6                   2.2GB       2.2GB       0.5


So now you can start doing the math in your head. Pretty neat, isn't it?
Remember, the ZFS compression level was LZJB in the first case; ZFS compressed it nicely.
Now here are the stats for the full database. I have rounded a few figures for easier calculation; the key idea is that you get the broader picture.


Database  Backup Type           Time Taken  Ratio (Time, appx)  Size (raw)  Size (ZFS)  Ratio (Size, appx)
6.0TB     Normal                2 hours     1                   2.8T        1.8T        1.5
6.0TB     Compressed backupset  10 hours    5                   1.2T        1.2T        1

Note - I am not really good at formatting, so if some figures move out of the ruler lines, pardon me for that :).

Wednesday, October 7, 2015

Remote Listener Cross Registration (across clusters)

Remote listener cross-registration (across clusters) can be really confusing to figure out.
Let me explain -

We were migrating from an old in-house 11g cluster to a new engineered-system environment.

When the new DB instance was created on the new hardware, the spfile was copied over (from the old hardware), but the DBA missed changing the SCAN listener setting.

So what happened is that the DB registered itself across clusters.

It's a bit confusing, but let me explain.

Hardware 1 - host Hard1, SCAN - SC1
Hardware 2 - host H2, SCAN - SC2

So
- telnet/ping/tnsping will all work fine
and yet you would get an error like this


SQL> conn sys@MYPRDDB_PR as sysdba
Enter password:
ERROR:
ORA-12545: Connect failed because target host or object does not exist

So if you run lsnrctl services on Hardware 1, you will find output similar to the below

[grid@Hard1 ~]$ lsnrctl services listener_scan1

LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 07-OCT-2015 06:21:59

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_SCAN1)))
Services Summary...
Service "MYPRDDB_PR" has 2 instance(s).
  Instance "MYPRDDB_1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:159 refused:0 state:ready
         REMOTE SERVER
        (ADDRESS=(PROTOCOL=TCP)(HOST=H1)(PORT=1521))
  Instance "MYPRDDB_1", status READY, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:3 refused:0 state:ready
         REMOTE SERVER
         (ADDRESS=(PROTOCOL=TCP)(HOST=H2)(PORT=1521))


So, that's the mess: cross-registration across clusters. This gave the clue that the remote_listener parameter on the new hardware needed to be changed to that hardware's own SCAN.
Once that was fixed, the problem went away.
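The fix itself is a one-liner from SQL*Plus. A sketch, with SC2 standing in for the new cluster's own SCAN name:

```sql
-- Point remote_listener at this cluster's own SCAN, not the old one
ALTER SYSTEM SET remote_listener = 'SC2:1521' SCOPE=BOTH SID='*';

-- Ask the instance to re-register services with the listeners right away
ALTER SYSTEM REGISTER;
```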


Well, you can question me on why this was missed when it is really basic, but the idea is for this post to help you in case someone misses a minor configuration.


ZFS - Compression

 
ZFS compression is one of the really great things I like about ZFS (apart from its powerful cloning and snapshot abilities).

ZFS compresses data on the fly.

So what does that mean for you? Well, a lot of your data can go over to ZFS. Especially if you are on Exadata, I can think of the below:

  1. Exa Backup  (Exadata File System Backup)
  2. Oracle Backup
  3. Exa Watcher
  4. Golden Gate

I pick the first one from the list, Exa Backup, for my test. I backed up the 2 compute nodes of a quarter rack and noted a few metrics.
I'll draw the conclusions next, after you have had a look at the statistics.


Compression   Avg CPU  Raw Size  Compressed Size  Compression Ratio  Duration  Avg Throughput (GB/min)
LZJB          12%      180GB     81GB             2.22               16 min    11.25
GZIP Fast     25%      180GB     68GB             2.65               18 min    10
GZIP Default  50%      180GB     66GB             2.73               24 min    7.5



ZFS offers one more (higher) level of compression, but I skipped it.
If you look carefully, GZIP Fast offers a good level of compression with a throughput of 10GB/minute. (Note: the test was over InfiniBand, so the maximum theoretical throughput is 40Gb/sec.)

Now you might be tempted to use GZIP Fast, but note that I tested this without any other load on the system; this was the only load.

The takeaway is to monitor the load on the ZFS controller's CPU utilization.
(My config is a 7330 in a clustered active-passive configuration.)

So GZIP Fast is good as long as the controller is sitting idle most of the time; otherwise the other mounts will become virtually unusable (believe me!!!).

The throughput is great, so if you plan smartly, you can pick a 20-minute window when the ZFS is least utilized. Try not to do parallel writes with compression; that increases the CPU overhead.

I will be posting ZFS vs RMAN compression very soon. Till then, below is a screenshot of CPU utilization from the ZFS controller.
There are 3 peak periods -
and I leave it to you, the intelligent reader, to figure out which peak is for which compression level.





Exadata - Two Seldom used attributes

 
Exadata is one of the most powerful products from Oracle, probably the most powerful engineered system available to consumer business on the planet.

Today I talk about a few features which I have seen administrators missing or not using a lot.


Appliance mode
appliance.mode is a disk group attribute which improves disk rebalance times, so redundancy is restored much faster after a disk drop operation or a failure.
DMAs who have upgraded to release 11.2.3.3.0 (yes, it is available from this release) can use this attribute. It needs to be set at the ASM level and can be set using the SQL statement below:
ALTER DISKGROUP <disk_group> SET ATTRIBUTE 'appliance.mode' = ['TRUE'|'FALSE'];

As with all Oracle features there are restrictions when setting this; the 2 key ones are that compatible.asm is set to 11.2.0.4 or higher on the disk group and cell.smart_scan_capable is set to TRUE.

Content Type
With 11.2.0.3, Oracle introduced a new ASM disk group attribute, content.type, which takes 3 different values: data, recovery and system. Each setting modifies the adjacency measure used by the secondary-extent placement algorithm.
Using these settings, the likelihood of a double failure (in a normal-redundancy disk group) causing data loss is reduced.

Putting this attribute into effect requires a rebalance, so it would be good to do it in a quiet time.

Alter diskgroup data set attribute 'content.type'='data';
Alter diskgroup data rebalance power <power>;
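You can verify what a disk group currently has for both attributes with a query like the one below (a sketch against the V$ASM_ATTRIBUTE view, run from the ASM instance):

```sql
SELECT g.name AS diskgroup, a.name, a.value
FROM   v$asm_attribute a
       JOIN v$asm_diskgroup g ON g.group_number = a.group_number
WHERE  a.name IN ('appliance.mode', 'content.type');
```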

Lost Patches in Upgrade

 
I was recently working on an upgrade of a cluster from 11g to 12c and ended up in a peculiar situation:
the oraInventory did not get updated, but the patches were applied.

The solution was finally found in an undocumented utility in the Oracle Home, kfod.

Use the command kfod op=patches

[grid@host1 ~]$ /u01/app/12.1.0.2/grid/bin/kfod op=patches
List of Patches
===============
20243804
20415006
20594149
20788771
20950328

Now if you ever end up in a situation like this and want to be sure whether a patch was installed by looking at the home itself, and not just the inventory XML, you can use this trick.

The best way out of such a situation is to roll back. The rollback procedure is the one you are aware of; however, since the inventory was not updated, opatch might fail.

I will see if I can post a blog on handling such a situation, but for now installing a superseding patch should do the trick.
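On 12c there is one more way to cross-check patches from inside the database: the DBMS_QOPATCH package, which reads the opatch inventory through SQL. A sketch (the output is XML, so expect to post-process it):

```sql
-- Requires the database to be open; queries the opatch inventory via SQL
SELECT dbms_qopatch.get_opatch_lsinventory FROM dual;
```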


12c GRID - changes to oraenv script

 
If you have moved, or are planning to move, to 12c GI, then you need to be careful with your existing scripts.

Oracle changed the way the environment is set using oraenv (or coraenv) in 12c - or, the right way to say it, Oracle tweaked the oraenv logic.

If you were using oraenv to set the environment for the ASM SID (as the oracle user), then you will see warnings.

See the example below for details -

11g
[oracle@host11g ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
The Oracle base has been set to /u01/app/grid
[oracle@host11g ~]$ which crsctl
/u01/app/11.2.0.4/grid/bin/crsctl

12c Grid
[oracle@host12c ~]$ . oraenv
ORACLE_SID = [oracle] ? +ASM1
ORACLE_BASE environment variable is not being set since this
information is not available for the current user ID oracle.
You can set ORACLE_BASE manually if it is required.
Resetting ORACLE_BASE to its previous value or ORACLE_HOME
The Oracle base has been set to /u01/app/12.1.0.2/grid
[oracle@host12c ~]$ which crsctl
/u01/app/12.1.0.2/grid/bin/crsctl

To solve this problem and set ORACLE_BASE properly, you can use a built-in utility in the Oracle Home, orabase, which holds the information about the Oracle base.

Assuming that ORACLE_HOME has already been set using oraenv,
below are the steps you should follow

[oracle@host12c ~]$ echo $ORACLE_HOME/bin/orabase
/u01/app/12.1.0.2/grid/bin/orabase

[oracle@host12c ~]$ echo `$ORACLE_HOME/bin/orabase`
/u01/app/grid
[oracle@host12c ~]$ export ORACLE_BASE=`$ORACLE_HOME/bin/orabase`

[oracle@host12c ~]$ echo $ORACLE_BASE
/u01/app/grid
[oracle@host12c ~]$ echo $ORACLE_HOME
/u01/app/12.1.0.2/grid

Now this is an important piece of information required in your scripts, because in 12c Grid Oracle moved the cluster alert log location under the Oracle base and implemented ADR for the cluster (like ADR for the database).

So you need the Oracle base in your scripts / script development.

Lastly, if you are looking for the reason why this happens, you can look into the oraenv script itself; you will find the logic that tests whether a file is writable or not. I will leave that for you to figure out.