
Thursday, May 31, 2018

Oracle Enterprise Manager - Exploring Licensing (13cR2) - the Flowchart way

In this blog I have put up a flowchart to demystify what I have learnt about Oracle licensing for OEM
(and the tricks Oracle has up its sleeve ;) )

[Flowchart: OEM 13cR2 licensing]

Wednesday, May 30, 2018

Oracle Enterprise Manager - Exploring Licensing (13cR2) - Part 2

If you have followed my last blog on OEM licensing, you should be pretty clear on what licensing requirements you have for OEM.
In this blog I am going to highlight a few more aspects of OEM licensing.

1. Every database that needs to send notifications must have the Diagnostics Pack enabled.
Yes, you read that right: if you want to send notifications for any database, you must have the Diagnostics Pack, which in turn requires Enterprise Edition; otherwise you cannot send notifications for it.
This means you effectively cannot send any database-level notification for Standard Edition databases.

However, since host-based notifications are free and included in the base Enterprise Manager functionality, you can use those instead.

(This is, of course, how your licensing team ends up buying licenses from Oracle ;) )

2. OEM has two components: the repository (OMR) and the management server (OMS). I have already covered the licensing aspects of the OMR.
Now let me shed some light on the OMS. The OMS is available free under a restricted-use license from Oracle, but for one instance only.

What I mean is that if you want to set up a WebLogic cluster for HA, then you need to buy WebLogic separately. From the licensing documentation:

Use of Oracle WebLogic Server with Oracle Enterprise Manager is restricted to the servlet functionality without clustering for the Oracle Management Server (OMS).

Use of Oracle WebLogic Server with Oracle Enterprise Manager is restricted to servlet functionality. A restricted-use license for WebLogic clustering is included, to support deployment of cluster of Oracle Business Intelligence Publisher servers, only when used with Oracle Enterprise Manager.

In short, multiple OMSes can be run for scalability, but they can't be clustered unless WebLogic clustering is purchased separately.


Friday, May 25, 2018

Oracle Enterprise Manager - Exploring Licensing - What License for Each Page (13cR2)

OEM has a really useful feature if you are trying to understand the nuances of Oracle licensing.

You can browse to any page in Oracle Enterprise Manager and see for yourself which packs (and therefore which licenses) each page / feature requires.


You can access it via 
(Setup --> Management Packs --> Packs for this Page)


Oracle Enterprise Manager - Exploring Licensing (13cR2)

As with all the tools and technology Oracle provides, the same goes for Oracle Enterprise Manager (OEM):
licensing is full of yeses and noes, ifs and buts, scattered here and there.

Let's try to explore them one by one

1. OEM is Free - Not 100%

OEM is free of cost as long as you have an Oracle Support license.
However, you can use only OEM's base functionality, which is documented here:

https://docs.oracle.com/cd/E73210_01/OEMLI/GUID-534AFAC0-3F0E-47D7-A538-C9A5CBC90299.htm#OEMLI157
 

Oracle also provides the database you use for the OEM repository under a restricted-use license, free of cost. This means you can run a single-instance OEM (OMR + OMS) on a VM without any charge (provided, obviously, that you have an Oracle Support contract).
https://docs.oracle.com/cd/E73210_01/OEMLI/GUID-7B2095D3-4E88-4346-9566-638219FF1130.htm#OEMLI114

Note, however, that if you want to set up DR or RAC for that repository database, you will have to pay for it separately.

2.  You can set up incident rules / alerts as you like - they are free of cost.

3. You need licensed packs to send alerts for your databases: even an EE database must have at least the Diagnostics Pack before you can send alerts for it (SNMP forwards / connectors / emails / PL/SQL procedures). This puts SE out of scope for sending alerts (as per the documentation, obviously).

You need this for each database separately.


4. For any third-party software you must buy a pack for OEM; these are listed with their prices, as of now, on the Oracle pricing site (http://www.oracle.com/us/corporate/pricing)
[Screenshot: Oracle price list showing the per-processor price of the SQL Server management plug-in]
That is, SQL Server monitoring must be licensed separately, and on a per-processor basis at that.
That's quite costly indeed, unless your asset management team strikes a deal with Oracle.

5.  Additional Features
On top of these, there are many additional features which only come with packs and which are generally enabled by default (as Oracle tends to do).
When you install OEM for the first time, you must disable all packs which you have not licensed (Setup --> Management Packs --> Management Pack Access).

Details of each pack's enabled features are also available in the tool (Setup --> Management Packs --> License Information)
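On the database side, you can cross-check which pack features have actually been used via the DBA_FEATURE_USAGE_STATISTICS view. A minimal sketch, assuming you can connect as a DBA on the target database; rows such as AWR or ADDM point at Diagnostics Pack usage:

sqlplus -s / as sysdba <<'EOF'
set lines 200 pages 100
col name format a55
-- features with recorded usage; pack-related rows matter for licensing
select name, version, detected_usages, currently_used
from   dba_feature_usage_statistics
where  detected_usages > 0
order  by name;
EOF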


Oracle Database - 12c/18c - Swap : swapon: /swapfile: swapon failed: Invalid argument

As per recent changes in the xfs file system, the normal fallocate procedure does not work when you are creating a swap file.
You will get the error "Invalid argument", which is quite misleading.


The key reason is that xfs does not support swap files allocated using fallocate.

You therefore need to use dd instead of fallocate; the procedure is below.

1. Create Empty File 
I am creating a 16G swap file: count=16384 blocks of bs=1MiB each (16384 MiB = 16 GiB)
dd if=/dev/zero of=16G.swap count=16384 bs=1MiB

2. Change permissions
chmod 0600 16G.swap

3. Create Swap
mkswap 16G.swap

4. Add an entry to /etc/fstab (use the full path to wherever you created the file; mine is under /swp)
/swp/16G.swap none  swap  sw 0  0

5. Enable Swap
swapon -a

This will give you working swap on an xfs file system.
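Once swapon -a returns cleanly, a quick verification sketch (swapon --show needs a reasonably recent util-linux; free works everywhere):

swapon --show    # the new swap file should be listed with SIZE 16G
free -h          # total swap should have grown by 16G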

Thursday, May 10, 2018

Hadoop V2 - Hue Setup

In this blog I am going to discuss how to set up Hue for your HDFS cluster.


Step 1
Download Hue on Edge Node and Extract

wget 'https://www.dropbox.com/s/auwpqygqgdvu1wj/hue-4.1.0.tgz?dl=0' -O hue-4.1.0.tgz

tar -xzf hue-4.1.0.tgz
 

Step 2
Make Hue
cd /tmp/hue-4.1.0
yum install -y python-devel.x86_64 sqlite-devel.x86_64 libxml2-devel.x86_64 \
    libxslt-devel.x86_64 libffi-devel.x86_64 openldap-devel.x86_64 \
    mariadb-devel.x86_64 gmp-devel.x86_64
make install


Step 3
Create User and change ownership
useradd -u 1014  -g hadoop hue
chown -R hue:hadoop /usr/local/hue



Step 4
Add the following to core-site.xml (on nn) and distribute it to the cluster; a distribution sketch follows the XML below.

Then restart both namenodes.

<property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
</property>
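A minimal distribution sketch, assuming passwordless ssh from nn and the host names used elsewhere in this series (adjust the list to your nodes):

# run on nn as root
for h in snn rm d1n d2n d4n; do
  scp /etc/hadoop/conf/core-site.xml $h:/etc/hadoop/conf/
done

# then restart both namenodes - as hdfs, on nn and on snn
hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode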


Step 5
Configure hue.ini. Below is everything I have configured. I always tag whatever I change with 'Custom' so that I can grep my modifications out of the config files quickly.

cd /usr/local/hue

 cat ./desktop/conf/hue.ini | grep Custom
      fs_defaultfs=hdfs://devcluster:8020 ###Custom
webhdfs_url=http://192.168.1.71:14000/webhdfs/v1    ###Custom
        hadoop_conf_dir=/etc/hadoop/conf   ###Custom
       resourcemanager_host=rm ###Custom
         resourcemanager_port=8032 ###Custom
       resourcemanager_api_url=http://rm:8088 ##Custom
       proxy_api_url=http://rm:8088 ##Custom
       history_server_api_url=http://rm:19888 ###Custom



Step 6
[Start hue - As hue]
/usr/local/hue/build/env/bin/supervisor


Step 7
Log in to the URL
http://<hostname>:8888
On first login you will be presented with the UI to create the initial user and set its password.




I generally (in test environments) set the user to hue and the password to hadoop.

Tuesday, May 8, 2018

HDPV2 - Decommissioning a Node

In this blog I am going to discuss how to decommission a node in HDFS.
Decommissioning is a graceful process in which the node's blocks are replicated to other nodes before it is removed.


Step 1
The following properties must be set in your HDFS configuration:
cd /etc/hadoop/conf
hdfs-site.xml
<property>
        <name>dfs.hosts.exclude</name>
        <value>/etc/hadoop/conf/dfs.hosts.exclude</value>
</property>

<property>
        <name>dfs.hosts</name>
        <value>/etc/hadoop/conf/dfs.hosts.include</value>
</property>

yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
        <value>/etc/hadoop/conf/dfs.hosts.include</value>
</property>
<property>
<name>yarn.resourcemanager.nodes.exclude-path</name>
        <value>/etc/hadoop/conf/dfs.hosts.exclude</value>
</property>



Step 2
[As root on nn]

Remove the entry for the node being decommissioned from the slaves file and from the include file, and add it to the exclude file.
(I am removing d3n; a sketch of the edits follows the listings below.)


cd /etc/hadoop/conf
cat slaves
d1n
d2n
d4n

[As root on rm]
Remove d3n from the Spark slaves file as well:

cd /etc/spark/conf
cat slaves
d1n
d2n
d4n
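A sketch of those edits, assuming the file locations used in this series (sed -i edits in place, so back the files up first):

# on nn, as root
cd /etc/hadoop/conf
sed -i '/^d3n$/d' slaves dfs.hosts.include   # drop d3n from the allow lists
echo d3n >> dfs.hosts.exclude                # mark d3n for decommissioning

# on rm, as root
sed -i '/^d3n$/d' /etc/spark/conf/slaves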


Step 3
[As root on nn]
Distribute slaves, dfs.hosts.include and dfs.hosts.exclude to nn, snn and rm

Step 4
[As hdfs on nn]
hdfs dfsadmin -refreshNodes

Step 5

[As yarn on rm]
yarn rmadmin -refreshNodes


Step 6
[As hdfs on nn]
Verify using
hdfs dfsadmin -report
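While blocks are draining you can also watch the progress; a small sketch (the -decommissioning filter is present in Hadoop 2.7):

hdfs dfsadmin -report -decommissioning            # nodes still replicating blocks away
hdfs dfsadmin -report | grep 'Decommission Status'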

Step 7
Stop all daemons on the node.

Do this once the output of hdfs dfsadmin -report no longer shows the node as live (its status moves to Decommissioned).

[As hdfs on d3n]
hadoop-daemon.sh stop datanode

[As yarn on d3n]
yarn-daemon.sh stop nodemanager

[As spark on d3n]
stop-slave.sh

Monday, May 7, 2018

Hadoop V2 - Adding New Node

In this blog I discuss how to add a DataNode to the cluster

I am adding node d4n to the cluster

Step 1

[As root - Passwordless ssh setup on namenode and snn]
ssh-copy-id d4n
[As hdfs - Passwordless ssh setup on namenode and snn]
ssh-copy-id d4n

[As yarn,mapred,spark - Passwordless ssh setup on rm]
ssh-copy-id d4n

I now refer to one of my previous blogs to complete the prerequisite setup, i.e. the system-level configuration needed to support the hadoop installation:
user creation, group creation and the other required setup.
Hadoop V2 - Pre-req Completion



Step 2
[As root - copy hadoop  on d4n ]
cd /usr/local
scp -r nn:/usr/local/hadoop-2.7.5 .


Step 3
[As root - conf files]
mkdir /etc/hadoop
cd /etc/hadoop
scp -r nn:/etc/hadoop/conf .
chmod -R 775 /etc/hadoop/


Step 4
[As root -  soft link creation]
ln -s /usr/local/hadoop-2.7.5 /usr/local/hadoop
ln -s /etc/hadoop/conf /usr/local/hadoop-2.7.5/etc/hadoop


Step 5
[As root - Directories creation]
mkdir -p /opt/HDPV2/logs /opt/HDPV2/pids  /opt/HDPV2/1 /opt/HDPV2/2  /opt/HDPV2/tmp
chmod 775 /opt/HDPV2/logs /opt/HDPV2/pids  /opt/HDPV2/1 /opt/HDPV2/2  /opt/HDPV2/tmp
chown hdfs:hadoop /opt/HDPV2/logs /opt/HDPV2/pids  /opt/HDPV2/1 /opt/HDPV2/2  /opt/HDPV2/tmp



At this point your hadoop node is ready.
Now comes the easy part.


Step 6
[As root - Update conf files on Namenode]

Update your hdfs-site.xml file
<property>
        <name>dfs.hosts</name>
        <value>/etc/hadoop/conf/dfs.hosts.include</value>
</property>


Similarly for yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
        <value>/etc/hadoop/conf/dfs.hosts.include</value>
</property>


Though I have already done this as part of my initial installation, you might want to do it to secure your installation and to allow only specific hosts to connect to the nn.

Now update your dfs.hosts.include file and the slaves file in the same directory to include the new host:
cat slaves
d1n
d2n
d3n
d4n

Once done, distribute slaves and dfs.hosts.include to nn, snn and rm; a small sketch follows.
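A minimal sketch, run from /etc/hadoop/conf on nn (assumes the passwordless ssh set up in Step 1):

for h in snn rm; do
  scp slaves dfs.hosts.include $h:/etc/hadoop/conf/
done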

Step 7
[As hdfs on nn and snn]
hdfs dfsadmin -refreshNodes
Note - You might need to restart your snn for this to take effect.
[As yarn on rm]
yarn rmadmin -refreshNodes

Step 8
[As hdfs -  Start hadoop on d4n]
hadoop-daemon.sh start datanode

[As yarn -  start nodemanager on d4n]
yarn-daemon.sh start nodemanager

Step 9
Verify the daemons running
[As yarn - on namenode]
yarn node -all -list
[As hdfs - on namenode]
hdfs dfsadmin -report -live


Step 10
To configure spark, follow my blog on Spark configuration; it is written for the complete cluster, but you can extend the same steps to a single-node addition.


The key change required is in the spark slaves configuration file.
[As root on rm]
cd /etc/spark/conf
Append d4n to slaves file.

Step 11    
[As spark - on d4n]
start-slave.sh spark://rm.novalocal:7077

This will start spark worker on d4n
You can verify the status from http://rm:8080 (WebUI to spark)

Step 12  
Finally, it's a good idea to run the balancer utility now.
hdfs balancer -threshold 1 
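-threshold 1 asks the balancer to bring every DataNode within 1% of the cluster's average utilisation (the default is 10, so 1 is aggressive and can run for a long time). If it crawls, the per-DataNode copy bandwidth can be raised first; a sketch, with an illustrative 100 MB/s value:

hdfs dfsadmin -setBalancerBandwidth 104857600   # bytes/second, ~100 MB/s
hdfs balancer -threshold 1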

HDPV2 - Yarn Administration Commands

In this blog I discuss the most commonly used YARN commands


1.    yarn top
    Lists applications, similar to top in Linux.
   
2.    yarn application -list
    Lists all applications running in the cluster.
   
3.    yarn application -list -appStates RUNNING
    Lists applications whose state is RUNNING.
   
4.    yarn application -list -appStates FAILED
    Lists applications whose state is FAILED.
   
5.    yarn application -status <application_ID>
    Shows the details of an application by application ID.

6.    yarn application -kill <application_ID>
    Kills an application by application ID.
   
7.    yarn node -all -list
    Checks the status of all nodes in the YARN cluster.
   
8.    yarn queue -status <queue_name>
    Shows the status of a queue running in the cluster.
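A small usage example tying 2 and 6 together: pull the IDs of running applications out of the list output, then kill one (the awk pattern assumes the default -list layout, where each data row starts with the application ID):

yarn application -list -appStates RUNNING | awk '/^application_/{print $1}'
yarn application -kill application_1525667692407_0002   # example ID, taken from the Oozie post below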

Hadoop V2 - Oozie Configuration and Job Submissions

In this blog I discuss how to submit an example job to Oozie


1. Shutdown oozie [As oozie]
    cd /usr/local/oozie
    bin/oozied.sh stop

   
2. Create Shared Lib [As oozie]
    cd /usr/local/oozie
    bin/oozie-setup.sh sharelib create -fs hdfs://192.168.2.101/users/oozie -locallib oozie-sharelib-5.0.0.tar.gz


3. Put Examples
    hdfs dfs -put examples /user/oozie

4. Update job.properties   
    Update job.properties (/user/oozie/examples/apps/map-reduce/) to match your hadoop configuration and put it back into the same HDFS folder; a sample follows.
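A minimal job.properties sketch matching the host names used in this series. Note the resource-manager property is called jobTracker in older example files and resourceManager in Oozie 5 examples - keep whichever name your extracted file already uses:

# hosts below are the nn/rm names from my cluster - adjust to yours
nameNode=hdfs://nn:8020
resourceManager=rm:8032
queueName=oozie
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce/workflow.xml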
   

5. Update oozie-site.xml (/usr/local/oozie/conf)
    See end of blog for full configuration
   
6. Restart oozie [As oozie user]
    cd /usr/local/oozie
    bin/oozied.sh start
   

7. Edit the Capacity Scheduler to match a queue for oozie
    Here my mapping is u:oozie:oozie, i.e. user oozie is mapped to a queue named oozie; a sketch of the scheduler entries follows.
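A sketch of the relevant capacity-scheduler.xml entries, assuming a simple two-queue layout (the capacity percentages are illustrative, not from my cluster):

<property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>default,oozie</value>
</property>
<property>
    <name>yarn.scheduler.capacity.root.default.capacity</name>
    <value>80</value>
</property>
<property>
    <name>yarn.scheduler.capacity.root.oozie.capacity</name>
    <value>20</value>
</property>
<property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <value>u:oozie:oozie</value>
</property>

Afterwards, refresh the queues with yarn rmadmin -refreshQueues.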
   
8. Submit a workflow

bin/oozie job -oozie http://192.168.1.71:11000/oozie -config examples/apps/map-reduce/job.properties -run
job: 0000002-180507002619511-oozie-oozi-W


9. Verify Workflow Status
Oozie Web Console

[Screenshot: Oozie Web Console]


Oozie CMD
[oozie@oem13cr2 oozie]$ bin/oozie job -oozie http://localhost:11000/oozie -info  0000002-180507002619511-oozie-oozi-W
Job ID : 0000002-180507002619511-oozie-oozi-W
------------------------------------------------------------------------------------------------------------------------------------
Workflow Name : map-reduce-wf
App Path      : hdfs://nn:8020/user/oozie/examples/apps/map-reduce/workflow.xml
Status        : RUNNING
Run           : 0
User          : oozie
Group         : -
Created       : 2018-05-07 04:39 GMT
Started       : 2018-05-07 04:39 GMT
Last Modified : 2018-05-07 04:39 GMT
Ended         : -
CoordAction ID: -

Actions
------------------------------------------------------------------------------------------------------------------------------------
ID                                                                            Status    Ext ID                 Ext Status Err Code
------------------------------------------------------------------------------------------------------------------------------------
0000002-180507002619511-oozie-oozi-W@:start:                                  OK        -                      OK         -
------------------------------------------------------------------------------------------------------------------------------------
0000002-180507002619511-oozie-oozi-W@mr-node                                  RUNNING   application_1525667692407_0002 RUNNING    -
------------------------------------------------------------------------------------------------------------------------------------

10. Verify from Resource Manager UI

[Screenshot: Resource Manager UI]
   
Appendix
oozie-site.xml


<property>
     <name>oozie.service.JPAService.jdbc.driver</name>
     <value>oracle.jdbc.driver.OracleDriver</value>
</property>

<property>
     <name>oozie.service.JPAService.jdbc.url</name>
     <value>jdbc:oracle:thin:@192.168.1.71:6633:EMPRD</value>
</property>

<property>
     <name>oozie.service.JPAService.jdbc.username</name>
     <value>oozie</value>
</property>

<property>
     <name>oozie.service.JPAService.jdbc.password</name>
     <value>oozie</value>
</property>
 <property>
        <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
        <value>*=/etc/hadoop/conf</value>
  </property>
<property>
    <name>oozie.service.HadoopAccessorService.nameNode.whitelist</name>
    <value>nn:8020</value>
</property>

<property>
    <name>oozie.actions.default.name-node</name>
    <value>hdfs://nn:8020</value>
</property>

<property>
    <name>oozie.service.HadoopAccessorService.jobTracker.whitelist</name>
    <value>rm:8032</value>
</property>
<property>
    <name>oozie.actions.default.job-tracker</name>
    <value>rm:8032</value>
</property>

Friday, May 4, 2018

HDPV2 - Oozie Setup

In this blog I am going to discuss setting up Oozie in your cluster.

I am going to use one of the edge nodes for this and not any of the nodes which are part of my cluster.

I have already set up sqoop on this node, so I already have ojdbc8.jar for Oracle.

Setup
Step 1 - Create user oozie [As root]
1. Create User Oozie
groupadd -g 1000 hadoop
useradd -u 1013  -g hadoop oozie [This should be done on both nn]


2. Setup Java [As root]
rpm -Uvh /tmp/jdk-8u152-linux-x64.rpm

3. Download Oozie and maven [As root]
curl http://www-eu.apache.org/dist/oozie/5.0.0/oozie-5.0.0.tar.gz -o oozie-5.0.0.tar.gz
curl http://mirror.olnevhost.net/pub/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz -o apache-maven-3.0.5-bin.tar.gz


4. Unzip Maven
cd /usr/local
tar -xf /tmp/apache-maven-3.0.5-bin.tar.gz
ln -s /usr/local/apache-maven-3.0.5 /usr/local/apache-maven


4. Unzip Oozie [As root]

cd /tmp
tar -xvf oozie-5.0.0.tar.gz



Building Oozie using maven can be tricky and you are almost certain to run into errors you have no idea about.

So be patient and resolve errors as they appear on the command line.


5. Export Maven Variables (add these to .bashrc and .bash_profile, then log out and log in again)
    export M2_HOME=/usr/local/apache-maven
    export M2=$M2_HOME/bin
    export PATH=$M2:$PATH


Make the changes below in pom.xml (in the untarred oozie directory).
If you are using Java 8, add the following profile to the profiles section of your pom.xml:
   <profile>
            <id>disable-doclint</id>
            <activation>
                <jdk>[1.8,)</jdk>
            </activation>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-javadoc-plugin</artifactId>
                        <configuration>
                            <additionalparam>-Xdoclint:none</additionalparam>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </profile>


6. Build oozie

cd /tmp/oozie-5.0.0

bin/mkdistro.sh -DskipTests  -Puber -Dhadoop.version=2.7.5
This command requires an internet connection, as it downloads dependencies from remote repositories,
so run it from a machine with internet connectivity.


Oozie distro created, DATE[2018.05.02-09:31:53GMT] VC-REV[unavailable], available at [/tmp/oozie-5.0.0/distro/target]



7.  Make Changes required to configure
 cd /tmp/oozie-5.0.0/distro/target/oozie-5.0.0-distro/oozie-5.0.0
 mkdir libext


 cd /tmp/oozie-5.0.0/sharelib
 find -name '*.jar' -exec cp  -f '{}'  /tmp/oozie-5.0.0/distro/target/oozie-5.0.0-distro/oozie-5.0.0/libext \;

  cd /tmp/oozie-5.0.0/distro/target/oozie-5.0.0-distro/oozie-5.0.0/
 find -name '*.jar' -exec cp -f '{}'  /tmp/oozie-5.0.0/distro/target/oozie-5.0.0-distro/oozie-5.0.0/libext \;

 cd libext
 curl https://ext4all.com/ext/download/ext-2.2.zip -o ext-2.2.zip

 #cp /usr/local/sqoop/lib/ojdbc8.jar .
 zip ojdbc.zip /usr/local/sqoop/lib/ojdbc8.jar

 mkdir ../lib
 cd ../lib
 cp /usr/local/sqoop/lib/ojdbc8.jar .
 cp -n ../libext/* .

 cd ..


bin/oozie-setup.sh

INFO: Oozie is ready to be started



8. Add the oozie proxyuser properties below to core-site.xml on the NameNode, SNN and RM.
Then restart using:
hadoop-daemon.sh stop namenode
hadoop-daemon.sh start namenode

yarn-daemon.sh stop resourcemanager
yarn-daemon.sh start resourcemanager

<property>
    <name>hadoop.proxyuser.oozie.hosts</name>
    <value>192.168.1.71</value>
</property>
<property>
    <name>hadoop.proxyuser.oozie.groups</name>
    <value>hadoop</value>
</property>

9. Copy Binaries
cd /usr/local/

cp -R /tmp/oozie-5.0.0/distro/target/oozie-5.0.0-distro/oozie-5.0.0/ .


10. Provide Permissions

chown oozie:hadoop -R oozie*

 
11. Update config files [As oozie user]
Update the oozie-site.xml file (/usr/local/oozie/conf/oozie-site.xml):
<property>
     <name>oozie.service.JPAService.jdbc.driver</name>
     <value>oracle.jdbc.driver.OracleDriver</value>
</property>

<property>
     <name>oozie.service.JPAService.jdbc.url</name>
     <value>jdbc:oracle:thin:@192.168.1.71:6633:EMPRD</value>
</property>

<property>
     <name>oozie.service.JPAService.jdbc.username</name>
     <value>oozie</value>
</property>

<property>
     <name>oozie.service.JPAService.jdbc.password</name>
     <value>oozie</value>
</property>


12. Create oozie user in Database (Oracle)

create user oozie identified by oozie default tablespace users temporary tablespace temp;

grant alter any index to oozie;
grant alter any table to oozie;
grant alter database link to oozie;
grant create any index to oozie;
grant create any sequence to oozie;
grant create database link to oozie;
grant create session to oozie;
grant create table to oozie;
grant drop any sequence to oozie;
grant select any dictionary to oozie;
grant drop any table to oozie;
grant create procedure to oozie;
grant create trigger to oozie;

alter user oozie default tablespace users;
alter user oozie quota unlimited on users;


13.  Validate oozie DB Connection [As oozie user]

cd /usr/local/oozie
bin/ooziedb.sh version

  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx1024m"

Oozie DB tool version: 4.2.0

Validate DB Connection
DONE
DB schema does not exist

Error: Oozie DB doesn't exist

(This error is expected at this point - the schema doesn't exist yet; it is created in the next steps.)



14. Generate the Oozie DB schema into a sql file

 bin/ooziedb.sh create -sqlfile oozie.sql

15. Create oozie schema in database

bin/ooziedb.sh create -sqlfile oozie.sql -run

Validate DB Connection
DONE
DB schema does not exist
Check OOZIE_SYS table does not exist
DONE
Create SQL schema
DONE
Create OOZIE_SYS table
DONE

Oozie DB has been created for Oozie version '5.0.0'



The SQL commands have been written to: oozie.sql

16. Validate connection using
bin/ooziedb.sh version


17.  Finalize Installation

ln -s /usr/local/oozie-5.0.0/ /usr/local/oozie
chown -R oozie:hadoop oozie*


18. Start Oozie
su - oozie
cd /usr/local/oozie
bin/oozied.sh start


19. Validate Admin Interface
bin/oozie admin -oozie http://localhost:11000/oozie -status

Status should be NORMAL.