
Thursday, February 22, 2018

Hadoop V1 Install - Mapred Configuration

MapReduce - Basic Configuration to Start Hadoop Daemons

Configuration of mapred-site.xml
### Only the property sections are shown ###


<property>
        <name>mapred.job.tracker</name>
        <value>nn:8021</value>
</property>
<property>
        <name>mapred.local.dir</name>
        <value>/opt/HDPV1/1/mr1,/opt/HDPV1/1/mr2</value>
</property>


Copy the Configuration to all the nodes
[As root]
# for i in $(cat /tmp/hosts) ;do scp mapred-site.xml ${i}:/etc/hadoop/conf/ ; done

[As root - Give Permissions]
# for i in $(cat /tmp/hosts) ;do ssh ${i} chmod -R 755 /etc/hadoop ; done;


# for i in $(cat /tmp/hosts) ;do ssh ${i} chmod  775 /opt/HDPV1/1/ ; done;
# for i in $(cat /tmp/hosts) ;do ssh ${i} mkdir /opt/HDPV1/1/mr1   ; done;
# for i in $(cat /tmp/hosts) ;do ssh ${i} mkdir /opt/HDPV1/1/mr2   ; done;
# for i in $(cat /tmp/hosts) ;do ssh ${i} chown mapred:hadoop   /opt/HDPV1/1/mr1 ; done;
# for i in $(cat /tmp/hosts) ;do ssh ${i} chown mapred:hadoop   /opt/HDPV1/1/mr2 ; done;
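For a single host, the directory preparation above boils down to mkdir + chmod (+ chown) per directory. A minimal local sketch of the mkdir/chmod part, with a mktemp scratch prefix standing in for /opt/HDPV1/1 so it runs without root or ssh:

```shell
#!/bin/sh
# Scratch prefix stands in for /opt/HDPV1/1 so this runs without root or ssh.
PREFIX=$(mktemp -d)
for d in mr1 mr2; do
    mkdir -p "${PREFIX}/${d}"     # -p also creates any missing parents
    chmod 775 "${PREFIX}/${d}"    # same mode used for the parent above
done
ls "${PREFIX}"                    # lists mr1 and mr2
```

On the real cluster the chown to mapred:hadoop still has to run as root, as in the loops above.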


[As mapred - on namenode]
Start mapred
start-mapred.sh


for i in $(cat /tmp/hosts) ; do ssh ${i} 'hostname; jps | grep -vi jps; echo' ;  done;


namenode.cluster.com
29378 JobTracker


d1node.cluster.com
4931 TaskTracker

d2node.cluster.com
7712 TaskTracker

d3node.cluster.com
2359 TaskTracker

d4node.cluster.com
17635 TaskTracker




To optimize performance, you can use the configuration file below for mapred-site.xml and restart the daemons with stop-mapred.sh and start-mapred.sh.


MapReduce - Performance Configuration File


<property>
        <name>mapred.job.tracker</name>
        <value>nn:8021</value>
</property>
<property>
        <name>mapred.local.dir</name>
        <value>/opt/HDPV1/1/mr1,/opt/HDPV1/1/mr2</value>
</property>
<property>
        <name>mapred.child.java.opts</name>
        <value>-Xmx1024m</value>
</property>
<property>
        <name>mapred.child.ulimit</name>
        <value>1572864</value>
</property>
<property>
        <name>mapred.tasktracker.map.tasks.maximum</name>
        <value>4</value>
</property>
<property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
</property>
<property>
        <name>io.sort.mb</name>
        <value>200</value>
</property>

<property>
        <name>io.sort.factor</name>
        <value>32</value>
</property>
<property>
        <name>mapred.compress.map.output</name>
        <value>true</value>
</property>
<property>
        <name>mapred.map.output.compression.codec</name>
        <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
<property>
        <name>mapred.jobtracker.taskScheduler</name>
        <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
        <name>mapred.reduce.tasks</name>
        <value>8</value>
</property>
<property>
        <name>mapred.reduce.slowstart.completed.maps</name>
        <value>0.7</value>
</property>
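As a sanity check on the file above: with 4 map slots and 2 reduce slots, each at -Xmx1024m, one TaskTracker can commit up to 6 GB of task heap (on top of the daemons themselves), so size your node RAM accordingly. A quick shell check:

```shell
#!/bin/sh
# Slot counts and heap size taken from the mapred-site.xml values above.
MAP_SLOTS=4
REDUCE_SLOTS=2
HEAP_MB=1024      # from -Xmx1024m
TOTAL_MB=$(( (MAP_SLOTS + REDUCE_SLOTS) * HEAP_MB ))
echo "Worst-case task heap per TaskTracker: ${TOTAL_MB} MB"   # 6144 MB
```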

 

Hadoop V1 Install - Hadoop - Software Configuration

This is a continuation of my previous post, Hadoop Software Setup and Environment Configuration.

[As root or sudo hduser]
cd /etc/hadoop/conf

Make the following changes in hadoop-env.sh:
cat hadoop-env.sh
export JAVA_HOME=/usr/java/latest   ## CustomSet
export HADOOP_LOG_DIR=/opt/HDPV1/logs #CustomSet
export HADOOP_PID_DIR=/opt/HDPV1/pids #CustomSet

 

Contents of core-site.xml (Only property section)
<property>
        <name>fs.default.name</name>
        <value>hdfs://nn:8020</value>
</property>
<property>
        <name>io.file.buffer.size</name>
        <value>65536</value>
</property>
<property>
        <name>fs.trash.interval</name>
        <value>600</value>
</property>
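The units in core-site.xml are easy to misread: io.file.buffer.size is in bytes and fs.trash.interval is in minutes. A quick conversion of the values above:

```shell
#!/bin/sh
# Values taken from the core-site.xml snippet above.
BUFFER_BYTES=65536
TRASH_MINUTES=600
echo "io.file.buffer.size: $(( BUFFER_BYTES / 1024 )) KB"    # 64 KB
echo "fs.trash.interval:   $(( TRASH_MINUTES / 60 )) hours"  # 10 hours
```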

Contents of hdfs-site.xml
(Only property section)
<property>
        <name>dfs.http.address</name>
        <value>nn:50070</value>
</property>
<property>
        <name>dfs.name.dir</name>
        <value>/opt/HDPV1/1/dfs/nn,/opt/HDPV1/2/dfs/nn</value>
</property>
<property>
        <name>dfs.data.dir</name>
        <value>/opt/HDPV1/1/dfs/dn,/opt/HDPV1/2/dfs/dn</value>
</property>
<property>
        <name>dfs.secondary.http.address</name>
        <value>snn:50090</value>
</property>
<property>
        <name>fs.checkpoint.dir</name>
        <value>/opt/HDPV1/1/dfs/snn</value>
</property>
<property>
        <name>dfs.block.size</name>
        <value>134217728</value>
</property>
<property>
        <name>dfs.balance.bandwidthPerSec</name>
        <value>1048576</value>
</property>
<property>
        <name>dfs.datanode.du.reserved</name>
        <value>4294967296</value>
</property>
<property>
        <name>dfs.namenode.handler.count</name>
        <value>20</value>
</property>
<property>
        <name>dfs.hosts</name>
        <value>/etc/hadoop/conf/dfs.hosts.include</value>
</property>
<property>
        <name>dfs.hosts.exclude</name>
        <value>/etc/hadoop/conf/dfs.hosts.exclude</value>
</property>
<property>
        <name>dfs.datanode.failed.volumes.tolerated</name>
        <value>0</value>
</property>
<property>
        <name>dfs.replication</name>
        <value>3</value>
</property>
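Several hdfs-site.xml values are raw bytes; converting them makes the intent clearer: 128 MB blocks, 1 MB/s balancer bandwidth, and 4 GB reserved per DataNode volume for non-DFS use.

```shell
#!/bin/sh
# Values taken from the hdfs-site.xml snippet above.
BLOCK=134217728
BANDWIDTH=1048576
RESERVED=4294967296
echo "dfs.block.size:              $(( BLOCK / 1048576 )) MB"        # 128 MB
echo "dfs.balance.bandwidthPerSec: $(( BANDWIDTH / 1048576 )) MB/s"  # 1 MB/s
echo "dfs.datanode.du.reserved:    $(( RESERVED / 1073741824 )) GB"  # 4 GB
```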



Contents of slaves file

cat slaves
d1n
d2n
d3n
d4n


## The contents of slaves and the include file are purposely kept in different formats. The contents of the include file must be in FQDN format, as that is how Hadoop DataNode daemons register themselves with the NN.
However, the slaves file is used for ssh access when the Hadoop daemons are started by start-dfs.sh (start-all.sh).

Contents of dfs.hosts.include

 cat dfs.hosts.include
d1node.cluster.com
d2node.cluster.com
d3node.cluster.com
d4node.cluster.com
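Since slaves holds short names and dfs.hosts.include holds FQDNs, a quick consistency check is to compare line counts and confirm every include entry carries the domain. A self-contained sketch (both files are recreated in a temp dir purely for illustration):

```shell
#!/bin/sh
# Recreate both files in a temp dir so the sketch is self-contained.
DIR=$(mktemp -d)
printf 'd1n\nd2n\nd3n\nd4n\n' > "${DIR}/slaves"
printf 'd1node.cluster.com\nd2node.cluster.com\nd3node.cluster.com\nd4node.cluster.com\n' \
    > "${DIR}/dfs.hosts.include"

# Same number of datanodes in both files?
[ "$(wc -l < "${DIR}/slaves")" -eq "$(wc -l < "${DIR}/dfs.hosts.include")" ] \
    && echo "counts match"
# Every include entry must be an FQDN (i.e., contain a dot).
! grep -qv '\.' "${DIR}/dfs.hosts.include" && echo "all FQDN"
```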


Contents of Masters file
(Remember: the masters file on the SNN should point to the NN, for failover)
[hduser@namenode conf]$ cat masters
snn



[On Name Node - As hduser]

NameNode Format
hadoop namenode -format

18/02/20 10:19:44 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/HDPV1/1/dfs/nn/current/edits
18/02/20 10:19:44 INFO common.Storage: Storage directory /opt/HDPV1/1/dfs/nn has been successfully formatted.
18/02/20 10:19:44 INFO common.Storage: Image file /opt/HDPV1/2/dfs/nn/current/fsimage of size 112 bytes saved in 0 seconds.
18/02/20 10:19:44 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/opt/HDPV1/2/dfs/nn/current/edits
18/02/20 10:19:44 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/HDPV1/2/dfs/nn/current/edits
18/02/20 10:19:44 INFO common.Storage: Storage directory /opt/HDPV1/2/dfs/nn has been successfully formatted.
18/02/20 10:19:44 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at namenode.cluster.com/192.168.10.51
************************************************************/

[As root]
Copy the Configuration to all the nodes
# for i in $(cat /tmp/hosts) ;do scp hadoop-env.sh dfs.hosts.include core-site.xml hdfs-site.xml slaves masters ${i}:/etc/hadoop/conf/ ; done

[As root - Give Permissions]
# for i in $(cat /tmp/hosts) ;do ssh ${i} chmod -R 755 /etc/hadoop ; done;

[As hduser - On NameNode]
start-dfs.sh

starting namenode, logging to /opt/HDPV1/logs/hadoop-hduser-namenode-namenode.cluster.com.out
d1n: starting datanode, logging to /opt/HDPV1/logs/hadoop-hduser-datanode-d1node.cluster.com.out
d3n: starting datanode, logging to /opt/HDPV1/logs/hadoop-hduser-datanode-d3node.cluster.com.out
d4n: starting datanode, logging to /opt/HDPV1/logs/hadoop-hduser-datanode-d4node.cluster.com.out
d2n: starting datanode, logging to /opt/HDPV1/logs/hadoop-hduser-datanode-d2node.cluster.com.out
snn: starting secondarynamenode, logging to /opt/HDPV1/logs/hadoop-hduser-secondarynamenode-snamenode.cluster.com.out



Verify Java processes (Hadoop Processes)
[As hduser - On NameNode]


# for i in $(cat /tmp/hosts) ; do ssh ${i} 'hostname; jps | grep -vi jps; echo' ;  done;
namenode.cluster.com
28557 NameNode

snamenode.cluster.com
13643 SecondaryNameNode

d1node.cluster.com
4476 DataNode

d2node.cluster.com
7285 DataNode

d3node.cluster.com
1928 DataNode

d4node.cluster.com
17210 DataNode


At this point the Hadoop cluster is up and running with 1 NN, 1 SNN, and 4 DNs.

 hadoop dfsadmin -report
Configured Capacity: 133660540928 (124.48 GB)
Present Capacity: 133660540928 (124.48 GB)
DFS Remaining: 133660311552 (124.48 GB)
DFS Used: 229376 (224 KB)
DFS Used%: 0%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 4 (4 total, 0 dead)

Name: 192.168.10.54:50010
Decommission Status : Normal
Configured Capacity: 33415135232 (31.12 GB)
DFS Used: 57344 (56 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 33415077888(31.12 GB)
DFS Used%: 0%
DFS Remaining%: 100%
Last contact: Thu Feb 22 04:03:50 CET 2018


Name: 192.168.10.57:50010
Decommission Status : Normal
Configured Capacity: 33415135232 (31.12 GB)
DFS Used: 57344 (56 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 33415077888(31.12 GB)
DFS Used%: 0%
DFS Remaining%: 100%
Last contact: Thu Feb 22 04:03:49 CET 2018


Name: 192.168.10.55:50010
Decommission Status : Normal
Configured Capacity: 33415135232 (31.12 GB)
DFS Used: 57344 (56 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 33415077888(31.12 GB)
DFS Used%: 0%
DFS Remaining%: 100%
Last contact: Thu Feb 22 04:03:50 CET 2018


Name: 192.168.10.58:50010
Decommission Status : Normal
Configured Capacity: 33415135232 (31.12 GB)
DFS Used: 57344 (56 KB)
Non DFS Used: 0 (0 KB)
DFS Remaining: 33415077888(31.12 GB)
DFS Used%: 0%
DFS Remaining%: 100%
Last contact: Thu Feb 22 04:03:50 CET 2018

 
# Finally, grant 777 permissions to /tmp on HDFS
[hduser@namenode ~]$ hadoop fs -chmod 777 /tmp

 

Hadoop V1 Install - Hadoop Software Setup and Environment Configuration

This is a continuation of my previous post on the Hadoop V1 prerequisites.

Step 1
[As root -  Namenode - Send Hadoop Binaries]
# for i in $(cat hosts) ; do echo "scp hadoop-1.2.1.tar.gz ${i}:/tmp &" >> /tmp/sendhdpv1.bash ; done
bash /tmp/sendhdpv1.bash


Step 2
[As root - Extract Hadoop]
#for i in $(cat hosts) ;do ssh ${i} tar -xzf /tmp/hadoop-1.2.1.tar.gz -C /usr/local; done

Step 3 

[As root - Setup sudoers configuration]
# for i in $(cat hosts) ; do ssh ${i} 'echo "hduser        ALL=(ALL)       NOPASSWD: ALL" >> /etc/sudoers'; done
Step 4
[As root - Create Conf Directory]
#for i in $(cat hosts) ;do ssh ${i} mkdir /etc/hadoop; done



Step 5 - All Other Configurations
[As root - Move the conf directory]
# for i in $(cat hosts) ;do ssh ${i} mv /usr/local/hadoop-1.2.1/conf /etc/hadoop/conf; done

[As root - Give Permissions]
# for i in $(cat hosts) ;do ssh ${i} chmod -R 755 /etc/hadoop ; done;


# for i in $(cat /tmp/hosts) ; do ssh ${i} mkdir -p /opt/HDPV1/logs ; done;
# for i in $(cat /tmp/hosts) ; do ssh ${i} chmod 777 /opt/HDPV1/logs ; done;

# for i in $(cat /tmp/hosts) ; do ssh ${i} mkdir -p /opt/HDPV1/pids ; done;
# for i in $(cat /tmp/hosts) ; do ssh ${i} chmod 777 /opt/HDPV1/pids ; done;


[As root - Create Soft Link to Hadoop]
# for i in $(cat hosts) ;do ssh ${i} ln -s /usr/local/hadoop-1.2.1 /usr/local/hadoop  ; done


[As root - Create Soft Link]
# for i in $(cat hosts) ;do ssh ${i} ln -s /etc/hadoop/conf /usr/local/hadoop-1.2.1/conf ; done


[As hduser and mapred - Set Environment Variables (change hduser to mapred for the second run)]

# for i in $(cat hosts) ; do ssh ${i} "echo 'export HADOOP_PREFIX=/usr/local/hadoop' >> /home/hduser/.bashrc" ; done
# for i in $(cat hosts) ; do ssh ${i} "echo 'export JAVA_HOME=/usr/java/latest' >> /home/hduser/.bashrc" ; done

# for i in $(cat hosts) ; do ssh ${i} "echo 'export LOG=/opt/HDPV1/logs' >> /home/hduser/.bashrc" ; done

# for i in $(cat hosts) ; do ssh ${i} "echo 'export CONF=/etc/hadoop/conf' >> /home/hduser/.bashrc" ; done

# for i in $(cat hosts) ; do ssh ${i} "echo 'PATH=\$JAVA_HOME/bin:\$HADOOP_PREFIX/bin:\$HADOOP_PREFIX/sbin:\$PATH' >> /home/hduser/.bashrc" ; done
# for i in $(cat hosts) ; do ssh ${i} "echo 'export PATH' >> /home/hduser/.bashrc" ; done
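An alternative to the per-line echo loops is to write the whole block once with a heredoc and ship that file to each host, which sidesteps the nested-quoting pitfalls. A minimal local sketch, with a temp file standing in for /home/hduser/.bashrc (paths taken from the setup above):

```shell
#!/bin/sh
# Temp file stands in for /home/hduser/.bashrc; in a real run you would
# scp this file out to each host instead of editing it per line.
RC=$(mktemp)
cat >> "$RC" <<'EOF'
export HADOOP_PREFIX=/usr/local/hadoop
export JAVA_HOME=/usr/java/latest
export LOG=/opt/HDPV1/logs
export CONF=/etc/hadoop/conf
PATH=$JAVA_HOME/bin:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin:$PATH
export PATH
EOF
# Source it locally to verify the PATH ordering comes out as intended.
. "$RC"
echo "$PATH" | grep -q '/usr/local/hadoop/bin' && echo "PATH set"
```

The quoted 'EOF' delimiter keeps $JAVA_HOME and friends literal in the file, so they expand only when .bashrc is sourced on the target host.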


 

Hadoop V1 Install - Pre-req Setup

In this post we will complete the prerequisites for a Hadoop V1 cluster.

The prerequisites are largely the same for Hadoop V1 and Hadoop V2; we will see what changes in later posts.



Below is my /etc/hosts for the HDP V1 cluster. The names reflect each host's role: nn and snn are the NameNode and Secondary NameNode, and d1n-d4n are DataNodes.

Our controller node is the namenode (nn, namenode.cluster.com).
We start by putting the following in the /etc/hosts file of the namenode:


127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.10.51 namenode.cluster.com      namenode nn
192.168.10.53 snamenode.cluster.com     snamenode snn
192.168.10.54 d1node.cluster.com        d1node      d1n
192.168.10.55 d2node.cluster.com        d2node      d2n
192.168.10.58 d3node.cluster.com        d3node      d3n
192.168.10.57 d4node.cluster.com        d4node    d4n





Step 1  - Download Hadoop V1 (1.2.1)

wget http://www-us.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
--2018-02-20 06:22:53--  http://www-us.apache.org/dist/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
Connecting to 172.20.24.50:8080... connected.
Proxy request sent, awaiting response... 200 OK

Length: 63851630 (61M) [application/x-gzip]
Saving to: ‘hadoop-1.2.1.tar.gz’

100%[==========================================================================>] 63,851,630   369KB/s   in 3m 6s

2018-02-20 06:25:59 (335 KB/s) - ‘hadoop-1.2.1.tar.gz’ saved [63851630/63851630]



Step 2
Set up ssh equivalency between the root users on all nodes (you can remove this later if you want).
This lets us do the entire setup as the root user.

[As root]
./sshUserSetup.sh  -user root  -hosts "nn snn d1n d2n d3n d4n" -noPromptPassphrase -confirm -advanced

Enter the passwords when prompted; at the end of the script you should have passwordless connectivity between the root users on all nodes.

The sshUserSetup.sh script will be familiar if you are an Oracle DBA.
If not, you can do this manually, or provide your email below and I'll send it over.
All the script does is create ssh equivalency between the given nodes.

Step 3
[As root - Managed Hosts file creation]
Create a hosts file, one host per line.
 

cd /tmp
chmod 777 hosts
cat hosts
nn
snn
d1n
d2n
d3n
d4n
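The same file can be generated in one shot with a heredoc; a side-effect-free sketch (mktemp stands in for /tmp/hosts):

```shell
#!/bin/sh
# mktemp keeps the sketch side-effect free; the real file lives at /tmp/hosts.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
nn
snn
d1n
d2n
d3n
d4n
EOF
wc -l < "$HOSTS"   # 6 hosts
```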


Step 4
[As root - Hosts File Update]
##for i in $(cat hosts) ; do scp /etc/hosts ${i}:/etc/hosts; done

Step 5
[As root - Group Creation]
## for i in $(cat hosts) ; do echo "ssh ${i} groupadd -g 1000 hadoop" >> /tmp/useradd; done ; bash /tmp/useradd ; rm -f /tmp/useradd

Step 6
[As root - User Creation]
## for i in $(cat hosts) ; do echo "ssh ${i} useradd -u 1002  -g hadoop hduser" >> /tmp/useradd; done ; bash /tmp/useradd ; rm -f /tmp/useradd 

for i in $(cat hosts) ; do echo "ssh ${i} useradd -u 1003  -g hadoop mapred" >> /tmp/useradd; done ; bash /tmp/useradd ; rm -f /tmp/useradd

Step 7
[As root - hduser and mapred user password change ]

Change hduser to mapred for second execution.
Create Script as below and run it


#!/bin/bash
for server in `cat hosts`; do
echo $server;
ssh ${server} 'passwd hduser <<EOF
hadoop
hadoop
EOF';
done
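If your distribution ships chpasswd, a non-interactive one-liner per host is an alternative to the passwd heredoc. The sketch below only builds the command file (mirroring the /tmp/useradd pattern above) so it can be reviewed before running; the host list and the availability of chpasswd on the targets are assumptions:

```shell
#!/bin/sh
# Builds (but does not execute) per-host chpasswd commands, mirroring the
# /tmp/useradd pattern above; assumes chpasswd exists on the target hosts.
HOSTS=$(mktemp)
CMDS=$(mktemp)
printf 'nn\nsnn\nd1n\nd2n\nd3n\nd4n\n' > "$HOSTS"
for h in $(cat "$HOSTS"); do
    echo "ssh ${h} \"echo 'hduser:hadoop' | chpasswd\"" >> "$CMDS"
done
cat "$CMDS"    # review, then run with: bash "$CMDS"
```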


Step 8
[Hadoop User Equivalency] [As hduser and mapred -  on namenode]
This will create ssh equivalency for hduser and mapred
#./sshUserSetup.sh  -user hduser  -hosts "nn snn d1n d2n d3n d4n" -noPromptPassphrase -confirm -advanced

#./sshUserSetup.sh  -user mapred  -hosts "nn snn d1n d2n d3n d4n" -noPromptPassphrase -confirm -advanced
 


Step 9
[As root - Java Installation]
#for i in $(cat hosts) ; do echo "scp jdk-8u152-linux-x64.rpm ${i}:/tmp &" >> /tmp/sendjdk.bash ; done
Then run the contents of the generated file (bash /tmp/sendjdk.bash).

Step 10
[As root - Install Java] 

# for i in $(cat hosts) ; do  ssh ${i}  rpm -Uvh /tmp/jdk-8u152-linux-x64.rpm  ; done;

Step 11
[AS root - Verify Changes done till now]
Java
# for i in $(cat hosts) ; do ssh ${i}   java -version  ;  done;



[Other Automated Helper Scripts]
[Partition Creation]
# for i in $(cat hosts) ; do ssh ${i} 'echo -e "n\np\n1\n\n\nw" | fdisk /dev/sdc'  ; done;
# for i in $(cat hosts) ; do ssh ${i} 'echo -e "n\np\n1\n\n\nt\n8e\nw" | fdisk /dev/sdd'  ; done;

[For deleting the partitions if required]

# for i in $(cat hosts) ; do ssh ${i} 'echo -e "d\nw" | fdisk /dev/sdd'  ; done;

[File System formatting]

# for i in $(cat hosts) ; do ssh ${i} mkfs.ext4 /dev/sdc1   ; done;

[Directory Creation]
# for i in $(cat hosts) ; do ssh ${i} mkdir /opt/HDPV1; done;

[Ownership Change]
# for i in $(cat hosts) ; do ssh ${i} chmod 777 /opt  ; done;
# for i in $(cat hosts) ; do ssh ${i} chown hduser:hadoop /opt/HDPV1 ; done;

[FSTAB Entry Addition]

# for i in $(cat hosts) ; do ssh ${i} "echo '/dev/sdc1       /opt/HDPV1 ext4 defaults 1 2' >> /etc/fstab" ; done

[Mounting the file system]

#for i in $(cat hosts) ; do ssh ${i} mount /opt/HDPV1 ; done


[VG and FS Extension]
for i in $(cat hosts) ; do ssh ${i} pvcreate /dev/sdd1  ; done;
for i in $(cat hosts) ; do ssh ${i} vgextend rootvg /dev/sdd1 ; done;
for i in $(cat hosts) ; do ssh ${i} lvextend /dev/mapper/rootvg-root_lv -L 15G  ; done;
for i in $(cat hosts) ; do ssh ${i} resize2fs /dev/mapper/rootvg-root_lv  ; done;
for i in $(cat hosts) ; do ssh ${i} lvextend /dev/mapper/rootvg-tmp_lv -L 8G  ; done;
for i in $(cat hosts) ; do ssh ${i} resize2fs /dev/mapper/rootvg-tmp_lv  ; done;

Thursday, February 1, 2018

OEM 13cR2 - Plugin Deployment on OMS - UI

GUI plugin deployment is similar to CLI deployment; you are doing the same thing through clicks instead of the command line.


To import a new plugin, if required, you can use the CLI method (which I prefer), described in my last blog.

In this post we assume the plugin is already in the OMS repository but not yet deployed on the OMS server.

We are going to deploy an Engineered Systems plugin (ZDLRA) on the OMS server.



Go to Setup => Extensibility => Plugins



Select Plugin to be deployed (ZDLRA in this case)



Select Deploy on Management Servers and click Next



The prerequisite check runs automatically; then click Next.



Provide the required inputs and check the backup taken check box.


Click Next twice and Final Page appears.




Check the Status using emctl

[oracle@omshost ~]$ emctl status oms -details


Once OMS is UP verify the newly deployed plugin



emcli login -username=sysman
Enter password :

Login successful
[ora132@exadataoemoms1 ~]$ emcli sync
Synchronized successfully
[oracle@omshost ~]$ emcli list_plugins_on_server
OMS name is exadataoemoms1:4889_Management_Service
Plug-in Name                                 Plugin-id                     Version [revision]
Zero Data Loss Recovery Appliance            oracle.sysman.am              13.2.2.0.0

OEM 13cR2 - Plugin Deployment on OMS - CLI

This post discusses a manual upgrade of a plugin on the OMS server.

[oracle@omshost stage_patch]$ emcli list_plugins_on_server

OMS name is omshost:4889_Management_Service
Plug-in Name                                 Plugin-id                     Version [revision]

Oracle Cloud Framework                       oracle.sysman.cfw             13.2.2.0.0
Oracle Database                              oracle.sysman.db              13.2.2.0.0
Oracle Fusion Middleware                     oracle.sysman.emas            13.2.2.0.0
Systems Infrastructure                       oracle.sysman.si              13.2.2.0.0
Oracle Exadata                               oracle.sysman.xa              13.2.2.0.0

We are upgrading the Systems Infrastructure plugin (oracle.sysman.si) in this part.


Import the manually downloaded opar file (Plugin File)

Download the File from Oracle –
http://www.oracle.com/technetwork/oem/enterprise-manager/downloads/oem-plugin-update-3774387.html


emcli login -username=sysman
export PATH=$OMS_HOME/bin:$OMS_HOME/OMSPatcher:$PATH

emcli import_update -file="/stage_patch/13.2.3.0.0_oracle.sysman.si_2000_0.opar" -omslocal

 
Processing update: Plug-in - Enterprise Manager Systems Infrastructure plug-in with support for datacenter hardware, OS and virtualization.
Successfully uploaded the update to Enterprise Manager. Use the Self Update Console to manage this update.


[oracle@omshost bin]$ emcli login -username=sysman
Enter password :

Login successful
[oracle@omshost bin]$ emcli deploy_plugin_on_server -plugin="oracle.sysman.si:13.2.3.0.0" -repo_backup_taken


(Note: this deploys the plugin oracle.sysman.si:13.2.3.0.0. The version is given after the plugin name so that, if the plugin already exists, the new version is deployed; even when the plugin does not exist, it is good practice to specify the version.)
 
Enter repository DB sys password:

Performing pre-requisites check... This will take a while.
Prerequisites check succeeded
Deployment of plug-in on the management servers is in progress
Use "emcli get_plugin_deployment_status -plugin=oracle.sysman.si" to track the plug-in deployment status.

Note: Deployment of plug-in on the Management Server will require downtime.
      All currently connected users will be automatically disconnected from the Enterprise Manager.
      During downtime, users will not be able to connect to Enterprise Manager, and
      Enterprise Manager will not be able to monitor any targets.
      During downtime, use "emctl status oms -details" to track the deployment status during downtime.


Check for Status During Deployment

 
[oracle@omshost bin]$ emctl status oms -details
Oracle Enterprise Manager Cloud Control 13c Release 2
Copyright (c) 1996, 2016 Oracle Corporation.  All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
Console Server Host        : omshost
HTTP Console Port          : 7788
HTTPS Console Port         : 7803
HTTP Upload Port           : 4889
HTTPS Upload Port          : 4903
EM Instance Home           : /opt/oracle/em13cr2/gc_inst/em/EMGC_OMS1
OMS Log Directory Location : /opt/oracle/em13cr2/gc_inst/em/EMGC_OMS1/sysman/log
OMS is not configured with SLB or virtual hostname
Agent Upload is locked.
OMS Console is locked.
Active CA ID: 1
Console URL: https://omshost:7803/em
Upload URL: https://omshost:4903/empbs/upload

WLS Domain Information
Domain Name            : GCDomain
Admin Server Host      : omshost
Admin Server HTTPS Port: 7102
Admin Server is STARTING

Oracle Management Server Information
Managed Server Instance Name: EMGC_OMS1
Oracle Management Server Instance Host: omshost
WebTier is Down

Oracle Management Server status is down possibly because plug-ins are being deployed or undeployed from it. Use -details option to get more details about the plug-in deployment status.
Plug-in Deployment/Undeployment Status

Destination          : Management Server - omshost:4889_Management_Service
Plug-in Name         : Systems Infrastructure
Version              : 13.2.3.0.0
ID                   : oracle.sysman.si
Content              : Plug-in
Action               : Deployment
Status               : Deploying
Steps Info:
---------------------------------------- ------------------------- ------------------------- ----------
Step                                     Start Time                End Time                  Status
---------------------------------------- ------------------------- ------------------------- ----------
Submit job for deployment                1/30/18 2:36:07 PM CET    1/30/18 2:36:07 PM CET    Success

Initialize                               1/30/18 2:36:09 PM CET    1/30/18 2:36:16 PM CET    Success

Install software                         1/30/18 2:36:16 PM CET    1/30/18 2:36:18 PM CET    Success

Validate plug-in home                    1/30/18 2:36:19 PM CET    1/30/18 2:36:20 PM CET    Success

Perform custom preconfiguration          1/30/18 2:36:20 PM CET    1/30/18 2:36:20 PM CET    Success

Check mandatory patches                  1/30/18 2:36:20 PM CET    1/30/18 2:36:20 PM CET    Success

Generate metadata SQL                    1/30/18 2:36:20 PM CET    1/30/18 2:36:20 PM CET    Success

Preconfigure Management Repository       1/30/18 2:36:21 PM CET    1/30/18 2:36:21 PM CET    Success

Preregister DLF                          1/30/18 2:36:21 PM CET    1/30/18 2:36:21 PM CET    Success

Stop management server                   1/30/18 2:36:21 PM CET    1/30/18 2:38:35 PM CET    Success

Register DLF                             1/30/18 2:38:36 PM CET    N/A                       Running

Configure Management Repository          1/30/18 2:38:36 PM CET    N/A                       Running

Configure middle tier                    1/30/18 2:38:36 PM CET    N/A                       Running

---------------------------------------- ------------------------- ------------------------- ----------

BI Publisher Server Information
BI Publisher Managed Server Name: BIP
BI Publisher Server is Down

BI Publisher HTTP Managed Server Port   : 9701
BI Publisher HTTPS Managed Server Port  : 9803
BI Publisher HTTP OHS Port              : 9788
BI Publisher HTTPS OHS Port             : 9851
BI Publisher is locked.
BI Publisher Server named 'BIP' is configured to run at URL: https://omshost:9851/xmlpserver
BI Publisher Server Logs: /opt/oracle/em13cr2/gc_inst/user_projects/domains/GCDomain/servers/BIP/logs/
BI Publisher Log        : /opt/oracle/em13cr2/gc_inst/user_projects/domains/GCDomain/servers/BIP/logs/bipublisher/bipublisher.log

If required logs are accessible in
<MIDDLEWARE_HOME>/cfgtoollogs/pluginca




Verify the newly deployed Plugin
[oracle@omshost ~]$ emcli login -username=sysman
Enter password :

Login successful

[oracle@omshost ~]$ emcli sync
Synchronized successfully
[oracle@omshost ~]$ emcli list_plugins_on_server
OMS name is omshost:4889_Management_Service

Plug-in Name                                 Plugin-id                     Version [revision]

Oracle Cloud Framework                       oracle.sysman.cfw             13.2.2.0.0
Oracle Database                              oracle.sysman.db              13.2.2.0.0
Oracle Fusion Middleware                     oracle.sysman.emas            13.2.2.0.0
Systems Infrastructure                       oracle.sysman.si              13.2.3.0.0
Oracle Exadata                               oracle.sysman.xa              13.2.2.0.0


The SI Plugin has been upgraded 13.2.3.0.0

OEM 13cR2 - Catalog Update (Manual)

Download Catalog File - https://updates.oracle.com/Orion/Download/download_patch/p9348486_112000_Generic.zip

(Set Enterprise Manager Cloud Control to offline mode. To do so:
From the Setup menu, select Provisioning and Patching, then select Offline Patching.
In the Online and Offline Settings tab, select Offline.)

Go to Setup => Extensibility => Self Update => Check for Updates; you will get the URL to download.
(Oracle Documentation Reference - https://docs.oracle.com/cd/E73210_01/EMADM/GUID-ECA444F6-88B3-436D-8B96-F65581AD2E2E.htm#GUID-5DF33C50-59DD-4187-B769-D66E886EC1BB)

Import the Catalog in OEM  using emcli utility


[oracle@omshost stage_patch]$ emcli import_update_catalog -file="/stage_patch/p9348486_112000_Generic.zip" -omslocal
Processing catalog for Agent Software
Processing update: Agent Software - Agent Software (12.1.0.5.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.4.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.3.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.2.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.1.0) for Microsoft Windows (32-bit)
.
.
.
Processing update: Event Monitoring Service - Always-On Monitoring Archive
Processing catalog for Diagnostic Tools

Successfully uploaded the Self Update catalog to Enterprise Manager. Use the Self Update Console to view and manage updates.
Time taken for import catalog is 01:45.353.