In this blog I discuss my configuration of automatic failover using QJM.
It continues my previous QJM blog, which covered manual failover configuration.
Automatic failover uses the ZKFC (ZooKeeper Failover Controller) process on the NameNodes
and ZooKeeper processes on the quorum nodes.
1. Set Automatic failover in hdfs-site.xml
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
2. Configure the ZooKeeper quorum parameter (core-site.xml)
<property>
<name>ha.zookeeper.quorum</name>
<value>d1.novalocal.com:2181,d2.novalocal.com:2181,d3.novalocal.com:2181</value>
</property>
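The quorum value must name every ZooKeeper node as host:port (2181 is ZooKeeper's default client port), and an odd number of nodes is recommended so a majority can always be formed. A quick, hypothetical sanity check on the string:

```shell
# ha.zookeeper.quorum value from core-site.xml (hostnames as configured above)
quorum="d1.novalocal.com:2181,d2.novalocal.com:2181,d3.novalocal.com:2181"

# Count the comma-separated host:port entries
n=$(echo "$quorum" | tr ',' '\n' | grep -c '^[^:][^:]*:[0-9][0-9]*$')
echo "$n quorum entries"   # 3 here; an odd count is recommended
```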
Copy the updated files to the standby NameNode (snn):
scp core-site.xml hdfs-site.xml snn:/etc/hadoop/conf
3. Create the zkfc user on d1n, d2n, and d3n and set its password to hadoop
useradd -u 1011 -g hadoop zkfc
passwd zkfc
4. Download ZooKeeper [as root on d1n]
[root@d1n tmp]# curl http://www-eu.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz -o zookeeper-3.4.10.tar.gz
Untar ZooKeeper:
cd /usr/local
tar -xzf /tmp/zookeeper-3.4.10.tar.gz
5. Configure ZooKeeper [on d1n]
cd /usr/local/zookeeper-3.4.10/conf
cp zoo_sample.cfg zoo.cfg
Edit zoo.cfg and make the change below:
dataDir=/opt/HDPV2/zookeeper
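Note that a multi-node ensemble also needs server.N entries in zoo.cfg and a myid file in each node's dataDir; the stock zoo_sample.cfg has neither. A sketch of those additions (temporary directories stand in for the ZooKeeper conf directory and /opt/HDPV2/zookeeper so the snippet is self-contained; 2888 and 3888 are ZooKeeper's default peer and leader-election ports):

```shell
# Stand-ins for the real conf dir and dataDir:
conf_dir=$(mktemp -d)
data_dir=$(mktemp -d)

# Quorum members: server.N=host:peerPort:electionPort
cat >> "$conf_dir/zoo.cfg" <<'EOF'
dataDir=/opt/HDPV2/zookeeper
server.1=d1n:2888:3888
server.2=d2n:2888:3888
server.3=d3n:2888:3888
EOF

# On each node, dataDir/myid holds that node's server number
# (1 on d1n, 2 on d2n, 3 on d3n). Shown here for d1n:
echo 1 > "$data_dir/myid"
```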
6. Create a hosts file
Create an all_hosts file in /tmp listing the quorum nodes:
cat /tmp/all_hosts
d1n
d2n
d3n
7. Set up passwordless SSH
[As root on d1n; repeat as the zkfc user, copying to zkfc@ instead of root@]
ssh-keygen
ssh-copy-id root@d1n
ssh-copy-id root@d2n
ssh-copy-id root@d3n
8. Create directories
[As root on d1n]
#for i in $(cat /tmp/all_hosts) ;do ssh ${i} mkdir -p /opt/HDPV2/zookeeper; done
#for i in $(cat /tmp/all_hosts) ;do ssh ${i} chmod 775 /opt/HDPV2/zookeeper ; done
#for i in $(cat /tmp/all_hosts) ;do ssh ${i} chown zkfc:hadoop /opt/HDPV2/zookeeper; done
# scp -r /usr/local/zookeeper-3.4.10 d2n:/usr/local
# scp -r /usr/local/zookeeper-3.4.10 d3n:/usr/local
# for i in $(cat /tmp/all_hosts) ;do ssh ${i} ln -s /usr/local/zookeeper-3.4.10/ /usr/local/zookeeper ; done
9. Start the ZooKeeper quorum processes
[As zkfc on d1n]
for i in $(cat /tmp/all_hosts) ;do ssh ${i} /usr/local/zookeeper/bin/zkServer.sh start /usr/local/zookeeper/conf/zoo.cfg ; done
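Once started, each node reports its mode via zkServer.sh status; a healthy three-node ensemble has exactly one leader and two followers. A sketch of checking that, where check_quorum is a hypothetical helper run against captured status output (demo text stands in for a live cluster):

```shell
# On a live cluster the input would come from something like:
#   for i in $(cat /tmp/all_hosts); do
#     ssh ${i} /usr/local/zookeeper/bin/zkServer.sh status
#   done > /tmp/zk_status 2>&1
# check_quorum: pass if the captured output shows exactly one leader
# and at least one follower.
check_quorum() {
  local leaders followers
  leaders=$(grep -c '^Mode: leader$' "$1")
  followers=$(grep -c '^Mode: follower$' "$1")
  [ "$leaders" -eq 1 ] && [ "$followers" -ge 1 ]
}

# Demo with sample output instead of a live cluster:
printf 'Mode: leader\nMode: follower\nMode: follower\n' > /tmp/zk_status
check_quorum /tmp/zk_status && echo "quorum healthy"
```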
10. Stop the NameNode on both the active and standby nodes
hadoop-daemon.sh stop namenode
11. Start both NameNodes
hadoop-daemon.sh start namenode
12. Initialize the HA state in ZooKeeper and start ZKFC on both NameNodes
Format the znode once, from one NameNode:
hdfs zkfc -formatZK
Then start the ZKFC daemon on each NameNode:
hadoop-daemon.sh start zkfc
Whichever NameNode's ZKFC starts first becomes the active node.
To verify the automatic failover configuration, kill the process id of the active NameNode and watch the standby become active.
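The kill-and-watch test can be checked with hdfs haadmin -getServiceState; the service ids nn1 and nn2 are assumptions carried over from the manual-failover setup (dfs.ha.namenodes.&lt;nameservice&gt;), and check_ha_pair is a hypothetical helper shown with demo values in place of live cluster output:

```shell
# On a live cluster the two states would come from:
#   s1=$(hdfs haadmin -getServiceState nn1)
#   s2=$(hdfs haadmin -getServiceState nn2)
# check_ha_pair: healthy HA means exactly one active and one standby.
check_ha_pair() {
  { [ "$1" = active ] && [ "$2" = standby ]; } ||
  { [ "$1" = standby ] && [ "$2" = active ]; }
}

# Demo values standing in for live output:
check_ha_pair active standby && echo "HA pair healthy"
```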