The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models.
This article will help you install and configure a single-node Hadoop cluster step by step.
Step 1. Install Java
Before installing Hadoop, make sure Java is installed on your system. If you do not have Java installed, use the following article to install Java first.
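To quickly verify an existing Java installation, you can run the command below; the version shown in the comment is only an example matching the JDK used later in this guide.
# java -version    # should report the installed JDK, e.g. java version "1.7.0_17"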
Step 2. Create User Account
Create a system user account to use for the Hadoop installation.
# useradd hadoop
# passwd hadoop
Changing password for user hadoop.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
Step 3. Configuring Key Based Login
The hadoop user must be able to SSH to itself without a password. The following commands enable key-based login for the hadoop user.
# su - hadoop
$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
$ exit
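To confirm that key-based login works (a quick check, not part of the original steps), open an SSH session to localhost as the hadoop user; it should not ask for a password. The very first connection will only ask you to accept the host key.
# su - hadoop
$ ssh localhost hostname
$ exit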
Step 4. Download and Extract Hadoop Source
Download the latest available Hadoop version from its official site, then follow the steps below.
# mkdir /opt/hadoop
# cd /opt/hadoop/
# wget http://apache.mesi.com.ar/hadoop/common/hadoop-1.2.1/hadoop-1.2.1.tar.gz
# tar -xzf hadoop-1.2.1.tar.gz
# mv hadoop-1.2.1 hadoop
# chown -R hadoop /opt/hadoop
# cd /opt/hadoop/hadoop/
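Optionally (an extra convenience, not required by this guide), you can add Hadoop's bin directory to the hadoop user's PATH so the hadoop command can be run from any directory. Append the following line to /home/hadoop/.bashrc, assuming the /opt/hadoop/hadoop layout used above:
export PATH=$PATH:/opt/hadoop/hadoop/bin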
Step 5: Configure Hadoop
Edit the Hadoop configuration files and make the following changes.
5.1 Edit core-site.xml
# vim conf/core-site.xml
Add the following inside the <configuration> tag:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000/</value>
</property>
<property>
  <name>dfs.permissions</name>
  <value>false</value>
</property>
5.2 Edit hdfs-site.xml
# vim conf/hdfs-site.xml
Add the following inside the <configuration> tag:

<property>
  <name>dfs.data.dir</name>
  <value>/opt/hadoop/hadoop/dfs/name/data</value>
  <final>true</final>
</property>
<property>
  <name>dfs.name.dir</name>
  <value>/opt/hadoop/hadoop/dfs/name</value>
  <final>true</final>
</property>
<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
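If you like, the directories referenced by dfs.name.dir and dfs.data.dir can be created and handed to the hadoop user in advance (an optional step; Hadoop also creates them when the NameNode is formatted and the DataNode starts):
# mkdir -p /opt/hadoop/hadoop/dfs/name/data
# chown -R hadoop /opt/hadoop/hadoop/dfs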
5.3 Edit mapred-site.xml
# vim conf/mapred-site.xml
Add the following inside the <configuration> tag:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
5.4 Edit hadoop-env.sh
# vim conf/hadoop-env.sh
export JAVA_HOME=/opt/jdk1.7.0_17
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
Set the JAVA_HOME path according to where Java is installed on your system.
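If you are unsure where Java is installed, one way to locate it (assuming the java binary is on your PATH) is:
# readlink -f $(which java)
/opt/jdk1.7.0_17/jre/bin/java
Strip the trailing /jre/bin/java (or /bin/java) from the output to get the value for JAVA_HOME; the path shown here is only an example matching this guide's JDK.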
Next, format the NameNode:
# su - hadoop
$ cd /opt/hadoop/hadoop
$ bin/hadoop namenode -format
13/06/02 22:53:48 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = srv1.tecadmin.net/192.168.1.90
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.7.0_17
************************************************************/
13/06/02 22:53:48 INFO util.GSet: Computing capacity for map BlocksMap
13/06/02 22:53:48 INFO util.GSet: VM type = 32-bit
13/06/02 22:53:48 INFO util.GSet: 2.0% max memory = 1013645312
13/06/02 22:53:48 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/06/02 22:53:48 INFO util.GSet: recommended=4194304, actual=4194304
13/06/02 22:53:49 INFO namenode.FSNamesystem: fsOwner=hadoop
13/06/02 22:53:49 INFO namenode.FSNamesystem: supergroup=supergroup
13/06/02 22:53:49 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/06/02 22:53:49 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/06/02 22:53:49 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/06/02 22:53:49 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
13/06/02 22:53:49 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/06/02 22:53:49 INFO common.Storage: Image file of size 112 saved in 0 seconds.
13/06/02 22:53:49 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/opt/hadoop/hadoop/dfs/name/current/edits
13/06/02 22:53:49 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/hadoop/hadoop/dfs/name/current/edits
13/06/02 22:53:49 INFO common.Storage: Storage directory /opt/hadoop/hadoop/dfs/name has been successfully formatted.
13/06/02 22:53:49 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at srv1.tecadmin.net/192.168.1.90
************************************************************/
Step 6: Start Hadoop Services
Use the following command to start all hadoop services.
$ bin/start-all.sh
[sample output]
starting namenode, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-namenode-ns1.tecadmin.net.out
localhost: starting datanode, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-datanode-ns1.tecadmin.net.out
localhost: starting secondarynamenode, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-secondarynamenode-ns1.tecadmin.net.out
starting jobtracker, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-jobtracker-ns1.tecadmin.net.out
localhost: starting tasktracker, logging to /opt/hadoop/hadoop/libexec/../logs/hadoop-hadoop-tasktracker-ns1.tecadmin.net.out
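If you prefer, the HDFS and MapReduce daemons can also be started separately using the helper scripts that ship alongside start-all.sh in Hadoop 1.x:
$ bin/start-dfs.sh      # starts NameNode, DataNode and SecondaryNameNode
$ bin/start-mapred.sh   # starts JobTracker and TaskTracker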
Step 7: Test and Access Hadoop Services
Use the 'jps' command to check whether all services started successfully.
$ jps
or
$ $JAVA_HOME/bin/jps
26049 SecondaryNameNode
25929 DataNode
26399 Jps
26129 JobTracker
26249 TaskTracker
25807 NameNode
Web Access URLs for Services
http://srv1.tecadmin.net:50030/  for the JobTracker
http://srv1.tecadmin.net:50070/  for the NameNode
http://srv1.tecadmin.net:50060/  for the TaskTracker
[Screenshots: Hadoop JobTracker, Hadoop NameNode, and Hadoop TaskTracker web interfaces]
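As an additional sanity check (not part of the original article), you can copy a file into HDFS and list it back; the directory and file names below are only examples:
$ cd /opt/hadoop/hadoop
$ bin/hadoop fs -mkdir /user/hadoop/input
$ bin/hadoop fs -put conf/core-site.xml /user/hadoop/input/
$ bin/hadoop fs -ls /user/hadoop/input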
Step 8: Stop Hadoop Services
If you no longer need Hadoop, stop all Hadoop services using the following command.
# bin/stop-all.sh