Pseudo-distributed installation of hadoop-2.6.0
Pseudo-distributed cluster plan (single node)
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
- host     - ip           - soft                              - process                                                                                                -
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
- single32 - 192.168.1.30 - jdk-6u32-linux-i586, hadoop-2.6.0 - NameNode, SecondaryNameNode, DataNode, ResourceManager, NodeManager, JobHistoryServer                  -
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
1. Change the hostname to single32
vi /etc/sysconfig/network
Change HOSTNAME=localhost to HOSTNAME=single32
shift+zz  (save and quit vi)
2. Add the hostname-to-IP mapping
vi /etc/hosts
Add the line: 192.168.1.30 single32
shift+zz  (save and quit vi)
3. Disable the firewall
service iptables stop
chkconfig iptables off
chkconfig --list | grep iptables
4. Configure passwordless SSH login
ssh-keygen -t rsa  (press Enter three times)
ssh-copy-id -i single32
ssh single32
exit
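What ssh-copy-id does under the hood is append the local public key to ~/.ssh/authorized_keys on the target host; on this single node the "target" is the machine itself. A minimal sketch of the same mechanism, run in a scratch directory rather than the real ~/.ssh:

```shell
# Sketch of the ssh-copy-id mechanism: generate a key pair non-interactively,
# then append the public key to an authorized_keys file. A scratch directory
# stands in for ~/.ssh here.
D=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$D/id_rsa" -q     # -N "" = empty passphrase
cat "$D/id_rsa.pub" >> "$D/authorized_keys"   # what ssh-copy-id appends remotely
chmod 600 "$D/authorized_keys"                # sshd rejects looser permissions
[ -s "$D/authorized_keys" ] && echo "key installed"
```

If `ssh single32` still prompts for a password after this, the usual culprits are permissions on ~/.ssh (700) and authorized_keys (600).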
5. Install the JDK
cd /usr/local/
chmod u+x jdk-6u32-linux-i586.bin  (the downloaded installer is usually not executable)
./jdk-6u32-linux-i586.bin
mv jdk1.6.0_32 jdk  (the installer unpacks to jdk1.6.0_32)
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export PATH=.:$JAVA_HOME/bin:$PATH
shift+zz  (save and quit vi)
source /etc/profile  (apply the new variables to the current shell)
6. Install hadoop-2.6.0
cd /usr/local/
tar -zxvf hadoop-2.6.0.tar.gz
mv hadoop-2.6.0 hadoop
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
shift+zz  (save and quit vi)
source /etc/profile  (apply the new variables to the current shell)
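After sourcing /etc/profile, `hadoop version` and `java -version` should both work from any directory. A quick standalone sanity check of the PATH lines above (the exports are copied from the profile edits):

```shell
# Re-create the exports from /etc/profile and confirm the Hadoop bin
# directory actually landed on PATH.
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=.:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$JAVA_HOME/bin:$PATH
case ":$PATH:" in
  *":$HADOOP_HOME/bin:"*) echo "hadoop bin on PATH" ;;
  *)                      echo "hadoop bin missing" ;;
esac
```

On the real host, finish with `hadoop version` to confirm the binary itself runs.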
7. Edit the configuration files under hadoop/etc/hadoop (hadoop-env.sh, core-site.xml, hdfs-site.xml, yarn-site.xml, mapred-site.xml)
hadoop-env.sh
export JAVA_HOME=/usr/local/jdk
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://single32:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/tmp</value>
</property>
<property>
<name>fs.trash.interval</name>
<value>1440</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
</configuration>
yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
mapred-site.xml  (copy it from mapred-site.xml.template first; 2.6.0 ships only the template)
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
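One gotcha with this last file: a fresh hadoop-2.6.0 unpack contains only mapred-site.xml.template, so the file has to be copied before editing. A sketch of the copy-and-edit step, run here in a scratch directory standing in for /usr/local/hadoop/etc/hadoop:

```shell
# In hadoop-2.6.0 only mapred-site.xml.template is shipped; copy it, then
# insert the mapreduce.framework.name property before the closing tag.
# CONF is a scratch dir here; on the real host it is /usr/local/hadoop/etc/hadoop.
CONF=$(mktemp -d)
printf '<configuration>\n</configuration>\n' > "$CONF/mapred-site.xml.template"
cp "$CONF/mapred-site.xml.template" "$CONF/mapred-site.xml"
sed -i 's|</configuration>|  <property>\n    <name>mapreduce.framework.name</name>\n    <value>yarn</value>\n  </property>\n</configuration>|' "$CONF/mapred-site.xml"
grep -q '<value>yarn</value>' "$CONF/mapred-site.xml" && echo "mapred-site.xml configured"
```

Without this property, MapReduce jobs run in the local runner instead of on YARN.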
8. Format HDFS (this must be done only once)
cd /usr/local/hadoop/bin
hdfs namenode -format
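The "only once" warning matters because each format generates a fresh clusterID; reformatting over existing data leaves the DataNode holding the old ID, and it will refuse to start. A guard sketch, assuming the default NameNode metadata path under hadoop.tmp.dir (here a scratch dir stands in for /usr/local/hadoop/tmp):

```shell
# Guard against an accidental re-format: only format when no NameNode
# metadata exists yet. TMP stands in for hadoop.tmp.dir (/usr/local/hadoop/tmp).
TMP=$(mktemp -d)
NAME_DIR="$TMP/dfs/name/current"   # default NameNode metadata location
if [ -d "$NAME_DIR" ]; then
  echo "already formatted, skipping"
else
  echo "safe to run: hdfs namenode -format"
fi
```

If a re-format is truly needed, wipe hadoop.tmp.dir on the node first so NameNode and DataNode start from the same clusterID.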
9. Start Hadoop
cd /usr/local/hadoop/sbin
Start HDFS
start-dfs.sh
Start YARN
start-yarn.sh
Start the JobHistoryServer
mr-jobhistory-daemon.sh start historyserver
10. Verify the services
1. Command-line check (the processes below indicate a successful start):
[root@single32 sbin]# jps
3691 SecondaryNameNode
3991 NodeManager
3420 NameNode
4270 JobHistoryServer
3831 ResourceManager
3539 DataNode
4307 Jps
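Eyeballing the jps output is error-prone, so the check can be scripted. The sketch below greps a captured listing (the sample above) for each expected daemon; on a live node replace the literal string with `jps_out=$(jps)`:

```shell
# Verify that every expected daemon appears in the jps output.
# jps_out is the sample listing from above; on the host use: jps_out=$(jps)
jps_out="3691 SecondaryNameNode
3991 NodeManager
3420 NameNode
4270 JobHistoryServer
3831 ResourceManager
3539 DataNode"
ok=1
for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager JobHistoryServer; do
  # anchor on the trailing space+name so NameNode does not match SecondaryNameNode
  echo "$jps_out" | grep -q " $d\$" || { echo "missing: $d"; ok=0; }
done
[ "$ok" = 1 ] && echo "all daemons running"
```

A missing daemon usually means a config typo; check the corresponding log under /usr/local/hadoop/logs.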
2. Web check (the page loading indicates a successful start):
HDFS web UI: http://single32:50070/
YARN web UI: http://single32:8088/
JobHistoryServer web UI: http://single32:19888/