Installing and Configuring Kerberos for YARN & HDFS2

Date: 2023-02-17 07:42:42

Today I tried configuring Kerberos on a Hadoop 2.x development cluster and ran into a few problems; here are my notes.

Setting up Hadoop security

core-site.xml

        <property>
            <name>hadoop.security.authentication</name>
            <value>kerberos</value>
        </property>
        <property>
            <name>hadoop.security.authorization</name>
            <value>true</value>
        </property>

hadoop.security.authentication defaults to simple, which simply trusts the OS-level user identity the client reports; here we change it to kerberos.
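As a quick sanity check (a sketch, assuming the client's HADOOP_CONF_DIR points at this core-site.xml), the effective values can be read back with hdfs getconf:

hdfs getconf -confKey hadoop.security.authentication   # should print: kerberos
hdfs getconf -confKey hadoop.security.authorization    # should print: true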

Setting up HDFS security
hdfs-site.xml
        <property>
            <name>dfs.block.access.token.enable</name>
            <value>true</value>
        </property>
        <property>
            <name>dfs.https.enable</name>
            <value>false</value>
        </property>
        <property>
            <name>dfs.namenode.https-address</name>
            <value>dev80.hadoop:50470</value>
        </property>
        <property>
            <name>dfs.https.port</name>
            <value>50470</value>
        </property>
        <property>
            <name>dfs.namenode.keytab.file</name>
            <value>/etc/hadoop.keytab</value>
        </property>
        <property>
            <name>dfs.namenode.kerberos.principal</name>
            <value>hadoop/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>dfs.namenode.kerberos.https.principal</name>
            <value>host/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.http-address</name>
            <value>dev80.hadoop:50090</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.https-port</name>
            <value>50470</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.keytab.file</name>
            <value>/etc/hadoop.keytab</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.kerberos.principal</name>
            <value>hadoop/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>dfs.namenode.secondary.kerberos.https.principal</name>
            <value>host/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir.perm</name>
            <value>700</value>
        </property>
        <property>
            <name>dfs.datanode.address</name>
            <value>0.0.0.0:1003</value>
        </property>
        <property>
            <name>dfs.datanode.http.address</name>
            <value>0.0.0.0:1007</value>
        </property>
        <property>
            <name>dfs.datanode.https.address</name>
            <value>0.0.0.0:1005</value>
        </property>
        <property>
            <name>dfs.datanode.keytab.file</name>
            <value>/etc/hadoop.keytab</value>
        </property>
        <property>
            <name>dfs.datanode.kerberos.principal</name>
            <value>hadoop/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>dfs.datanode.kerberos.https.principal</name>
            <value>host/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>dfs.web.authentication.kerberos.principal</name>
            <value>HTTP/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>dfs.web.authentication.kerberos.keytab</name>
            <value>/etc/hadoop.keytab</value>
            <description>
                The Kerberos keytab file with the credentials for the
                HTTP Kerberos principal used by Hadoop-Auth in the HTTP endpoint.
            </description>
        </property>

dfs.datanode.address is the hostname or IP address and port that the DataNode's data transceiver RPC server binds to. With security enabled, the port number must be below 1024; otherwise the DataNode fails at startup with "Cannot start secure cluster without privileged resources".
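As a rough check once the secure DataNode is up (an illustrative sketch, not from the original notes), confirm that jsvc is listening on the privileged ports configured above:

# run as root; 1003 and 1005 must stay below 1024 on a secure DataNode
netstat -tlnp | grep -E ':(1003|1005|1007) '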

The NameNode and Secondary NameNode are both started as the hadoop user.
The DataNode has to be started as root via jsvc, but the jsvc bundled with Hadoop 2.x is a 32-bit build, so you need to download jsvc (Apache Commons Daemon) and compile a 64-bit one yourself:
1. wget http://mirror.esocc.com/apache//commons/daemon/binaries/commons-daemon-1.0.15-bin.tar.gz
2. cd src/native/unix; ./configure; make
This produces a 64-bit jsvc executable; copy it to $HADOOP_HOME/libexec.
[hadoop@dev80 unix]$ file jsvc
jsvc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped
3. mvn package
This builds commons-daemon-1.0.15.jar; copy it to $HADOOP_HOME/share/hadoop/hdfs/lib and delete the bundled commons-daemon jar.
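Putting the three steps together, a sketch of the full build and install (assumes $HADOOP_HOME is set; note that src/native/unix ships in the commons-daemon source distribution, so grab the -src tarball if the binary one lacks it):

tar xzf commons-daemon-1.0.15-src.tar.gz
cd commons-daemon-1.0.15-src/src/native/unix
./configure && make          # builds jsvc for the host architecture (64-bit here)
file jsvc                    # verify: ELF 64-bit LSB executable, x86-64
cp jsvc $HADOOP_HOME/libexec/
cd ../../..
mvn package                  # builds commons-daemon-1.0.15.jar under target/
# remove the previously bundled commons-daemon jar first (version varies by release)
rm $HADOOP_HOME/share/hadoop/hdfs/lib/commons-daemon-*.jar
cp target/commons-daemon-1.0.15.jar $HADOOP_HOME/share/hadoop/hdfs/lib/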

Changes in hadoop-env.sh:
# The jsvc implementation to use. Jsvc is required to run secure datanodes.
export JSVC_HOME=/usr/local/hadoop/hadoop-2.1.0-beta/libexec
# On secure datanodes, user to run the datanode as after dropping privileges
export HADOOP_SECURE_DN_USER=hadoop
# The directory where pid files are stored. /tmp by default
export HADOOP_SECURE_DN_PID_DIR=/usr/local/hadoop
# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=/data/logs

Distribute the configuration and jars to the whole cluster.
Start the NameNode as the hadoop user, then switch to root and start the DataNode. The NameNode web UI now shows "Security is ON".


Setting up YARN security
yarn-site.xml
        <property>
            <name>yarn.resourcemanager.keytab</name>
            <value>/etc/hadoop.keytab</value>
        </property>
        <property>
            <name>yarn.resourcemanager.principal</name>
            <value>hadoop/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>yarn.nodemanager.keytab</name>
            <value>/etc/hadoop.keytab</value>
        </property>
        <property>
            <name>yarn.nodemanager.principal</name>
            <value>hadoop/_HOST@DIANPING.COM</value>
        </property>
        <property>
            <name>yarn.nodemanager.container-executor.class</name>
            <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
        </property>
        <property>
            <name>yarn.nodemanager.linux-container-executor.group</name>
            <value>hadoop</value>
        </property>
The default container executor is DefaultContainerExecutor, which launches containers as the user running the NodeManager. Switching to LinuxContainerExecutor launches them as the user who submitted the application, using a setuid executable to launch and kill containers.
That executable is bin/container-executor, but the one bundled with Hadoop is still a 32-bit build, so it needs to be recompiled.
Download the Hadoop 2.x source code and build:
mvn package -Pdist,native -DskipTests -Dtar -Dcontainer-executor.conf.dir=/etc
Note: container-executor.conf.dir must be specified explicitly. It is the directory of the configuration file the setuid executable reads, and it is compiled into the binary. By default it resolves to $HADOOP_HOME/etc/hadoop, but that file and every directory above it must be owned by root, or you get the error below; setting it to /etc is simpler.
Caused by: org.apache.hadoop.util.Shell$ExitCodeException: File /usr/local/hadoop/hadoop-2.1.0-beta/etc/hadoop must be owned by root, but is owned by 500
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:458)
    at org.apache.hadoop.util.Shell.run(Shell.java:373)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:578)
    at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.init(LinuxContainerExecutor.java:147)
Default configuration lookup path compiled into the binary:
[root@dev80 bin]# strings container-executor | grep etc
../etc/hadoop/container-executor.cfg
This shows it loads $HADOOP_HOME/etc/hadoop/container-executor.cfg by default.
After recompiling with -Dcontainer-executor.conf.dir=/etc:
[hadoop@dev80 bin]$ strings container-executor | grep etc
/etc/container-executor.cfg

In container-executor.cfg, set:
yarn.nodemanager.linux-container-executor.group=hadoop
min.user.id=499
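For a slightly fuller sketch of container-executor.cfg: banned.users (a standard key in stock Hadoop 2.x; the account list below is illustrative) blocks specific accounts from ever running containers, and min.user.id=499 rejects low-uid system accounts while still admitting the hadoop user (uid 500, as seen in the ticket cache name later):

yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=499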

Copy container-executor to $HADOOP_HOME/bin, then set ownership and permissions (setuid root, executable only by the hadoop group):
chown root:hadoop container-executor /etc/container-executor.cfg
chmod 4750 container-executor
chmod 400 /etc/container-executor.cfg
Sync the configuration files to the whole cluster, then start the ResourceManager and NodeManagers as the hadoop user.

Setting up JobHistory Server security
mapred-site.xml
        <property>
            <name>mapreduce.jobhistory.keytab</name>
            <value>/etc/hadoop.keytab</value>
        </property>
        <property>
            <name>mapreduce.jobhistory.principal</name>
            <value>hadoop/_HOST@DIANPING.COM</value>
        </property>
Start the JobHistoryServer:
sbin/mr-jobhistory-daemon.sh start historyserver
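To confirm the server is up and enforcing SPNEGO, a hedged check (19888 is the stock default web port for mapreduce.jobhistory.webapp.address, and the REST path below is from the Hadoop 2.x history server API; this needs a TGT, obtained with kinit as described next):

curl --negotiate -u : http://dev80.hadoop:19888/ws/v1/history/info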

Run kinit to obtain a TGT (ticket-granting ticket):
[hadoop@dev80 hadoop]$ kinit -r 24h -k -t /home/hadoop/.keytab hadoop
[hadoop@dev80 hadoop]$ klist
Ticket cache: FILE:/tmp/krb5cc_500
Default principal: hadoop@DIANPING.COM
Valid starting       Expires              Service principal
09/11/13 15:25:34    09/12/13 15:25:34    krbtgt/DIANPING.COM@DIANPING.COM
        renew until 09/12/13 15:25:34

/tmp/krb5cc_500 is the ticket cache file; 500 is the uid of the hadoop account, and this per-uid path is what the Kerberos libraries read by default.
You can also point at a specific ticket cache by setting export KRB5CCNAME=/tmp/krb5cc_500.
When finished, kdestroy discards the ticket cache.
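An end-to-end check after obtaining a TGT (a sketch; the examples jar ships with stock Hadoop 2.x, though its exact file name varies by version):

export KRB5CCNAME=/tmp/krb5cc_500       # optional: pin the ticket cache location
kinit -k -t /home/hadoop/.keytab hadoop
hdfs dfs -ls /                          # authenticated RPC to the NameNode
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10
kdestroy                                # discard the ticket cache when done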

If there is no local ticket cache, an error like the following is thrown:
13/09/11 16:21:35 ERROR security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:KERBEROS) cause:java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]

For reference, the principals in the keytab (klist prints one line per encryption type, hence the repeated entries):

[hadoop@dev80 hadoop]$ klist -k -t /etc/hadoop.keytab
Keytab name: WRFILE:/etc/hadoop.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 hadoop/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 host/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
1 06/17/12 22:01:24 HTTP/dev80.hadoop@DIANPING.COM
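For completeness, a sketch of how a keytab like this is typically produced with MIT Kerberos on the KDC (principal names follow the listing above; the encryption types come from your krb5 configuration, which is why each principal appears several times):

kadmin.local -q "addprinc -randkey hadoop/dev80.hadoop@DIANPING.COM"
kadmin.local -q "addprinc -randkey host/dev80.hadoop@DIANPING.COM"
kadmin.local -q "addprinc -randkey HTTP/dev80.hadoop@DIANPING.COM"
kadmin.local -q "ktadd -k /etc/hadoop.keytab hadoop/dev80.hadoop host/dev80.hadoop HTTP/dev80.hadoop"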
