Problem record: Spark fails to read an HDFS file

Time: 2023-03-09 20:01:18

Error message:

scala> val file = sc.textFile("hdfs://kit-b5:9000/input/README.txt")

13/10/29 16:59:45 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)

13/10/29 16:59:45 DEBUG MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)], about=, type=DEFAULT, always=false, sampleName=Ops)

13/10/29 16:59:45 DEBUG MetricsSystemImpl: UgiMetrics, User and group related metrics

13/10/29 16:59:45 DEBUG Groups: Creating new Groups object

13/10/29 16:59:45 DEBUG NativeCodeLoader: Trying to load the custom-built native-hadoop library...

13/10/29 16:59:45 DEBUG NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: no hadoop in java.library.path

13/10/29 16:59:45 DEBUG NativeCodeLoader: java.library.path=

13/10/29 16:59:45 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

13/10/29 16:59:45 DEBUG JniBasedUnixGroupsMappingWithFallback: Falling back to shell based

13/10/29 16:59:45 DEBUG JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping

13/10/29 16:59:45 DEBUG Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000

13/10/29 16:59:45 DEBUG UserGroupInformation: hadoop login

13/10/29 16:59:45 DEBUG UserGroupInformation: hadoop login commit

13/10/29 16:59:45 DEBUG UserGroupInformation: using local user:UnixPrincipal: hadoop

13/10/29 16:59:45 DEBUG UserGroupInformation: UGI loginUser:hadoop (auth:SIMPLE)

13/10/29 16:59:45 INFO MemoryStore: ensureFreeSpace(115052) called with curMem=0, maxMem=339585269

13/10/29 16:59:45 INFO MemoryStore: Block broadcast_0 stored as values to memory (estimated size 112.4 KB, free 323.7 MB)

13/10/29 16:59:45 DEBUG BlockManager: Put block broadcast_0 locally took 102 ms

file: org.apache.spark.rdd.RDD[String] = MappedRDD[1] at textFile at <console>:12

scala> file.count();

…… file.count() then failed immediately with an error; the error message was not recorded.

Solution: the root cause is that two native libraries, libhadoop.so and libsnappy.so, are missing from the JRE's library directory. spark-shell runs on Scala, and Scala runs on the JDK under JAVA_HOME, so the two .so files should be placed under $JAVA_HOME/jre/lib/amd64. Be careful to determine which JAVA_HOME is actually being used, and put the two files in that JDK. libhadoop.so ships with Hadoop and can be found under HADOOP_HOME, for example in hadoop/lib/native/Linux-amd64-64. libsnappy.so has to be built from source: download snappy-1.1.0.tar.gz, then build it with ./configure and make. Snappy is a compression algorithm from Google; its integration into Hadoop is tracked at https://issues.apache.org/jira/browse/HADOOP-7206.
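
Below is a minimal sketch of the fix as a sequence of shell commands, assuming HADOOP_HOME and JAVA_HOME are set and point to the installations spark-shell actually uses; the Linux-amd64-64 directory name and the snappy-1.1.0.tar.gz tarball come from the description above and may differ in your environment.

# Confirm which JDK is actually in use (the two .so files must go into this JDK's jre/lib/amd64)
echo $JAVA_HOME
readlink -f $(which java)

# libhadoop.so: copy it from the Hadoop native library directory
cp $HADOOP_HOME/lib/native/Linux-amd64-64/libhadoop.so $JAVA_HOME/jre/lib/amd64/

# libsnappy.so: build it from the snappy 1.1.0 source tarball, then copy the result
# (with an autotools build the shared library lands under .libs/; make install would put it under /usr/local/lib)
tar -zxvf snappy-1.1.0.tar.gz
cd snappy-1.1.0
./configure
make
cp .libs/libsnappy.so* $JAVA_HOME/jre/lib/amd64/

If everything is in place, the NativeCodeLoader warning should no longer appear the next time spark-shell starts, and file.count() should run without the native library error.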