Problem setting up Hadoop on Mac OS X 10.8

Date: 2022-10-04 00:07:07

I have set up Hadoop on my Mac, but when I try to run an example job I get an error message like this:

13/02/18 04:05:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
13/02/18 04:05:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
13/02/18 04:05:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
13/02/18 04:05:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
13/02/18 04:05:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
13/02/18 04:05:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
13/02/18 04:05:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
13/02/18 04:05:59 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
13/02/18 04:06:00 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).
13/02/18 04:06:01 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s).
java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:546)
    at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:318)
    at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:265)
    at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:68)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:139)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Caused by: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
    at org.apache.hadoop.ipc.Client.call(Client.java:1075)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy1.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:123)
    at org.apache.hadoop.mapred.JobConf.getWorkingDirectory(JobConf.java:542)
    ... 17 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:599)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:434)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:560)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:184)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1206)
    at org.apache.hadoop.ipc.Client.call(Client.java:1050)
    ... 31 more

Then I ran jps to check the services. The result was:

20635 Jps
20466 TaskTracker
20189 DataNode
20291 SecondaryNameNode

I don't know how to deal with this error. Could someone give me an answer? Thanks a lot!

3 Answers

#1 (2 votes)

Actually, the processes that Hadoop should run are not all running, because of an IP misconfiguration. I am not familiar with Mac OS, but on Linux and Windows we need to put Hadoop entries for the connection in the hosts file (/etc/hosts), and I am quite sure the same holds for the Mac. Now, the point is:
You need to put your Hadoop entry in that file, and mapping it to the loopback address, for example
hadoop-machine 127.0.0.1
is wrong here, because Hadoop will then try to connect to 127.0.0.1. Remove the 127.0.0.1 and put the actual IP of your machine in that entry instead. You can find the IP of your Mac easily. Here are some questions that are not directly related to Hadoop, but I think they will be helpful for you:
Question 1, Question 2, Question 3
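
For example, assuming ifconfig reports your Mac's address as 192.168.1.10 (a made-up value; substitute your own), the corrected /etc/hosts entry, in the standard hosts-file order of IP first, would be:

192.168.1.10 hadoop-machine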

#2 (1 vote)

I had this same error: ConnectException: Connection refused in all the secondarynamenode log files.

But I also found this in the namenode's log file:

2015-10-25 16:35:15,720 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /private/tmp/hadoop-admin/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.

I therefore did a

hadoop namenode -format

and the problem went away. So the ConnectException was just a symptom of the fact that the namenode had not started successfully.
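
In my case the storage directory lived under /private/tmp, which the OS cleans out periodically, so the metadata can simply vanish. A more durable fix is to point the namenode at a persistent location in conf/hdfs-site.xml before re-formatting; this is only a sketch, and the path is a placeholder for any directory you own:

<property>
<name>dfs.name.dir</name>
<value>/Users/yourname/hadoop-data/dfs/name</value>
</property>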

#3 (0 votes)

This might help a bit. But first you have to remove the earlier installation using the command below (note that this wipes your entire Homebrew Cellar, not just Hadoop):

rm -rf /usr/local/Cellar /usr/local/.git && brew cleanup

Then you can begin with a fresh installation of Hadoop on your Mac.

Step 1: Install Homebrew

$ ruby -e "$(curl -fsSL https://raw.github.com/mxcl/homebrew/go)"

Step 2: Install Hadoop

$ brew install hadoop

Assume that brew installs Hadoop 1.2.1

Step 3: Configure Hadoop

$ cd /usr/local/Cellar/hadoop/1.2.1/libexec

Add the following line to conf/hadoop-env.sh (this works around the "Unable to load realm info from SCDynamicStore" Kerberos error that Hadoop hits on OS X):

export HADOOP_OPTS="-Djava.security.krb5.realm= -Djava.security.krb5.kdc="

Add the following lines to conf/core-site.xml inside the configuration tags (the port here, 9000, is the namenode address that the failing job in the question was trying to reach):

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

Add the following lines to conf/hdfs-site.xml inside the configuration tags:

<property>
<name>dfs.replication</name>
<value>1</value>
</property>

Add the following lines to conf/mapred-site.xml inside the configuration tags:

<property>
<name>mapred.job.tracker</name>
<value>localhost:9001</value>
</property>

Step 4: Enable SSH to localhost

Go to System Preferences > Sharing. Make sure "Remote Login" is checked.

$ ssh-keygen -t rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
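
To check that passwordless SSH is working, this should log you in without prompting for a password:

$ ssh localhost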

Step 5: Format Hadoop filesystem

$ bin/hadoop namenode -format

Step 6: Start Hadoop

$ bin/start-all.sh

Make sure that all Hadoop processes are running:

$ jps
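
With everything healthy, jps should list all five daemons plus Jps itself, something like this (the PIDs are made up; note NameNode and JobTracker, the two that were missing from the question's output):

12345 NameNode
12346 DataNode
12347 SecondaryNameNode
12348 JobTracker
12349 TaskTracker
12350 Jps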

Run a Hadoop example:
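
For instance, the pi estimator that the question was trying to run (the jar name below assumes the Hadoop 1.2.1 layout; run it from the libexec directory):

$ bin/hadoop jar hadoop-examples-1.2.1.jar pi 10 100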

One more thing! "brew update" will update the Hadoop binaries to the most recent version (1.2.1 at the time of writing).
