In Hive, after creating a table and loading data into it with the load data command, I hit the error "There are 0 datanode(s) running and no node(s) are excluded in this operation". I first ran jps to check whether the DataNode processes had started, and they all appeared to be running normally. I then ran "hdfs dfsadmin -report" to check the cluster report, which showed no live DataNodes, and the web UI at http://master:50070/ likewise listed none.

Thinking back, I had earlier deleted some files on HDFS and re-run "hadoop namenode -format", which most likely left stale version metadata behind, so the DataNodes' stored cluster ID no longer matched the newly formatted NameNode. The fix: stop Hadoop, delete the current folder under the configured data directory (file:///dfs/data) on the master and on every slave node, run the format again, and restart Hadoop. The problem was resolved.
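For reference, a minimal sketch of the diagnosis and fix described above. It assumes the DataNode data directory is the file:///dfs/data path mentioned in the text and that the standard Hadoop sbin scripts (start-dfs.sh / stop-dfs.sh) are on the PATH; adjust paths and hostnames to your own cluster.

```bash
# Check whether the DataNode JVM process is actually running on each node
jps

# Ask the NameNode for its view of the cluster; "Live datanodes (0)" confirms the symptom
hdfs dfsadmin -report

# Stop HDFS before touching any storage directories
stop-dfs.sh

# Remove the stale current/ directory, which holds the old VERSION file
# with the previous clusterID. Do this on the master AND on every slave
# node, under the configured dfs.datanode.data.dir (here /dfs/data).
rm -rf /dfs/data/current

# Re-format the NameNode so a fresh clusterID is generated
hdfs namenode -format

# Restart HDFS and verify that the DataNodes now register with the NameNode
start-dfs.sh
hdfs dfsadmin -report
```

Note that re-formatting the NameNode wipes the HDFS metadata, so this approach is only suitable when the data on the cluster can be discarded or re-loaded.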