On the puzzling difference between installing Hive on a single-node cluster and on a 3/5-node cluster

Date: 2022-12-24 16:23:59

  Over the past few days this issue has been puzzling me and I still cannot explain it. I am posting it here for now and will come back to resolve it later.

The details of the problem are as follows:

  Under the Hive installation directory (in my case /home/hadoop/app/hive-1.2.1), the lib subdirectory (/home/hadoop/app/hive-1.2.1/lib) contains mysql-connector-java-5.1.21.jar.

  My MySQL was installed by the root user under /home/hadoop/app, so it also has to be started from that directory.

  On djt002, MySQL was likewise installed by the root user, under /usr/java, so it has to be started from there: service mysqld start

  In the single-node cluster, the hostname is weekend110.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://weekend110:3306/hive?createDatabaseIfNotExist=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
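
  The connection settings above assume that MySQL already has a database and an account matching ConnectionUserName/ConnectionPassword (hive/hive here). If that account does not exist yet, a minimal sketch of preparing it, run as the MySQL root user on the host named in ConnectionURL (weekend110), could look like the following; the '%' host wildcard is my assumption, not something from the original setup:

[root@weekend110 app]# mysql -u root -p <<'SQL'
-- belt-and-braces: createDatabaseIfNotExist=true in the JDBC URL can also create this database
CREATE DATABASE IF NOT EXISTS hive;
-- create the 'hive' account used in hive-site.xml and give it full rights on the metastore DB
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;
SQL

  With these grants in place, Hive can create its metastore schema on first use in these old versions.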


[root@weekend110 app]# service mysqld start
Starting mysqld: [ OK ]
[root@weekend110 app]# su hadoop
[hadoop@weekend110 app]$ cd hive-0.12.0/
[hadoop@weekend110 hive-0.12.0]$ bin/hive
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
16/11/02 09:00:20 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-0.12.0/lib/hive-common-0.12.0.jar!/hive-log4j.properties
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/app/hadoop-2.4.1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/app/hive-0.12.0/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J:
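
  The "multiple SLF4J bindings" lines above are only a warning: both hadoop-2.4.1 and hive-0.12.0 ship their own slf4j-log4j12 jar, as the log shows. If you want to quiet it, one common approach (a sketch; move the jar aside rather than deleting it) is to keep only Hadoop's binding on the classpath:

[hadoop@weekend110 hive-0.12.0]$ mv lib/slf4j-log4j12-1.6.1.jar lib/slf4j-log4j12-1.6.1.jar.bak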

  As you can see, on the single-node cluster Hive starts up directly like this, with no extra steps.

But on the 3-node cluster, I have tested it: installing Hive on any one of HadoopMaster, HadoopSlave1, or HadoopSlave2, with the Hadoop daemons already running, always fails with the errors below.


[root@HadoopMaster app]# service mysqld status
mysqld (pid 3041) is running...
[root@HadoopMaster app]# su hadoop
[hadoop@HadoopMaster app]$ cd hive-1.2.1/
[hadoop@HadoopMaster hive-1.2.1]$ bin/hive

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 14 more
Caused by: MetaException(message:Got exception: java.io.IOException No FileSystem for scheme: hfds)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:1213)
at org.apache.hadoop.hive.metastore.Warehouse.getFs(Warehouse.java:106)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:140)
at org.apache.hadoop.hive.metastore.Warehouse.getDnsPath(Warehouse.java:146)
at org.apache.hadoop.hive.metastore.Warehouse.getWhRoot(Warehouse.java:159)
at org.apache.hadoop.hive.metastore.Warehouse.getDefaultDatabasePath(Warehouse.java:177)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:600)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5762)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:199)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
... 19 more
[hadoop@HadoopMaster hive-1.2.1]$
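
  One clue in this trace: the root cause is "No FileSystem for scheme: hfds". "hfds" is not a valid URI scheme, so some configuration value on this cluster almost certainly contains a typo for "hdfs". The log alone does not say which file it lives in (hive.metastore.warehouse.dir in hive-site.xml and fs.defaultFS in core-site.xml are the usual suspects), so the corrected snippet below is only a sketch, and the HadoopMaster:9000 address is my assumption:

<property>
  <name>hive.metastore.warehouse.dir</name>
  <!-- note the scheme: "hdfs", not "hfds" -->
  <value>hdfs://HadoopMaster:9000/user/hive/warehouse</value>
</property>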


[hadoop@HadoopSlave1 hive-1.2.1]$ bin/hive

Logging initialized using configuration in jar:file:/home/hadoop/app/hive-1.2.1/lib/hive-common-1.2.1.jar!/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:677)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1523)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:503)
... 8 more
Caused by: java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
... 14 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:187)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:420)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:236)
at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1521)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:86)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:132)
at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:104)
at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3005)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3024)

  I am genuinely puzzled by this!
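
  A possible reading of the two traces, noted here for when I come back to this: on HadoopMaster the metastore falls over on the bad "hfds" scheme, while on HadoopSlave1 the error is "Could not connect to meta store using any of the URIs provided ... Connection refused", which usually means hive-site.xml on that node sets hive.metastore.uris to a remote thrift endpoint where no metastore service is listening yet. A sketch of the order of operations, assuming hive.metastore.uris points at thrift://HadoopMaster:9083 (my assumption):

[hadoop@HadoopMaster hive-1.2.1]$ nohup bin/hive --service metastore > metastore.log 2>&1 &
[hadoop@HadoopSlave1 hive-1.2.1]$ bin/hive

  By contrast, the weekend110 hive-site.xml shown above has no hive.metastore.uris entry, so that install uses an embedded metastore and talks to MySQL directly, which may be the whole "puzzling difference" between the single-node and multi-node setups.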
