Installing Hadoop in Large-Cluster Mode for a Production Environment

Posted: 2022-09-04 22:50:04

I. Lab Overview

1. This lab resolves hostnames with DNS instead of the hosts file;

2. SSH key files are shared over NFS instead of being copied to each host by hand;

3. Hadoop is distributed with a batch copy script instead of being copied machine by machine.

Test environment:

Hostname  IP             Hadoop version  Role                               OS
hadoop1   192.168.1.161  0.20.2          namenode; NFS server               RHEL 5.4 x86
hadoop2   192.168.1.162  0.20.2          datanode; DNS server; NFS client   RHEL 5.4 x86
hadoop3   192.168.1.163  0.20.2          datanode; NFS client               RHEL 5.4 x86

II. Installing and Configuring DNS

  1. Upload the dns directory:

 [root@hadoop2 dns]# ls
dnsmasq.conf dnsmasq.hosts dnsmasq.resolv.conf pid start.sh stop.sh

  2. Edit the files in the dns directory:

---- dnsmasq.conf is the dnsmasq configuration file ----
[root@hadoop2 dns]# cat dnsmasq.conf
cache-size=50000
dns-forward-max=1000
resolv-file=/dns/dnsmasq.resolv.conf
addn-hosts=/dns/dnsmasq.hosts
---- dnsmasq.hosts lists the hostnames dnsmasq serves, so /etc/hosts is not used ----
[root@hadoop2 dns]# cat dnsmasq.hosts
192.168.1.161 hadoop1
192.168.1.162 hadoop2
192.168.1.163 hadoop3
---- add the addresses of the upstream DNS servers to dnsmasq.resolv.conf ----
[root@hadoop2 dns]# cat dnsmasq.resolv.conf
### /etc/resolv.conf file autogenerated by netconfig!
#
# Before you change this file manually, consider to define the
# static DNS configuration using the following variables in the
# /etc/sysconfig/network/config file:
# NETCONFIG_DNS_STATIC_SEARCHLIST
# NETCONFIG_DNS_STATIC_SERVERS
# NETCONFIG_DNS_FORWARDER
# or disable DNS configuration updates via netconfig by setting:
# NETCONFIG_DNS_POLICY=''
#
# See also the netconfig(8) manual page and other documentation.
#
# Note: Manual change of this file disables netconfig too, but
# may get lost when this file contains comments or empty lines
# only, the netconfig settings are same with settings in this
# file and in case of a "netconfig update -f" call.
#
nameserver 218.108.248.228
nameserver 218.108.248.200

[root@hadoop2 dns]# cat start.sh
#!/bin/sh
killall dnsmasq
dnsmasq --port=53 --pid-file=/dns/pid --conf-file=/dns/dnsmasq.conf

[root@hadoop2 dns]# cat stop.sh
#!/bin/sh
killall dnsmasq

  3. Start DNS and test it on hadoop2:
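
The start script kills any running dnsmasq and relaunches it on port 53. Before testing name resolution it is worth confirming the daemon is actually listening; a minimal check (assuming the net-tools package that ships with RHEL 5):

[root@hadoop2 dns]# sh /dns/start.sh
[root@hadoop2 dns]# netstat -tunlp | grep dnsmasq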

[root@hadoop2 dns]# dig @hadoop2 www.qq.com

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5 <<>> @hadoop2 www.qq.com
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41272
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7

;; QUESTION SECTION:
;www.qq.com. IN A

;; ANSWER SECTION:
www.qq.com. 5 IN A 182.254.8.146

;; AUTHORITY SECTION:
www.qq.com. 5 IN NS ns-cmn1.qq.com.
www.qq.com. 5 IN NS ns-cnc1.qq.com.
www.qq.com. 5 IN NS ns-os1.qq.com.

;; ADDITIONAL SECTION:
ns-os1.qq.com. 5 IN A 184.105.66.196
ns-os1.qq.com. 5 IN A 202.55.2.226
ns-os1.qq.com. 5 IN A 202.55.2.230
ns-os1.qq.com. 5 IN A 114.134.85.106
ns-cmn1.qq.com. 5 IN A 120.204.202.200
ns-cnc1.qq.com. 5 IN A 125.39.127.27
ns-cnc1.qq.com. 5 IN A 61.135.167.182

;; Query time: 33 msec
;; SERVER: 192.168.1.162#53(192.168.1.162)
;; WHEN: Sat Aug 24 19:40:25 2013
;; MSG SIZE rcvd: 221

  4. Configure /etc/resolv.conf on hadoop1 and hadoop3

---- On the DNS clients, comment out "search localdomain" in /etc/resolv.conf; otherwise the hostnames served by dnsmasq cannot be resolved ----
[hadoop@hadoop3 ~]$ cat /etc/resolv.conf
; generated by /sbin/dhclient-script
#search localdomain
#nameserver 192.168.11.2
nameserver 192.168.1.162
[root@hadoop3 ~]# dig @hadoop2 www.qq.com

; <<>> DiG 9.3.6-P1-RedHat-9.3.6-4.P1.el5 <<>> @hadoop2 www.qq.com
; (1 server found)
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 2061
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 3, ADDITIONAL: 7

;; QUESTION SECTION:
;www.qq.com. IN A

;; ANSWER SECTION:
www.qq.com. 5 IN A 182.254.8.146

;; AUTHORITY SECTION:
www.qq.com. 5 IN NS ns-cnc1.qq.com.
www.qq.com. 5 IN NS ns-cmn1.qq.com.
www.qq.com. 5 IN NS ns-os1.qq.com.

;; ADDITIONAL SECTION:
ns-os1.qq.com. 5 IN A 202.55.2.226
ns-os1.qq.com. 5 IN A 202.55.2.230
ns-os1.qq.com. 5 IN A 114.134.85.106
ns-os1.qq.com. 5 IN A 184.105.66.196
ns-cmn1.qq.com. 5 IN A 120.204.202.200
ns-cnc1.qq.com. 5 IN A 61.135.167.182
ns-cnc1.qq.com. 5 IN A 125.39.127.27

;; Query time: 24 msec
;; SERVER: 192.168.1.162#53(192.168.1.162)
;; WHEN: Sat Aug 24 19:44:43 2013
;; MSG SIZE rcvd: 221
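
Editing resolv.conf by hand on every client does not scale; the same change can be scripted (a sketch, assuming the GNU sed shipped with RHEL 5):

[root@hadoop3 ~]# sed -i 's/^search localdomain/#search localdomain/' /etc/resolv.conf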

III. Configuring NFS

  1. Check whether NFS is already installed

[root@hadoop1 ~]# rpm -qa |grep nfs
nfs-utils-1.0.9-42.el5
nfs-utils-lib-1.0.8-7.6.el5

  2. Edit /etc/exports

[root@hadoop1 ~]# cat /etc/exports
/home/hadoop/.ssh/ *(rw,sync,no_root_squash)
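
After editing /etc/exports, the export table can be refreshed and verified without restarting the service (both commands are part of nfs-utils):

[root@hadoop1 ~]# exportfs -ra
[root@hadoop1 ~]# showmount -e 192.168.1.161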

  3. Create the hadoop user

[root@hadoop1 ~]# useradd hadoop
[root@hadoop1 ~]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.

  4. Generate the SSH key pair

[hadoop@hadoop1 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
11:a6:28:73:db:0b:c2:47:fc:c9:8d:1c:0c:b4:6e:00 hadoop@hadoop1

  5. Loosen the permissions on the exported directory so the NFS clients can write to it (they are tightened again in step IV.7)

[root@hadoop1 ~]# chmod 777 /home/hadoop/.ssh/

  6. Restart NFS (the FAILED lines on shutdown simply mean the services were not yet running)

[root@hadoop1 ~]# service nfs restart
Shutting down NFS mountd: [FAILED]
Shutting down NFS daemon: [FAILED]
Shutting down NFS quotas: [FAILED]
Shutting down NFS services: [FAILED]
Starting NFS services: [ OK ]
Starting NFS quotas: [ OK ]
Starting NFS daemon: [ OK ]
Starting NFS mountd: [ OK ]

  7. Mount the export locally as a test

[root@hadoop1 ~]# mount 192.168.1.161:/home/hadoop/.ssh /mnt
[root@hadoop1 ~]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
192.168.1.161:/home/hadoop/.ssh on /mnt type nfs (rw,addr=192.168.1.161)
[root@hadoop1 ~]# ll /home/hadoop/.ssh/
total 8
-rw------- 1 hadoop hadoop 1675 Aug 25 10:59 id_rsa
-rw-r--r-- 1 hadoop hadoop 396 Aug 25 10:59 id_rsa.pub
[root@hadoop1 ~]# ll /mnt
total 8
-rw------- 1 hadoop hadoop 1675 Aug 25 10:59 id_rsa
-rw-r--r-- 1 hadoop hadoop 396 Aug 25 10:59 id_rsa.pub

IV. Integrating SSH Keys over NFS

  1. First, copy id_rsa.pub to authorized_keys

[hadoop@hadoop1 ~]$ cp .ssh/id_rsa.pub .ssh/authorized_keys

  2. Log in to hadoop2 and hadoop3, create the hadoop user on each, switch to it, and generate each machine's RSA key pair

---- the procedure is identical on hadoop2 and hadoop3 ----
[root@hadoop2 dns]# useradd hadoop
[root@hadoop2 dns]# passwd hadoop
Changing password for user hadoop.
New UNIX password:
BAD PASSWORD: it is based on a dictionary word
Retype new UNIX password:
passwd: all authentication tokens updated successfully.
[root@hadoop2 dns]# su - hadoop
[hadoop@hadoop2 ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
3c:9d:07:2a:7d:3d:e3:d3:22:0c:0e:8b:5d:96:93:e1 hadoop@hadoop2

  3. Mount the NFS export on hadoop2 and hadoop3

[root@hadoop2 dns]# mount 192.168.1.161:/home/hadoop/.ssh /mnt
[root@hadoop2 dns]# ll /mnt
total 12
-rw-r--r-- 1 hadoop hadoop 396 Aug 25 11:04 authorized_keys
-rw------- 1 hadoop hadoop 1675 Aug 25 10:59 id_rsa
-rw-r--r-- 1 hadoop hadoop 396 Aug 25 10:59 id_rsa.pub
[root@hadoop2 dns]# mount
/dev/sda1 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
192.168.1.161:/home/hadoop/.ssh on /mnt type nfs (rw,addr=192.168.1.161)

  4. Append the id_rsa.pub public keys of hadoop2 and hadoop3 to /mnt/authorized_keys

[root@hadoop2 dns]# cat /home/hadoop/.ssh/id_rsa.pub >> /mnt/authorized_keys
[root@hadoop3 ~]# cat /home/hadoop/.ssh/id_rsa.pub >> /mnt/authorized_keys

  5. Check the contents of authorized_keys

[hadoop@hadoop1 ~]$ cat /mnt/authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA32vNwXv/23k0yF7QeITb61J5uccudHB3gQBtqCnB7wsOtIhdUsVIfcxGmPnWp6S9V+Ob+b73Vrl2xsxP4i0N8Cu1l2ZcU9jevc+o37yX4nW2oTBFVEP31y9E9fXkYf3cKiF0UrvunL59qgNnVUbq8qRtFr5QPAx6lGY0TYZiPaPr+POwNKF1IZvToqABsOnNimv0DNmAhbd3QyM7GaR/ZRQKOCMF8NYljo6exoDk9xPq/wCHC/rBnAU3gUlwi7Kn/tk2dirwvYZuqP3VO+w5zd6sYxscD8+UNK99XdOARzTlc8/iEPHy+JSBa6sQI2hOAOCAuHBtTymoJFUDH9YqXQ== hadoop@hadoop1
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4lTx6JTZlhoLI4Yyo0a6YeDmIgz60pYwYKwVL+p4wfp9OWB2/sEyf9iCsK8i94mnWMfNsRehqAG2ucPmWz1s/Kufxu/6uc8hJjDlOOMUOE7ENyN0Zre5MHj8jauDRhY4y37Rh3Crx86wzq79isDqJOWnKyjPQDjUH45780Hvtk87ckwNNSFhwuRgTFKhz0bQloJuHazU1/W924wmicqeEUSGhUFEkXUeJu7FqQjJcPjoRNqyTEuCHiYVh9HjOrUPdosfYqmQfuZ/x2gmsGRUdfTl32rkoZW43ay8CFV/MKqAFucEOiiHW7xttmm3zJgcyLptGhjo7NtvAQwKkPfG6w== hadoop@hadoop2
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAs7fkzQMR6yVqLBVAnJqTxFPO9NNngrmYDNZMbWXDz6V8J4Z7zC46odUERe3CNjC+v3X8rwvUWlALYtvMNonQwhnpvqe2s0CpDithSFkOt5fQarRYP5JtAjHvF5b22NqcyltF+ywLT4zKAg4tjgGV5nLafI2hsNjgljUOXkRjpwSSUpLmLayWnepLIwioCPPGIkM40balUOEWEASzaI4DaPoywmoVUrByou71i1F1VizXpbhIWW+LE2cANAy1xmP0zYBa+/O4mvpgZjWLtLpKFR/1nRZPh1emy+OB6RcoJl3Awmhcsyyjd4Q8jfOYsH78PKpnwJfyhtUEIENrzUV63w== hadoop@hadoop3

  6. Create symlinks (only on the NFS clients; this step is not needed on the server)

[hadoop@hadoop2 ~]$ ln -s /mnt/authorized_keys /home/hadoop/.ssh/authorized_keys
[hadoop@hadoop2 ~]$ ll /home/hadoop/.ssh/authorized_keys
lrwxrwxrwx 1 hadoop hadoop 20 Aug 25 11:14 /home/hadoop/.ssh/authorized_keys -> /mnt/authorized_keys

[hadoop@hadoop3 ~]$ ln -s /mnt/authorized_keys /home/hadoop/.ssh/authorized_keys
[hadoop@hadoop3 ~]$ ll /home/hadoop/.ssh/authorized_keys
lrwxrwxrwx 1 hadoop hadoop 20 Aug 25 11:15 /home/hadoop/.ssh/authorized_keys -> /mnt/authorized_keys

  7. Fix the permissions

[hadoop@hadoop1 ~]$ chmod 700 /home/hadoop/.ssh/

Note: without this change, sshd's StrictModes check rejects the key because the .ssh directory is still mode 777 from step III.5, and logins fall back to password prompts.

  8. Test that passwordless login works

[hadoop@hadoop1 ~]$ ssh hadoop2
The authenticity of host 'hadoop2 (192.168.1.162)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop2,192.168.1.162' (RSA) to the list of known hosts.
[hadoop@hadoop2 ~]$ ssh hadoop3
The authenticity of host 'hadoop3 (192.168.1.163)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop3,192.168.1.163' (RSA) to the list of known hosts.
[hadoop@hadoop3 ~]$
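
If any host still prompts for a password, running the client in verbose mode usually shows why the key was rejected (a standard OpenSSH flag):

[hadoop@hadoop1 ~]$ ssh -v hadoop2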

V. Batch-Installing Hadoop

  1. First finish installing the namenode on hadoop1 (see the earlier Hadoop cluster installation guide for the distributed setup itself), then generate a batch copy script from the slaves file:

[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp hadoop-0.20.2 hadoop@"$1":/home/hadoop/"}' >  scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp temp hadoop@"$1":/home/hadoop/"}' >> scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp user hadoop@"$1":/home/hadoop/"}' >> scp.sh
[hadoop@hadoop1 ~]$ cat hadoop-0.20.2/conf/slaves | awk '{print "scp -rp jdk1.7 hadoop@"$1":/home/hadoop/"}' >> scp.sh
[hadoop@hadoop1 ~]$ ls
hadoop-0.20.2 jdk1.7 scp.sh temp user
[hadoop@hadoop1 ~]$ cat scp.sh
scp -rp hadoop-0.20.2 hadoop@192.168.1.162:/home/hadoop/
scp -rp hadoop-0.20.2 hadoop@192.168.1.163:/home/hadoop/
scp -rp temp hadoop@192.168.1.162:/home/hadoop/
scp -rp temp hadoop@192.168.1.163:/home/hadoop/
scp -rp user hadoop@192.168.1.162:/home/hadoop/
scp -rp user hadoop@192.168.1.163:/home/hadoop/
scp -rp jdk1.7 hadoop@192.168.1.162:/home/hadoop/
scp -rp jdk1.7 hadoop@192.168.1.163:/home/hadoop/
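
The four awk passes work, but a single loop over the slaves file generates the same script and is easier to extend when directories or nodes are added (an equivalent sketch, assuming the same directory layout):

#!/bin/sh
# Emit one scp line per slave per directory, then run the result.
> scp.sh
for dir in hadoop-0.20.2 temp user jdk1.7; do
    while read host; do
        echo "scp -rp $dir hadoop@$host:/home/hadoop/" >> scp.sh
    done < hadoop-0.20.2/conf/slaves
done
sh scp.sh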

  2. Format the namenode

[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/hadoop namenode -format
13/08/25 11:52:39 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop1/192.168.1.161
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
Re-format filesystem in /home/hadoop/user/name ? (Y or N) Y
13/08/25 11:52:46 INFO namenode.FSNamesystem: fsOwner=hadoop,hadoop
13/08/25 11:52:46 INFO namenode.FSNamesystem: supergroup=supergroup
13/08/25 11:52:46 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/08/25 11:52:47 INFO common.Storage: Image file of size 96 saved in 0 seconds.
13/08/25 11:52:48 INFO common.Storage: Storage directory /home/hadoop/user/name has been successfully formatted.
13/08/25 11:52:48 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop1/192.168.1.161
************************************************************/

  3. Start Hadoop (the first start also opens an SSH connection from hadoop1 to its own IP for the secondarynamenode, hence the one-time authenticity prompt):

[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/start-all.sh
starting namenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-hadoop1.out
192.168.1.163: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop3.out
192.168.1.162: starting datanode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop2.out
The authenticity of host '192.168.1.161 (192.168.1.161)' can't be established.
RSA key fingerprint is ca:9a:7e:19:ee:a1:35:44:7e:9d:d4:09:5c:fc:c5:0a.
Are you sure you want to continue connecting (yes/no)? yes
192.168.1.161: Warning: Permanently added '192.168.1.161' (RSA) to the list of known hosts.
192.168.1.161: starting secondarynamenode, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop1.out
starting jobtracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-hadoop1.out
192.168.1.162: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop2.out
192.168.1.163: starting tasktracker, logging to /home/hadoop/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop3.out

  4. Check the daemons on each node

[hadoop@hadoop1 ~]$ jdk1.7/bin/jps
4416 Jps
4344 JobTracker
4306 SecondaryNameNode
4157 NameNode

[hadoop@hadoop2 ~]$ jdk1.7/bin/jps
3699 TaskTracker
3636 DataNode
3752 Jps

[hadoop@hadoop3 ~]$ jdk1.7/bin/jps
4763 TaskTracker
4834 Jps
4653 DataNode
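
jps only shows that the JVMs are up; the namenode's view of the datanodes can be confirmed as well (dfsadmin is part of Hadoop 0.20):

[hadoop@hadoop1 ~]$ hadoop-0.20.2/bin/hadoop dfsadmin -report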

VI. Important Notes

1. If the NFS share does not remount automatically after a reboot, add the following to /etc/rc.d/rc.local:

/bin/mount -a
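
Note that mount -a only mounts filesystems listed in /etc/fstab, so each NFS client also needs an entry along these lines (options are illustrative):

192.168.1.161:/home/hadoop/.ssh  /mnt  nfs  defaults  0 0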

2. If the hosts obtain their IP addresses automatically (DHCP), add the following to /etc/rc.d/rc.local on the DNS host:

/bin/cat /app/resolv.conf > /etc/resolv.conf
[root@node1 ~]# cat /app/resolv.conf
; generated by /sbin/dhclient-script
#search localdomain
#nameserver 192.168.1.151

Add the following to /etc/rc.d/rc.local on the other hosts:

/bin/cat /app/resolv.conf > /etc/resolv.conf
[root@node2 ~]# cat /app/resolv.conf
; generated by /sbin/dhclient-script
#search localdomain
nameserver 192.168.1.151
