[Hadoop Learning, Part 2] Hadoop Pseudo-Distributed Installation

Posted: 2023-03-09 06:54:25

Environment
  Virtual machine: VMware 10
  Linux version: CentOS-6.5-x86_64
  Client: Xshell 4
  FTP: Xftp 4
  JDK: jdk8
  Hadoop: hadoop-3.1.1

Pseudo-distributed means a single machine: the master and worker roles all run on one host. Here we use node1 (192.168.230.11).

I. Platform and Software
Platform: GNU/Linux
Software: JDK + SSH + rsync + hadoop-3.1.1
Edit /etc/hosts and /etc/sysconfig/network on the host (do not skip this step):
192.168.230.11 node1
Reference: https://www.cnblogs.com/heruiguo/p/7943006.html
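If it helps, a minimal sketch of those two edits (assuming the hostname node1 and the IP 192.168.230.11 used throughout this article):

# /etc/hosts -- map the hostname to the IP
echo "192.168.230.11 node1" >> /etc/hosts

# /etc/sysconfig/network -- make the hostname persistent (CentOS 6 style)
sed -i 's/^HOSTNAME=.*/HOSTNAME=node1/' /etc/sysconfig/network
hostname node1   # apply to the current session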

1. Install JDK
Reference: https://www.cnblogs.com/cac2020/p/9683212.html
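The linked post has the details; roughly, a manual install looks like the sketch below (the archive name is an assumption, match it to your download):

# unpack the JDK and point JAVA_HOME at it
tar -xf jdk-8u65-linux-x64.tar.gz -C /usr/local
cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/local/jdk1.8.0_65
export PATH=$PATH:$JAVA_HOME/bin
EOF
source /etc/profile
java -version   # should report 1.8.0_65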

2. Install SSH
(1) Check whether it is already installed

[root@node1 bin]# type ssh
ssh is hashed (/usr/bin/ssh)

(2) If it is not installed, install it:

yum install -y ssh

3. Install rsync (remote synchronization)
(1) Check whether it is already installed

[root@node1 bin]# type rsync
-bash: type: rsync: not found

(2) If it is not installed, install it:

yum install -y rsync

4. Passwordless SSH login (mainly needed for the fully distributed mode)
(1) Passwordless login to localhost
#1) Generate a public/private key pair

[root@node1 ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
:e5:c1:eb::be:9c::3a:bc:c1::dd:8e:2c:f3 root@node1
The key's randomart image is:
+--[ RSA ]----+
| .o |
| o.. .. . |
| . .. o. . .|
| o. .... o |
| ..S...ooo .|
| .. .=+. |
| .o. oE |
| +. |
| .o |
+-----------------+

#2) Append the generated public key id_rsa.pub to the file authorized_keys under the hidden .ssh directory

[root@node1 ~]# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

#3) Change the permissions of .ssh to 700

[root@node1 ~]# chmod 700 ~/.ssh

#4) Change the permissions of authorized_keys to 0600

[root@node1 ~]# chmod 0600 ~/.ssh/authorized_keys

#5) Log in to localhost (echo $$ prints the current shell's PID, so you can confirm that ssh dropped you into a new session)

[root@node1 ~]# echo $$

[root@node1 ~]# ssh localhost
Last login: Tue Jan :: from 192.168.230.1
[root@node1 ~]# echo $$

(2) Passwordless login from node1 to node2 (192.168.230.12)

#1) Use ssh-copy-id to append node1's public key to node2. Note: run the ssh-copy-id command from the /root/.ssh directory

[root@node1 .ssh]# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.230.12
root@192.168.230.12's password:
Now try logging into the machine, with "ssh '192.168.230.12'", and check in: .ssh/authorized_keys to make sure we haven't added extra keys that you weren't expecting.

# Note: if you see "-bash: ssh-copy-id: command not found"

# install the client tools first: yum -y install openssh-clients

#2) Fix permissions on node2
# Change the permissions of .ssh to 700

[root@node2 ~]# chmod 700 ~/.ssh

#3) Change the permissions of authorized_keys to 0600

[root@node2 ~]# chmod 0600 ~/.ssh/authorized_keys

#4) Passwordless login

[root@node1 ~]# ssh 192.168.230.12
Last login: Tue Jan :: from 192.168.230.11
[root@node2 ~]#

5. Install Hadoop

[root@node1 src]# tar -xf hadoop-3.1.1.tar.gz -C /usr/local
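Optionally, put the Hadoop bin and sbin directories on PATH so the full paths used in the rest of this article can be shortened; a sketch:

cat >> /etc/profile <<'EOF'
export HADOOP_HOME=/usr/local/hadoop-3.1.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
source /etc/profile
hadoop version   # quick sanity check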

II. Configure Hadoop
1. Edit /usr/local/hadoop-3.1.1/etc/hadoop/hadoop-env.sh to set the Java environment variable and the role users
Append the following at the end:

export JAVA_HOME=/usr/local/jdk1.8.0_65
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root

2. Edit /usr/local/hadoop-3.1.1/etc/hadoop/core-site.xml to configure the master node

(1) fs.defaultFS: the NameNode RPC address (this article uses port 9820, one of the new Hadoop 3 port assignments)
(2) hadoop.tmp.dir: the base directory for NameNode metadata and DataNode block data

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://node1:9820</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/data/hadoop</value>
</property>
</configuration>

3. Edit /usr/local/hadoop-3.1.1/etc/hadoop/hdfs-site.xml to configure the worker-side settings
(1) dfs.replication: the number of replicas
(2) dfs.namenode.secondary.http-address: the secondary NameNode address (the Hadoop 3 default port changed to 9868)

<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>node1:9868</value>
</property>

4. Edit workers (called slaves in Hadoop 2.x) to list the worker (DataNode) nodes
Set it to: node1 (for example, as shown below)
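A sketch, using the install path above:

[root@node1 ~]# echo node1 > /usr/local/hadoop-3.1.1/etc/hadoop/workers
[root@node1 ~]# cat /usr/local/hadoop-3.1.1/etc/hadoop/workers
node1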

III. Start Hadoop
1. Format the NameNode
1) Formats the directory specified by hadoop.tmp.dir (creating it first if it does not exist) and stores the metadata fsimage there
2) Generates a globally unique cluster ID, so run this only once when setting up a cluster

[root@node1 /]# /usr/local/hadoop-3.1.1/bin/hdfs namenode -format
-- ::, INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = node1/192.168.230.11
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 3.1.1
STARTUP_MSG: classpath = /usr/local/hadoop-3.1.1/etc/hadoop:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jackson-core-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/snappy-java-1.0.5.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jersey-servlet-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-lang3-3.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-http-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/accessors-smart-1.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/netty-3.10.5.Final.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-io-2.5.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-common-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/curator-client-2.12.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerby-util-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/asm-5.0.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/stax2-api-3.1.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-server-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/json-smart-2.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/hadoop-annotations-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-net-3.6.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-servlet-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/curator-framework-2.12.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-util-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-core-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerby-config-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-io-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/metrics-core-3.2.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/token-provider-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop-3.1.
1/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-security-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-server-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-xml-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jetty-webapp-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/re2j-1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-codec-1.11.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jersey-core-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jackson-annotations-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/hadoop-auth-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-util-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jackson-databind-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/kerb-client-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jersey-json-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/avro-1.7.7.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/zookeeper-3.4.9.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/lib/jersey-server-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/hadoop-kms-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/hadoop-common-3.1.1-tests.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/hadoop-nfs-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/common/hadoop-common-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jackson-core-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-lang3-3.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-http-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/
hdfs/lib/accessors-smart-1.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-io-2.5.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/asm-5.0.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/json-smart-2.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/hadoop-annotations-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jettison-1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-net-3.6.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/gson-2.2.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-servlet-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-util-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-io-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/paranamer-2.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/okio-1.6.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-security-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-server-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jetty-xml-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-3.1.1/share/hadoo
p/hdfs/lib/jetty-webapp-9.3.19.v20170502.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/re2j-1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/xz-1.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jackson-annotations-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/hadoop-auth-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jackson-databind-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/avro-1.7.7.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/zookeeper-3.4.9.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1-tests.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-nfs-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-client-3.1.1-tests.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1-tests.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-native-client-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-3.1.1-tests.jar:/usr/local/hadoop-3.1.1/share/hadoop/hdfs/hadoop-hdfs-rbf-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.1.1-tests.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.1.1.jar:/usr/local/hadoop-3.1.1/s
hare/hadoop/yarn:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/dnsjava-2.1.7.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/jackson-jaxrs-base-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.7.8.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/guice-4.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/objenesis-1.0.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/lib/jersey-client-1.19.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-router-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-common-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-services-core-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-services-api-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-common-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-client-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-api-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-registry-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.1.1.jar:/usr/local/hadoop-3.1.1/share/hadoop/yarn/hadoop-yarn-server-tests-3.1.1.jar
STARTUP_MSG: build = https://github.com/apache/hadoop -r 2b9a8c1d3a2caf1e733d57f346af3ff0d5ba529c; compiled by 'leftnoteasy' on 2018-08-02T04:26Z
STARTUP_MSG: java = 1.8.0_65
************************************************************/
-- ::, INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
-- ::, INFO namenode.NameNode: createNameNode [-format]
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-501237ab---a36e-a48f31345438
-- ::, INFO namenode.FSEditLog: Edit logging is async:true
-- ::, INFO namenode.FSNamesystem: KeyProvider: null
-- ::, INFO namenode.FSNamesystem: fsLock is fair: true
-- ::, INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
-- ::, INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
-- ::, INFO namenode.FSNamesystem: supergroup = supergroup
-- ::, INFO namenode.FSNamesystem: isPermissionEnabled = true
-- ::, INFO namenode.FSNamesystem: HA Enabled: false
-- ::, INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to . Disabling file IO profiling
-- ::, INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=, counted=, effected=
-- ::, INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
-- ::, INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to :::00.000
-- ::, INFO blockmanagement.BlockManager: The block deletion will start around Jan ::
-- ::, INFO util.GSet: Computing capacity for map BlocksMap
-- ::, INFO util.GSet: VM type = -bit
-- ::, INFO util.GSet: 2.0% max memory 239.8 MB = 4.8 MB
-- ::, INFO util.GSet: capacity = ^ = entries
-- ::, INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
-- ::, INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension() assuming MILLISECONDS
-- ::, INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
-- ::, INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes =
-- ::, INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension =
-- ::, INFO blockmanagement.BlockManager: defaultReplication =
-- ::, INFO blockmanagement.BlockManager: maxReplication =
-- ::, INFO blockmanagement.BlockManager: minReplication =
-- ::, INFO blockmanagement.BlockManager: maxReplicationStreams =
-- ::, INFO blockmanagement.BlockManager: redundancyRecheckInterval = 3000ms
-- ::, INFO blockmanagement.BlockManager: encryptDataTransfer = false
-- ::, INFO blockmanagement.BlockManager: maxNumBlocksToLog =
-- ::, INFO util.GSet: Computing capacity for map INodeMap
-- ::, INFO util.GSet: VM type = -bit
-- ::, INFO util.GSet: 1.0% max memory 239.8 MB = 2.4 MB
-- ::, INFO util.GSet: capacity = ^ = entries
-- ::, INFO namenode.FSDirectory: ACLs enabled? false
-- ::, INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
-- ::, INFO namenode.FSDirectory: XAttrs enabled? true
-- ::, INFO namenode.NameNode: Caching file names occurring more than times
-- ::, INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit:
-- ::, INFO snapshot.SnapshotManager: SkipList is disabled
-- ::, INFO util.GSet: Computing capacity for map cachedBlocks
-- ::, INFO util.GSet: VM type = -bit
-- ::, INFO util.GSet: 0.25% max memory 239.8 MB = 613.8 KB
-- ::, INFO util.GSet: capacity = ^ = entries
-- ::, INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets =
-- ::, INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users =
-- ::, INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = ,,
-- ::, INFO namenode.FSNamesystem: Retry cache on namenode is enabled
-- ::, INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is millis
-- ::, INFO util.GSet: Computing capacity for map NameNodeRetryCache
-- ::, INFO util.GSet: VM type = -bit
-- ::, INFO util.GSet: 0.029999999329447746% max memory 239.8 MB = 73.7 KB
-- ::, INFO util.GSet: capacity = ^ = entries
-- ::, INFO namenode.FSImage: Allocated new BlockPoolId: BP--192.168.230.11-
-- ::, INFO common.Storage: Storage directory /data/hadoop/dfs/name has been successfully formatted.
-- ::, INFO namenode.FSImageFormatProtobuf: Saving image file /data/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
-- ::, INFO namenode.FSImageFormatProtobuf: Image file /data/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 386 bytes saved in 0 seconds .
-- ::, INFO namenode.NNStorageRetentionManager: Going to retain images with txid >=
-- ::, INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at node1/192.168.230.11
************************************************************/

2. Start Hadoop

[root@node1 sbin]# /usr/local/hadoop-3.1.1/sbin/start-dfs.sh
Starting namenodes on [node1]
node1: Warning: Permanently added 'node1,192.168.230.11' (RSA) to the list of known hosts.
Starting datanodes
Starting secondary namenodes [node1]
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@node1 sbin]# jps
NameNode
SecondaryNameNode
DataNode
Jps
[root@node1 sbin]# ss -nal
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN *: *:*
LISTEN 192.168.230.11:9868 *:*
LISTEN *:9870 *:*
LISTEN 127.0.0.1: *:*
LISTEN ::: :::*
LISTEN *: *:*
LISTEN ::: :::*
LISTEN 127.0.0.1: *:*
LISTEN 192.168.230.11:9820 *:*
LISTEN *: *:*
LISTEN *: *:*
[root@node1 sbin]#

Hadoop 2 vs. Hadoop 3 ports:

(screenshot: table comparing the default Hadoop 2 and Hadoop 3 ports)

3. Web UI

Open http://node1:9870 in Chrome or Firefox
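Note that the machine running the browser must be able to resolve node1 (add it to that machine's hosts file, or just use http://192.168.230.11:9870). A quick reachability check from the server itself, as a sketch:

[root@node1 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://node1:9870
200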

4. Upload files

Files can be uploaded, downloaded, and viewed from the web UI.

Upload files from the command line:

Related commands:

[root@node1 bin]# /usr/local/hadoop-3.1.1/bin/hdfs dfs
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-v] [-x] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
[-head <file>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-truncate [-w] <length> <path> ...]
[-usage [cmd ...]]
Generic options supported are:
-conf <configuration file> specify an application configuration file
-D <property=value> define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port> specify a ResourceManager
-files <file1,...> specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...> specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...> specify a comma-separated list of archives to be unarchived on the compute machines
The general command line syntax is:
command [genericOptions] [commandOptions]

(1) Create a directory

[root@node1 bin]# /usr/local/hadoop-3.1.1/bin/hdfs dfs -mkdir /wjy
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

(2) Upload a file

[root@node1 bin]# /usr/local/hadoop-3.1.1/bin/hdfs dfs -put /usr/local/src/hadoop-3.1.1.tar.gz /wjy
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 47006ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56321ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56317ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56448ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56401ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56424ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56463ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56503ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56490ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56147ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56150ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56237ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56240ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56242ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56260ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56262ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 55999ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56008ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 55998ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 56001ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 55901ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 55901ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 55730ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 55689ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 47743ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]
-- ::, INFO hdfs.DataStreamer: Slow ReadProcessor read fields for block BP--192.168.230.11-:blk_1073741825_1001 took 47749ms (threshold=30000ms); ack: seqno: reply: SUCCESS downstreamAckTimeNanos: flag: , targets: [DatanodeInfoWithStorage[192.168.230.11:,DS-8bc75a4e-7bac-46ea-8bbd-1e30d1daf6e2,DISK]]

During the upload, the web UI shows the file with a COPYING suffix (screenshot).

After the upload finishes, the file appears under /wjy (screenshot).

The file's blocks can be inspected from the UI (screenshot).

(3) List the uploaded file

[root@node1 bin]# /usr/local/hadoop-3.1.1/bin/hdfs dfs -ls /
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found items
drwxr-xr-x - root supergroup -- : /wjy
[root@node1 bin]# /usr/local/hadoop-3.1.1/bin/hdfs dfs -ls /wjy
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found items
-rw-r--r-- root supergroup -- : /wjy/hadoop-3.1.1.tar.gz
[root@node1 bin]#
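To pull the file back out of HDFS and double-check it, something like the sketch below works (the local target path /tmp is an assumption):

[root@node1 bin]# /usr/local/hadoop-3.1.1/bin/hdfs dfs -get /wjy/hadoop-3.1.1.tar.gz /tmp/
[root@node1 bin]# md5sum /tmp/hadoop-3.1.1.tar.gz /usr/local/src/hadoop-3.1.1.tar.gz
# the two checksums should be identical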

You can also go into the DataNode data directory and look at the block files directly:

[root@node1 subdir0]# cd /data/hadoop/dfs/data/current/BP--192.168.230.11-/current/finalized/subdir0/subdir0
[root@node1 subdir0]# ll
total
-rw-r--r--. root root Jan : blk_1073741825
-rw-r--r--. root root Jan : blk_1073741825_1001.meta
-rw-r--r--. root root Jan : blk_1073741826
-rw-r--r--. root root Jan : blk_1073741826_1002.meta
-rw-r--r--. root root Jan : blk_1073741827
-rw-r--r--. root root Jan : blk_1073741827_1003.meta
[root@node1 subdir0]#
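The archive is larger than the default 128 MB HDFS block size, so it was split into the three blocks listed above. As a sketch, concatenating the block files in order rebuilds the original archive, which shows that DataNode block files are just raw byte ranges of the uploaded file:

[root@node1 subdir0]# cat blk_1073741825 blk_1073741826 blk_1073741827 > /tmp/rebuilt.tar.gz
[root@node1 subdir0]# md5sum /tmp/rebuilt.tar.gz /usr/local/src/hadoop-3.1.1.tar.gz
# the two checksums should match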

5. Stop Hadoop

[root@node1 sbin]# /usr/local/hadoop-3.1.1/sbin/stop-dfs.sh
Stopping namenodes on [node1]
Stopping datanodes
Stopping secondary namenodes [node1]
-- ::, WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@node1 sbin]# jps
Jps
[root@node1 sbin]# ss -nal
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN ::: :::*
LISTEN *: *:*
LISTEN ::: :::*
LISTEN 127.0.0.1: *:*
[root@node1 sbin]#