Spark Installation Guide

Date: 2022-12-27 15:48:32

I. Installing Spark on Windows

1. Install the Java environment: jdk-8u101-windows-x64

Configure the environment variables:
(1) Add a new variable named JAVA_HOME
with the value C:\Program Files\Java\jdk1.8.0_101
(2) Find the system variable Path
and append C:\Program Files\Java\jdk1.8.0_101\bin to its value.
(3) Verify: press Win+R, type cmd, and run the following in the command prompt:
java -version
The following output indicates the installation is configured correctly:
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)

2. Install Scala: scala-2.11.8

Then run the following command in a command prompt: scala
If no error is reported, the installation succeeded and you should see:
Welcome to Scala 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_101).
Type in expressions for evaluation. Or try :help.
scala>
Note: the Scala version must match the Spark version; choose your Scala version according to the Spark version you plan to install.

3. Install Spark

Extract the downloaded spark-1.6.2-bin-cdh4 archive into the current directory,
then move it to D:\ (or any directory you prefer).
Open a command prompt:
D:
cd spark-1.6.2-bin-cdh4
(1) Start the Master; in the command prompt run:
bin\spark-class org.apache.spark.deploy.master.Master
Find the web UI address in the output, then open http://10.0.1.119:8080/ in a browser to check the result.
(2) Start a Worker
bin\spark-class org.apache.spark.deploy.worker.Worker spark://10.0.1.119:7077 -c 1 -m 512M
(3) Start another Worker
bin\spark-class org.apache.spark.deploy.worker.Worker spark://10.0.1.119:7077 -c 1 -m 1G
(4) Start spark-shell connected to the standalone cluster
bin\spark-shell --master spark://10.0.1.119:7077
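Once the shell is connected, it is worth running a tiny job to confirm that the Workers actually execute tasks. Below is a minimal sketch in Python; it assumes you start the Python shell instead with bin\pyspark --master spark://10.0.1.119:7077, where the SparkContext is already available as sc:
# Run inside the interactive shell (sc is pre-defined by the shell).
# Distribute the numbers 1..100, square them on the Workers, and sum the results.
nums = sc.parallelize(range(1, 101), 4)
total = nums.map(lambda x: x * x).sum()
print(total)  # expected output: 338350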

II. Installing Spark on Linux

1. Install Ubuntu Linux

(1) Installation packages:
VMware-workstation-full-11.1.1-2771112.exe
ubuntu-14.04.1-server-amd64.iso
jdk-8u91-linux-x64.tar.gz
spark-1.6.2-bin-hadoop2.6.tgz
Xmanager4_setup.1410342608.exe
(2) Create three virtual machines with the hostnames spark01, spark02, and spark03
and the IP addresses 192.168.6.128-130, respectively.
(3) Install Xshell on the host machine.

2. Install Java on Linux

(1) Copy the files
spark@spark01:~$ mkdir app
spark@spark01:~$ cd app/
Then copy the files into this directory.
(2) Extract
spark@spark01:~/app$ ll
spark@spark01:~/app$ tar -zxvf jdk-8u91-linux-x64.tar.gz
spark@spark01:~/app$ ll
(3) Edit the environment variables
spark@spark01:~/app$ sudo vim /etc/profile
[sudo] password for spark:
Append two lines at the end:
JAVA_HOME=/home/spark/app/jdk1.8.0_91
export PATH=$PATH:$JAVA_HOME/bin
(4) Apply the environment variable changes
spark@spark01:~/app$ source /etc/profile
(5) Check the installed Java version
spark@spark01:~/app$ java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)
spark@spark01:~/app$

3. Install Spark on Linux

(1) Extract Spark
spark@spark01:~/app$ tar -zxvf spark-1.6.2-bin-hadoop2.6.tgz
(2) Edit the configuration file
spark@spark01:~/app$ cd spark-1.6.2-bin-hadoop2.6/
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6$ ll
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6$ cd conf/
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ll
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ cp spark-env.sh.template spark-env.sh
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ll
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ vim spark-env.sh
Append the following at the end of the configuration file:
#export SPARK_LOCAL_IP=localhost
export JAVA_HOME=/home/spark/app/jdk1.8.0_91
export SPARK_MASTER_IP=spark01
#export SPARK_MASTER_IP=localhost
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
#export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=FILESYSTEM -Dspark.deploy.recoveryDirectory=/nfs/spark/recovery"
#export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
#export YARN_CONF_DIR=$HADOOP_INSTALL/etc/hadoop
export SPARK_HOME=/home/spark/app/spark-1.6.2-bin-hadoop2.6
export SPARK_JAR=/home/spark/app/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar
export PATH=$SPARK_HOME/bin:$PATH
(3) Edit the hosts file
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6$ sudo vim /etc/hosts
Comment out the line:
127.0.1.1 spark01
Add the lines:
192.168.6.128 spark01
192.168.6.129 spark02
192.168.6.130 spark03
After saving the file, test that it is configured correctly:
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6$ ping spark02
(4) Edit another configuration file (do this on all three machines)
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6$ cd conf/
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ll
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ cp slaves.template slaves
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ vim slaves
Append at the end of the file:
spark02
spark03
(5) Configure passwordless SSH login (only needed on spark01)
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ chmod 0600 ~/.ssh/authorized_keys
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ssh-copy-id
Usage: /usr/bin/ssh-copy-id [-h|-?|-n] [-i [identity_file]] [-p port] [[-o <ssh -o options>] ...] [user@]hostname
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ssh-copy-id spark02 (follow the prompts)
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ssh-copy-id spark03 (follow the prompts)
Test the passwordless login:
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ssh spark02
spark@spark02:~$ exit
(6) Start the Spark services (only on spark01)
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ ../sbin/start-all.sh
(7) Submit a job
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6/conf$ cd ../
spark@spark01:~/app/spark-1.6.2-bin-hadoop2.6$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://spark01:7077 --executor-memory 1G --total-executor-cores 1 ./lib/spark-examples-1.6.2-hadoop2.6.0.jar 100
(8) Check the result in a browser (the master web UI, http://spark01:8080/).
(9) Edit the Bash configuration to add Spark to the PATH and set the SPARK_HOME environment variable. On Ubuntu, simply edit ~/.bash_profile or ~/.profile and add the following lines:
export SPARK_HOME=/home/spark/app/spark-1.6.2-bin-hadoop2.6
export PATH=$SPARK_HOME/bin:$PATH
export PYTHONPATH=$SPARK_HOME/python/:$SPARK_HOME/python/lib/py4j-0.9-src.zip:$PYTHONPATH
Then, after sourcing the file or restarting the terminal, you can use pyspark to launch Spark's interactive Python shell:
spark@spark01:~$ source .profile
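With pyspark on the PATH, a quick way to verify the cluster is to reproduce the Pi estimate from step (7) interactively. A minimal sketch, assuming the shell is started as pyspark --master spark://spark01:7077 (sc is created by the shell):
# Monte Carlo estimate of Pi, mirroring the SparkPi example submitted in step (7).
import random
n = 100000
def inside(_):
    x, y = random.random(), random.random()
    return 1 if x * x + y * y < 1 else 0
count = sc.parallelize(range(n), 4).map(inside).reduce(lambda a, b: a + b)
print("Pi is roughly %f" % (4.0 * count / n))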

4. Install Python development libraries on Linux

(1) Update the package sources
sudo gedit /etc/apt/sources.list
Back up the old sources.list and replace it with the new sources.list file.
(2) Refresh the package index
sudo apt-get update
(3) Upgrade the installed software (optional)
sudo apt-get upgrade (or, once pip is installed in step (4), upgrade only pip: pip install --upgrade pip)
(4) Install the pip tool
sudo apt-get install python-pip
(5) Switch pip to a mirror index; downloads will be noticeably faster.
First create the file:
sudo vim /etc/pip.conf
Write the following into the file:
[global]
index-url = http://pypi.douban.com/simple/
trusted-host = pypi.douban.com
(6) Install the required packages
sudo pip install matplotlib
sudo pip install scipy
sudo pip install scikit-learn
sudo pip install ipython
sudo apt-get install python-tk
(7) Tip: when running sudo pip install numpy, you may encounter:
The directory '/Users/huangqizhi/Library/Caches/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
As the message clearly says, the pip cache directory is not owned by the root user that sudo runs as. If you must use sudo pip, just change the owner of that directory (or run sudo with the -H flag):
sudo chown root /Users/huangqizhi/Library/Caches/pip
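After the installation, a quick way to confirm the libraries are usable is to import them and print their versions. A small check script (the file name check_env.py is just an illustration):
# check_env.py -- verify that the Python libraries installed above can be imported.
import numpy, scipy, matplotlib, sklearn, IPython
for mod in (numpy, scipy, matplotlib, sklearn, IPython):
    print("%s %s" % (mod.__name__, mod.__version__))
Run it with: python check_env.py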

5. Common commands

(1) Start the Spark cluster; run from the Spark home directory:
./sbin/start-all.sh
(2) Stop the Spark cluster; run from the Spark home directory:
./sbin/stop-all.sh
(3) Submit jobs (a minimal sketch of pythonapp.py is shown after this list):
spark-submit pythonapp.py
spark-submit --master yarn-cluster get_hdfs.py
spark-submit --master spark://hadoop01:7077 spark_sql_wp.py
(4) Launch Spark's interactive Python shell
pyspark
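The pythonapp.py mentioned in (3) is simply whatever application script you want to submit. As a minimal sketch (the file name, the README path, and the word-count workload are purely illustrative):
# pythonapp.py -- minimal standalone PySpark application (illustrative sketch).
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("pythonapp")
sc = SparkContext(conf=conf)

# Count the words in Spark's own README as a trivial workload.
lines = sc.textFile("/home/spark/app/spark-1.6.2-bin-hadoop2.6/README.md")
counts = (lines.flatMap(lambda l: l.split())
               .map(lambda w: (w, 1))
               .reduceByKey(lambda a, b: a + b))
print(counts.take(10))
sc.stop()
Submit it with spark-submit pythonapp.py, or add --master spark://spark01:7077 to run it on the cluster.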