How Apache Spark Makes Your Slow MySQL Queries 10x Faster (or More)

Date: 2021-08-21 05:41:23
Author: Alexander Rubin

Link: http://coyee.com/article/11012-how-apache-spark-makes-your-slow-mysql-queries-10x-faster-or-more


Introduction

In my previous post, Apache Spark with MySQL, I showed how to use Apache Spark for data analysis and how to transform and analyze large amounts of data stored in text files. Vadim also ran a benchmark comparing the performance of MySQL against Spark with the Parquet columnar format (using the airline on-time performance data). That test was great, but what if we do not want to move the data out of MySQL into another store, and instead keep running queries against the existing MySQL server? Apache Spark can help here too!

Getting started

Using Apache Spark on top of an existing MySQL server (without exporting the data into Spark or Hadoop) can make queries at least 10x faster. Using multiple MySQL servers (replication or Percona XtraDB Cluster) gives an additional performance boost for some queries. You can also use Spark's caching feature to cache the whole result set of a MySQL query.

The idea is simple: Spark can read MySQL data via JDBC and can also execute SQL queries, so we can connect it directly to MySQL and run the queries there. Why is this faster? For long-running queries (reporting or BI, for example), it can be much faster because Spark is a massively parallel system. MySQL can only use one CPU core per query, whereas Spark can use all cores on all cluster nodes. In the examples below, the MySQL queries executed through Spark run 5 to 10 times faster than the same queries executed directly against MySQL.


In addition, Spark can add "cluster-level" parallelism. With MySQL replication or Percona XtraDB Cluster, Spark can split a query into a set of smaller queries (similar to running one query per partition of a partitioned table) and then run them in parallel across multiple replicas or Percona XtraDB Cluster nodes. Finally, it aggregates the partial results from each node into the full result set, map/reduce style.

This post uses the same "Airlines On-Time Performance" database as my previous articles. Vadim created scripts to download the data and load it into MySQL; they are available here. We are also using Apache Spark 2.0, released on July 26, 2016.

Installing Apache Spark

Starting Apache Spark in standalone mode is easy; all it takes is:

  1. Download Apache Spark 2.0 and extract it into a directory
  2. Start the master
  3. Start the slave (worker) and attach it to the master
  4. Start an application (spark-shell or spark-sql)

For example:

root@thor:~/spark# ./sbin/start-master.sh
less ../logs/spark-root-org.apache.spark.deploy.master.Master-1-thor.out
15/08/25 11:21:21 INFO Master: Starting Spark master at spark://thor:7077
15/08/25 11:21:21 INFO Utils: Successfully started service 'MasterUI' on port 8080.
15/08/25 11:21:21 INFO MasterWebUI: Started MasterWebUI at http://10.60.23.188:8080
root@thor:~/spark# ./sbin/start-slave.sh spark://thor:7077

To connect to Spark we can use spark-shell (Scala), pyspark (Python) or spark-sql. Since spark-sql is similar to the MySQL command line, it is the easiest option (you can even run SHOW TABLES). I also needed Scala in interactive mode, so I chose spark-shell. In all the examples below I use the same SQL query in both MySQL and Spark, so there is really not much difference between them.

To let Spark talk to the MySQL server we need the Connector/J for MySQL driver. Download the archive, extract it, copy mysql-connector-java-5.1.39-bin.jar into the Spark directory, and then add the class path to conf/spark-defaults.conf:

spark.driver.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar
spark.executor.extraClassPath = /usr/local/spark/mysql-connector-java-5.1.39-bin.jar

Running MySQL queries via Apache Spark

For this test I used a physical server with 12 CPU cores (an older Intel(R) Xeon(R) CPU L5639 @ 2.13GHz), 48GB of RAM, and SSD disks. I installed MySQL on it and started the Spark master and slave on the same box.

Now we are ready to run MySQL queries inside Spark. First, start the shell from the Spark directory (/usr/local/spark in my case):

$ ./bin/spark-shell --driver-memory 4G --master spark://server1:7077

Then we will connect to the MySQL server and register a temporary view:

val jdbcDF = spark.read.format("jdbc").options(
Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=",
"dbtable" -> "ontime.ontime_part",
"fetchSize" -> "10000",
"partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2016", "numPartitions" -> "28"
)).load()
jdbcDF.createOrReplaceTempView("ontime")

With this we have created a "data source" for Spark (in other words, a connection from Spark to MySQL). The Spark table name "ontime" maps to the ontime.ontime_part table in MySQL. Now we can run SQL queries in Spark, which in turn get parsed and translated into MySQL queries.

"partitionColumn" is very important here: it tells Spark to run multiple queries in parallel, one query per partition.
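To illustrate, the per-partition WHERE predicates that Spark derives from "partitionColumn"/"lowerBound"/"upperBound"/"numPartitions" can be sketched in plain Python. This is a simplified sketch of Spark's JDBCRelation stride logic (the function name is mine, and integer bounds are assumed):

```python
def jdbc_partition_predicates(col, lower, upper, num_partitions):
    """Simplified sketch: one WHERE predicate per JDBC partition,
    derived from lowerBound/upperBound/numPartitions."""
    # Spark caps the partition count at the width of the bound range
    n = min(num_partitions, upper - lower)
    stride = (upper - lower) // n
    preds = []
    for i in range(n):
        lo = lower + i * stride
        hi = lo + stride
        if i == 0:
            # the first slice also picks up NULLs
            preds.append(f"{col} < {hi} or {col} is null")
        elif i == n - 1:
            # the last slice is open-ended
            preds.append(f"{col} >= {lo}")
        else:
            preds.append(f"{col} >= {lo} AND {col} < {hi}")
    return preds
```

With lowerBound=1988, upperBound=2015 and numPartitions=48, this yields 27 predicates such as "yearD >= 2001 AND yearD < 2002" — one MySQL query per slice, which is exactly the shape of the queries that show up in the MySQL processlist.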

Now we can run the query:

val sqlDF = sql("select min(year), max(year) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and (origin = 'RDU' or dest = 'RDU') GROUP by carrier HAVING cnt > 100000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10")
sqlDF.show()

MySQL query example

Let's go back to MySQL for a moment and look at the query example. I chose the following query (from my previous article):

select min(year), max(year) as max_year, Carrier, count(*) as cnt,
sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed,
round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate
FROM ontime
WHERE
DayOfWeek not in (6,7)
and OriginState not in ('AK', 'HI', 'PR', 'VI')
and DestState not in ('AK', 'HI', 'PR', 'VI')
GROUP by carrier HAVING cnt > 100000 and max_year > '1990'
ORDER by rate DESC, cnt desc
LIMIT 10

The query finds the number of delayed flights for each airline. In addition, the query computes the on-time rate in a reasonably smart way, taking the number of flights into account (we do not want to compare smaller airlines with the large ones, and we also want to exclude older airlines that have since shut down).

The main reason I chose this query is that it is very hard to optimize further in MySQL: all of those WHERE conditions together still match about 70% of the rows. I did a basic calculation:

mysql> select count(*) FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI');
+-----------+
| count(*) |
+-----------+
| 108776741 |
+-----------+
mysql> select count(*) FROM ontime;
+-----------+
| count(*) |
+-----------+
| 152657276 |
+-----------+
mysql> select round((108776741/152657276)*100, 2);
+-------------------------------------+
| round((108776741/152657276)*100, 2) |
+-------------------------------------+
| 71.26 |
+-------------------------------------+

The table structure is as follows:

CREATE TABLE `ontime_part` (
`YearD` int(11) NOT NULL,
`Quarter` tinyint(4) DEFAULT NULL,
`MonthD` tinyint(4) DEFAULT NULL,
`DayofMonth` tinyint(4) DEFAULT NULL,
`DayOfWeek` tinyint(4) DEFAULT NULL,
`FlightDate` date DEFAULT NULL,
`UniqueCarrier` char(7) DEFAULT NULL,
`AirlineID` int(11) DEFAULT NULL,
`Carrier` char(2) DEFAULT NULL,
`TailNum` varchar(50) DEFAULT NULL,
...
`id` int(11) NOT NULL AUTO_INCREMENT,
PRIMARY KEY (`id`,`YearD`),
KEY `covered` (`DayOfWeek`,`OriginState`,`DestState`,`Carrier`,`YearD`,`ArrDelayMinutes`)
) ENGINE=InnoDB AUTO_INCREMENT=162668935 DEFAULT CHARSET=latin1
/*!50100 PARTITION BY RANGE (YearD)
(PARTITION p1987 VALUES LESS THAN (1988) ENGINE = InnoDB,
PARTITION p1988 VALUES LESS THAN (1989) ENGINE = InnoDB,
PARTITION p1989 VALUES LESS THAN (1990) ENGINE = InnoDB,
PARTITION p1990 VALUES LESS THAN (1991) ENGINE = InnoDB,
PARTITION p1991 VALUES LESS THAN (1992) ENGINE = InnoDB,
PARTITION p1992 VALUES LESS THAN (1993) ENGINE = InnoDB,
PARTITION p1993 VALUES LESS THAN (1994) ENGINE = InnoDB,
PARTITION p1994 VALUES LESS THAN (1995) ENGINE = InnoDB,
PARTITION p1995 VALUES LESS THAN (1996) ENGINE = InnoDB,
PARTITION p1996 VALUES LESS THAN (1997) ENGINE = InnoDB,
PARTITION p1997 VALUES LESS THAN (1998) ENGINE = InnoDB,
PARTITION p1998 VALUES LESS THAN (1999) ENGINE = InnoDB,
PARTITION p1999 VALUES LESS THAN (2000) ENGINE = InnoDB,
PARTITION p2000 VALUES LESS THAN (2001) ENGINE = InnoDB,
PARTITION p2001 VALUES LESS THAN (2002) ENGINE = InnoDB,
PARTITION p2002 VALUES LESS THAN (2003) ENGINE = InnoDB,
PARTITION p2003 VALUES LESS THAN (2004) ENGINE = InnoDB,
PARTITION p2004 VALUES LESS THAN (2005) ENGINE = InnoDB,
PARTITION p2005 VALUES LESS THAN (2006) ENGINE = InnoDB,
PARTITION p2006 VALUES LESS THAN (2007) ENGINE = InnoDB,
PARTITION p2007 VALUES LESS THAN (2008) ENGINE = InnoDB,
PARTITION p2008 VALUES LESS THAN (2009) ENGINE = InnoDB,
PARTITION p2009 VALUES LESS THAN (2010) ENGINE = InnoDB,
PARTITION p2010 VALUES LESS THAN (2011) ENGINE = InnoDB,
PARTITION p2011 VALUES LESS THAN (2012) ENGINE = InnoDB,
PARTITION p2012 VALUES LESS THAN (2013) ENGINE = InnoDB,
PARTITION p2013 VALUES LESS THAN (2014) ENGINE = InnoDB,
PARTITION p2014 VALUES LESS THAN (2015) ENGINE = InnoDB,
PARTITION p2015 VALUES LESS THAN (2016) ENGINE = InnoDB,
PARTITION p_new VALUES LESS THAN MAXVALUE ENGINE = InnoDB) */

Even with a "covered" index, MySQL will have to scan ~70M-100M rows and create a temporary table:

mysql>  explain select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: ontime_part
type: range
possible_keys: covered
key: covered
key_len: 2
ref: NULL
rows: 70483364
Extra: Using where; Using index; Using temporary; Using filesort
1 row in set (0.00 sec)

Here is the response time of the MySQL query:

mysql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT  10;
+------------+----------+---------+----------+-----------------+------+
| min(yearD) | max_year | Carrier | cnt      | flights_delayed | rate |
+------------+----------+---------+----------+-----------------+------+
| 2003       | 2013     | EV      |  2962008 |          464264 | 0.16 |
| 2003       | 2013     | B6      |  1237400 |          187863 | 0.15 |
| 2006       | 2011     | XE      |  1615266 |          230977 | 0.14 |
| 2003       | 2005     | DH      |   501056 |           69833 | 0.14 |
| 2001       | 2013     | MQ      |  4518106 |          605698 | 0.13 |
| 2003       | 2013     | FL      |  1692887 |          212069 | 0.13 |
| 2004       | 2010     | OH      |  1307404 |          175258 | 0.13 |
| 2006       | 2013     | YV      |  1121025 |          143597 | 0.13 |
| 2003       | 2006     | RU      |  1007248 |          126733 | 0.13 |
| 1988       | 2013     | UA      | 10717383 |         1327196 | 0.12 |
+------------+----------+---------+----------+-----------------+------+
10 rows in set (19 min 16.58 sec)

A full 19 minutes: that is not a response time anyone will be happy with.


SQL in Spark

Now we want to run the same query in Spark, letting Spark read the data from MySQL. We create a "data source" and then run the query:

scala> val jdbcDF = spark.read.format("jdbc").options(
     |   Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=mysql",
     |   "dbtable" -> "ontime.ontime_sm",
     |   "fetchSize" -> "10000",
     |   "partitionColumn" -> "yeard", "lowerBound" -> "1988", "upperBound" -> "2015", "numPartitions" -> "48"
     |   )).load()
16/08/02 23:24:12 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 27; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2015.
jdbcDF: org.apache.spark.sql.DataFrame = [id: int, YearD: date ... 19 more fields]
scala> jdbcDF.createOrReplaceTempView("ontime")
scala> val sqlDF = sql("select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10")
sqlDF: org.apache.spark.sql.DataFrame = [min(yearD): date, max_year: date ... 4 more fields]
scala> sqlDF.show()
+----------+--------+-------+--------+---------------+----+
|min(yearD)|max_year|Carrier|     cnt|flights_delayed|rate|
+----------+--------+-------+--------+---------------+----+
|      2003|    2013|     EV| 2962008|         464264|0.16|
|      2003|    2013|     B6| 1237400|         187863|0.15|
|      2006|    2011|     XE| 1615266|         230977|0.14|
|      2003|    2005|     DH|  501056|          69833|0.14|
|      2001|    2013|     MQ| 4518106|         605698|0.13|
|      2003|    2013|     FL| 1692887|         212069|0.13|
|      2004|    2010|     OH| 1307404|         175258|0.13|
|      2006|    2013|     YV| 1121025|         143597|0.13|
|      2003|    2006|     RU| 1007248|         126733|0.13|
|      1988|    2013|     UA|10717383|        1327196|0.12|
+----------+--------+-------+--------+---------------+----+

spark-shell does not show the query execution time; it can be retrieved from the Web UI or from spark-sql. I re-ran the same query in spark-sql:

./bin/spark-sql --driver-memory 4G --master spark://thor:7077
spark-sql> CREATE TEMPORARY VIEW ontime
> USING org.apache.spark.sql.jdbc
> OPTIONS (
> url "jdbc:mysql://localhost:3306/ontime?user=root&password=",
> dbtable "ontime.ontime_part",
> fetchSize "1000",
> partitionColumn "yearD", lowerBound "1988", upperBound "2014", numPartitions "48"
> );
16/08/04 01:44:27 WARN JDBCRelation: The number of partitions is reduced because the specified number of partitions is less than the difference between upper bound and lower bound. Updated number of partitions: 26; Input number of partitions: 48; Lower bound: 1988; Upper bound: 2014.
Time taken: 3.864 seconds
spark-sql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;
16/08/04 01:45:13 WARN Utils: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.debug.maxToStringFields' in SparkEnv.conf.
2003 2013 EV 2962008 464264 0.16
2003 2013 B6 1237400 187863 0.15
2006 2011 XE 1615266 230977 0.14
2003 2005 DH 501056 69833 0.14
2001 2013 MQ 4518106 605698 0.13
2003 2013 FL 1692887 212069 0.13
2004 2010 OH 1307404 175258 0.13
2006 2013 YV 1121025 143597 0.13
2003 2006 RU 1007248 126733 0.13
1988 2013 UA 10717383 1327196 0.12
Time taken: 139.628 seconds, Fetched 10 row(s)

As you can see, the query runs a full 10x faster (on the same machine, a single server). But how do these queries get translated into MySQL queries, and why are they so much faster? Let's look inside MySQL to find out:


A look inside MySQL

Spark:

scala> sqlDF.show()
[Stage 4:> (0 + 26) / 26]

 

MySQL:

mysql> select id, info from information_schema.processlist where info is not NULL and info not like '%information_schema%';
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| id | info |
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| 10948 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2001 AND yearD < 2002) |
| 10965 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2007 AND yearD < 2008) |
| 10966 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1991 AND yearD < 1992) |
| 10967 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1994 AND yearD < 1995) |
| 10968 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1998 AND yearD < 1999) |
| 10969 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2010 AND yearD < 2011) |
| 10970 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2002 AND yearD < 2003) |
| 10971 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2006 AND yearD < 2007) |
| 10972 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1990 AND yearD < 1991) |
| 10953 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2009 AND yearD < 2010) |
| 10947 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1993 AND yearD < 1994) |
| 10956 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD < 1989 or yearD is null) |
| 10951 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2005 AND yearD < 2006) |
| 10954 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1996 AND yearD < 1997) |
| 10955 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2008 AND yearD < 2009) |
| 10961 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1999 AND yearD < 2000) |
| 10962 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2011 AND yearD < 2012) |
| 10963 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2003 AND yearD < 2004) |
| 10964 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1995 AND yearD < 1996) |
| 10957 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2004 AND yearD < 2005) |
| 10949 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1989 AND yearD < 1990) |
| 10950 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1997 AND yearD < 1998) |
| 10952 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2013) |
| 10958 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 1992 AND yearD < 1993) |
| 10960 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2000 AND yearD < 2001) |
| 10959 | SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2012 AND yearD < 2013) |
+-------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
26 rows in set (0.00 sec)

Spark executed 26 queries in parallel, which is great. Since the table itself is partitioned, each query touches only one partition, but it scans that whole partition:

mysql> explain partitions SELECT `YearD`,`ArrDelayMinutes`,`Carrier` FROM ontime.ontime_part WHERE (((NOT (DayOfWeek IN (6, 7)))) AND ((NOT (OriginState IN ('AK', 'HI', 'PR', 'VI')))) AND ((NOT (DestState IN ('AK', 'HI', 'PR', 'VI'))))) AND (yearD >= 2001 AND yearD < 2002)\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: ontime_part
partitions: p2001
type: ALL
possible_keys: NULL
key: NULL
key_len: NULL
ref: NULL
rows: 5814106
Extra: Using where
1 row in set (0.00 sec)

In this case, the server has 12 cores and 24 hyper-threads, so it can run the 26 queries in parallel quite efficiently, and the partitioned table helps avoid contention issues (I wish MySQL could scan partitions in parallel, but it cannot at the time of writing).


Another interesting thing is that Spark can "push down" some of the conditions to MySQL, but only those inside the WHERE clause. All of the GROUP BY/ORDER BY aggregations run inside Spark itself: it needs to retrieve all the rows matching those conditions from MySQL, so GROUP BY/ORDER BY are not pushed down to MySQL.

That also means that a query without a WHERE clause (for example, "select count(*) as cnt, carrier from ontime group by carrier order by cnt desc limit 10") will have to retrieve all the data from MySQL and load it into Spark (as opposed to doing all the GROUP BY work inside MySQL). Running such a query in Spark can be slower or faster than in MySQL, depending on the data volume and index usage, but it does require more resources and potentially much more memory. The above query gets translated into 26 queries, each of the form "select carrier from ontime_part where (yearD >= N AND yearD < N)".
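As a sketch of what the rewritten queries look like, the per-partition SELECT that Spark sends to MySQL can be reconstructed like this (a hypothetical helper in plain Python, mirroring the SELECTs seen in the processlist: only the projection and the pushed-down WHERE filters go to MySQL, with the partition predicate appended):

```python
def partition_query(table, columns, pushed_filters, partition_predicate):
    """Hypothetical sketch of the per-partition SELECT Spark sends to MySQL:
    projection + pushed-down WHERE filters + partition predicate.
    GROUP BY / ORDER BY / HAVING stay in Spark."""
    cols = ",".join(f"`{c}`" for c in columns)
    where = " AND ".join(f"({f})" for f in [*pushed_filters, partition_predicate])
    return f"SELECT {cols} FROM {table} WHERE {where}"
```

For the report query above, partition_query("ontime.ontime_part", ["YearD", "ArrDelayMinutes", "Carrier"], ["NOT (DayOfWeek IN (6, 7))"], "yearD >= 2001 AND yearD < 2002") produces one of the 26 SELECTs from the processlist (modulo the extra parenthesis nesting Spark adds).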


Pushing the whole query down to MySQL

If we want to avoid sending all the data from MySQL to Spark, we can create a temporary table on top of a query (similar to MySQL's "create temporary table as select ..."). In Scala:

val tableQuery = "(select yeard, count(*) from ontime group by yeard) tmp"
val jdbcDFtmp = spark.read.format("jdbc").options(
  Map("url" -> "jdbc:mysql://localhost:3306/ontime?user=root&password=",
  "dbtable" -> tableQuery,
  "fetchSize" -> "10000"
)).load()
jdbcDFtmp.createOrReplaceTempView("ontime_tmp")

And in Spark SQL:

CREATE TEMPORARY VIEW ontime_tmp
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:mysql://localhost:3306/ontime?user=root&password=mysql",
dbtable "(select yeard, count(*) from ontime_part group by yeard) tmp",
fetchSize "1000"
);
select * from ontime_tmp;

Please note:

  1. We do not want to use "partitionColumn" here; otherwise we will see 26 queries like "SELECT yeard, count(*) FROM (select yeard, count(*) from ontime_part group by yeard) tmp where (yearD >= N AND yearD < N)" sent to MySQL, which is obviously far from optimal.
  2. This is not a great way to use Spark; it is more of a workaround. The only good reason to do it is to be able to use the query result in additional queries.

Query caching in Spark

Another option is to cache the query results (or even a whole table) and then use .filter in Scala for faster processing. This approach requires giving Spark sufficient memory. The good news is that we can add additional nodes to the Spark cluster to get more memory.

Spark SQL example:

CREATE TEMPORARY VIEW ontime_latest
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:mysql://localhost:3306/ontime?user=root&password=",
dbtable "ontime.ontime_part partition (p2013, p2014)",
fetchSize "1000",
partitionColumn "yearD", lowerBound "1988", upperBound "2014", numPartitions "26"
);
cache table ontime_latest;
spark-sql> cache table ontime_latest;
Time taken: 465.076 seconds
spark-sql> select count(*) from ontime_latest;
5349447
Time taken: 0.526 seconds, Fetched 1 row(s)
spark-sql> select count(*), dayofweek from ontime_latest group by dayofweek;
790896 1
634664 6
795540 3
794667 5
808243 4
743282 7
782155 2
Time taken: 0.541 seconds, Fetched 7 row(s)
spark-sql> select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_latest WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') and (origin='RDU' or dest = 'RDU') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;
2013 2013 MQ 9339 1734 0.19
2013 2013 B6 3302 516 0.16
2013 2013 EV 9225 1331 0.14
2013 2013 UA 1317 177 0.13
2013 2013 AA 5354 620 0.12
2013 2013 9E 5520 593 0.11
2013 2013 WN 10968 1130 0.1
2013 2013 US 5722 549 0.1
2013 2013 DL 6313 478 0.08
2013 2013 FL 2433 205 0.08
Time taken: 2.036 seconds, Fetched 10 row(s)

Here we cache partitions p2013 and p2014 in Spark. This retrieves the data from MySQL and loads it into Spark. As you would expect, queries against the cached data run much faster.

With Scala we can cache a query result and then use filters to retrieve the information we need:

val sqlDF = sql("SELECT flightdate, origin, dest, depdelayminutes, arrdelayminutes, carrier, TailNum, Cancelled, Diverted, Distance from ontime")
sqlDF.cache().show()
scala> sqlDF.filter("flightdate='1988-01-01'").count()
res5: Long = 862

Using Spark with Percona XtraDB Cluster

Since Spark can run in cluster mode and scale out to more and more nodes, reading from a single MySQL server quickly becomes the bottleneck. We can use MySQL replication slaves or Percona XtraDB Cluster (PXC) nodes as Spark data sources. To test this, I provisioned a three-node Percona XtraDB Cluster on AWS (m4.2xlarge Ubuntu instances) and started Apache Spark on each node:

  1. Node1 (pxc1): Percona Server + Spark Master + Spark worker node + Spark SQL
  2. Node2 (pxc2): Percona Server + Spark worker node
  3. Node3 (pxc3): Percona Server + Spark worker node

All Spark worker nodes use the same memory configuration:

cat conf/spark-env.sh
export SPARK_WORKER_MEMORY=24g

Then I can start spark-sql (the Connector/J JAR file needs to be copied to all nodes as well):

$ ./bin/spark-sql --driver-memory 4G --master spark://pxc1:7077

When creating a table I still used localhost to connect to MySQL (url "jdbc:mysql://localhost:3306/ontime?user=root&password=xxx"). As the Spark workers run on the same hosts as the Percona Cluster nodes, they use their local connections. Running a Spark SQL query then evenly distributes all 26 MySQL queries among the three MySQL nodes.


Alternatively, we can run the Spark cluster on separate hosts and connect through HAProxy to load-balance across the Percona XtraDB Cluster nodes.

Query performance benchmark

Finally, here are the query response times on the three-node AWS Percona XtraDB Cluster:

Query 1:

select min(yearD), max(yearD) as max_year, Carrier, count(*) as cnt, sum(if(ArrDelayMinutes>30, 1, 0)) as flights_delayed, round(sum(if(ArrDelayMinutes>30, 1, 0))/count(*),2) as rate FROM ontime_part WHERE DayOfWeek not in (6,7) and OriginState not in ('AK', 'HI', 'PR', 'VI') and DestState not in ('AK', 'HI', 'PR', 'VI') GROUP by carrier HAVING cnt > 1000 and max_year > '1990' ORDER by rate DESC, cnt desc LIMIT 10;


Query / Index type               MySQL time         Spark time (3 nodes)   Speedup
No covered index (partitioned)   19 min 16.58 sec   192.17 sec             6.02
Covered index (partitioned)      2 min 10.81 sec    48.38 sec              2.7

 

Query 2: select dayofweek, count(*) from ontime_part group by dayofweek;

Query / Index type               MySQL time         Spark time (3 nodes)   Speedup
No covered index (partitioned)   19 min 15.21 sec   195.058 sec            5.92
Covered index (partitioned)      1 min 10.38 sec    27.323 sec             2.58

 

That looks really good, but it can be better. With three m4.2xlarge nodes we have 8*3 = 24 CPU cores (although they are shared between Spark and MySQL). We expected about a 10x improvement, especially without a covered index.


However, the amount of RAM on an m4.2xlarge does not let MySQL keep the working set in memory, so all data reads go through EBS IOPS, which gives us only ~120MB/sec of read bandwidth. I redid the tests on three physical boxes with:

  • 28 cores E5-2683 v3 @ 2.00GHz
  • 240GB of RAM
  • Samsung 850 PRO

This time the test ran completely in memory:

Query 1 (see above):

Query / Index type               MySQL time        Spark time (3 nodes)   Speedup
No covered index (partitioned)   3 min 13.94 sec   14.255 sec             13.61
Covered index (partitioned)      2 min 2.11 sec    9.035 sec              13.52

Query 2: select dayofweek, count(*) from ontime_part group by dayofweek;

Query / Index type               MySQL time        Spark time (3 nodes)   Speedup
No covered index (partitioned)   2 min 0.36 sec    7.055 sec              17.06
Covered index (partitioned)      1 min 6.85 sec    4.514 sec              14.81

Because the table only has 26 partitions, we did not really have enough concurrency to keep all the CPU cores and memory busy. I also tested a non-partitioned table with an id primary key, using 128 partitions.


A note on partitioning

In these tests we used a partitioned table (partitioned by year) to reduce MySQL resource contention. Note that the "partitionColumn" option in Spark does not require the MySQL table to be partitioned. For example, if the table has a primary key, we can use it in the CREATE VIEW in Spark:

CREATE OR REPLACE TEMPORARY VIEW ontime
USING org.apache.spark.sql.jdbc
OPTIONS (
url "jdbc:mysql://127.0.0.1:3306/ontime?user=root&password=",
dbtable "ontime.ontime",
fetchSize "1000",
partitionColumn "id", lowerBound "1", upperBound "162668934", numPartitions "128"
);

Assuming we have enough MySQL servers (i.e. many nodes or slaves), we can increase the number of partitions for more concurrency (rather than being limited to the 26 partitions we get when partitioning by year). In fact, this setup gives an even faster response time: Query 1 took only 6.44 seconds.


Where Spark doesn't work well

For queries that already run fast (those fully resolved by an index, or that can use indexes effectively), it makes no sense to use Spark. The overhead of retrieving the data from MySQL and loading it into Spark is significant, and it hurts the query response time. For example, a query like select count(*) from ontime_part where YearD = 2013 and DayOfWeek = 7 and OriginState = 'NC' and DestState = 'NC'; only scans 1300 rows and returns instantly (a near-zero response time in MySQL).

An even better example is select max(id) from ontime_part. In MySQL this query uses the index and all the computation is done inside MySQL. Spark, on the other hand, has to retrieve all the IDs from MySQL (select id from ontime_part) and then calculate the maximum: this query took 24.267 seconds in Spark!


Conclusion

Using Apache Spark as an additional engine layer on top of MySQL can help speed up slow reporting queries and add much-needed scalability for long-running queries. In addition, Spark can help cache the results of frequently-used queries.



PS: Visual Explain Plan With Spark

The Spark Web GUI provides lots of ways to monitor Spark jobs; for example, it shows the job's progress:

[Screenshot: Spark Web UI showing job progress]

as well as a visual breakdown of the SQL query:

[Screenshot: Spark Web UI visual SQL query plan]