SQL query performance: difference between LIMIT row counts

Time: 2022-11-24 13:26:34

I have a huge table with 170,000 records.

What is the difference between this query

Showing rows 0 - 299 (1,422 total, Query took 1.9008 sec)
    SELECT 1 FROM `p_apartmentbuy` p 
    where 
    p.price between 500000000 and 900000000
    and p.yard = 1
    and p.dateadd between 1290000000 and 1320000000
    ORDER BY `p`.`id` desc
    limit 1669

Explain: (EXPLAIN output not preserved)

And this one:

Showing rows 0 - 299 (1,422 total, Query took 0.2625 sec)
    SELECT 1 FROM `p_apartmentbuy` p 
    where 
    p.price between 500000000 and 900000000
    and p.yard = 1
    and p.dateadd between 1290000000 and 1320000000
    ORDER BY `p`.`id` desc
    limit 1670

Explain: (EXPLAIN output not preserved)

Both of these queries use the same table with the same data and have the same WHERE clause; only the LIMIT row count is different.

2 Answers

#1 (score: 2)

MySQL has a buffer for sorting. When the stuff to be sorted is too big, it sorts chunks, then mergesorts them. This is called "filesort". Your 1670-th row apparently just overflows the sort buffer.

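A quick way to check whether the sort actually spills to disk is to watch the `Sort_merge_passes` counter and the sort buffer size. This is just a sketch using standard MySQL statements, run in the same session as the query:

    -- Current size of the per-session sort buffer (bytes)
    SHOW VARIABLES LIKE 'sort_buffer_size';

    -- Compare this counter before and after running the LIMIT 1670 query;
    -- if it increases, the filesort did not fit in the buffer and needed
    -- on-disk merge passes.
    SHOW SESSION STATUS LIKE 'Sort_merge_passes';

    -- Optionally raise the buffer for this session only and retry the query
    SET SESSION sort_buffer_size = 4 * 1024 * 1024;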

Read more details here.

Now why it picks another key for the in-memory sort... I am not too sure; but apparently its strategy is not quite good since it ends up being slower.

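If you want to see why the optimizer changes strategy, the optimizer trace shows its cost estimates for each candidate plan. A sketch, assuming MySQL 5.6 or later where `information_schema.OPTIMIZER_TRACE` is available:

    SET optimizer_trace = 'enabled=on';

    SELECT 1 FROM `p_apartmentbuy` p
    WHERE p.price BETWEEN 500000000 AND 900000000
      AND p.yard = 1
      AND p.dateadd BETWEEN 1290000000 AND 1320000000
    ORDER BY p.id DESC
    LIMIT 1670;

    -- The trace (JSON) lists the candidate indexes and the cost the
    -- optimizer assigned to each, which explains the plan it picked.
    SELECT * FROM information_schema.OPTIMIZER_TRACE;

    SET optimizer_trace = 'enabled=off';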

#2 (score: 1)

Recap: it is odd that the query returning more rows runs much faster.

This is not related to buffer vs. file sort; sorting 1,400 records takes well under 1 second.

The first EXPLAIN shows the query optimizer doing a linear scan; the second shows it using an index. Even a partially helpful index is usually much better than none at all.

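To see this, wrap both versions of the query from the question in EXPLAIN and compare the `type`, `key`, and `Extra` columns (a full table scan shows up as `type: ALL`):

    EXPLAIN SELECT 1 FROM `p_apartmentbuy` p
    WHERE p.price BETWEEN 500000000 AND 900000000
      AND p.yard = 1
      AND p.dateadd BETWEEN 1290000000 AND 1320000000
    ORDER BY p.id DESC
    LIMIT 1669;

    -- Repeat with LIMIT 1670; if the `key` column differs between the two
    -- plans, the optimizer switched strategies at that row count.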

Internally, MySQL maintains statistics about the size of its indexes and tries to guess which index to use, or whether a linear scan would be faster. This estimate is data specific; I've seen MySQL use the right index 99 times out of 100, but every now and then pick a different one and run the query 50x slower.

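Since those statistics drift as the data changes, refreshing them sometimes steers the optimizer back to the better plan; whether it helps here depends on your data:

    -- Rebuild the key distribution statistics the optimizer relies on
    ANALYZE TABLE `p_apartmentbuy`;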

You can override the built-in query optimizer and specify the index to use manually, with SELECT ... FROM ... FORCE INDEX (...)

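For example, the query from the question with a forced index. The index name `idx_price` is hypothetical; substitute an index that actually exists on the table (`SHOW INDEX FROM p_apartmentbuy` lists them):

    -- `idx_price` is a placeholder name; pick a real index from SHOW INDEX
    SELECT 1 FROM `p_apartmentbuy` p FORCE INDEX (idx_price)
    WHERE p.price BETWEEN 500000000 AND 900000000
      AND p.yard = 1
      AND p.dateadd BETWEEN 1290000000 AND 1320000000
    ORDER BY p.id DESC
    LIMIT 1670;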