How to improve query speed with big data in MySQL?

Time: 2021-09-20 06:30:22

The table structure is as follows:

[screenshot of the table structure; image not preserved]
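Reconstructed as a guess from the query and the answer below (every type and length here is an assumption, not the poster's actual DDL):

CREATE TABLE tb_torrent (
  id BIGINT NOT NULL PRIMARY KEY,  -- the answer mentions BIGINT and a "key" icon on id
  name TEXT,
  infohash TEXT,
  files TEXT                       -- the answer suggests the TEXT columns may be large
);  -- ENGINE unknown; the answer asks whether it is MyISAM or InnoDB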

When I run this query, the execution time is about 2-3 minutes:


select id,name,infohash,files from tb_torrent where id between 0 and 10000;

There are just over 200,000 rows in the table; why is the execution so slow, and how can I fix it?


1 solution

#1



The unnecessary use of BIGINT is not enough to explain the sluggishness. Let's look for other issues.


Does that "key" icon mean that there is an index on id? Perchance, is it the PRIMARY KEY?

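To confirm, EXPLAIN shows whether the range scan on id actually uses that index (the key column should name the index, and rows should be in the 10K range rather than the whole table):

EXPLAIN SELECT id, name, infohash, files FROM tb_torrent WHERE id BETWEEN 0 AND 10000;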

What ENGINE is in use? If it is MyISAM, then you have the drawback of the PK not being 'clustered' with the data, thereby making the 10K lookups slower.

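Either of these standard commands will reveal the engine:

SHOW CREATE TABLE tb_torrent;
SHOW TABLE STATUS LIKE 'tb_torrent';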

What will you do with 10K rows? Think of the network costs, and the memory costs in the client.

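If all the rows really are needed, keyset pagination keeps each round trip and the client's memory bounded; a minimal sketch (the 1,000-row batch size is an arbitrary choice):

SELECT id, name, infohash, files
FROM tb_torrent
WHERE id > 0          -- replace 0 with the last id from the previous batch
ORDER BY id
LIMIT 1000;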

But maybe this is the real problem... If this is InnoDB, and if the TEXT columns are "big", then the values are stored "off-record". This leads to another disk hit to fetch any big text value. Change them to VARCHAR(...) with some realistic maximum length.

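A sketch of such a change; the lengths here are assumptions (a BitTorrent infohash is 40 hex characters for SHA-1, but verify against your actual data before altering, since MODIFY will truncate or reject values that don't fit):

ALTER TABLE tb_torrent
  MODIFY infohash VARCHAR(64),
  MODIFY name VARCHAR(255);
-- files may genuinely need TEXT if it holds long file lists; leave it alone in that case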

How much RAM do you have? What is the value of innodb_buffer_pool_size? Did you time the query twice? (The first time would be I/O-bound; the second might be hitting cache.) How big (in MB or GB) is the table?

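Standard ways to get those numbers:

SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;

SELECT (data_length + index_length) / 1024 / 1024 AS table_mb
FROM information_schema.tables
WHERE table_schema = DATABASE() AND table_name = 'tb_torrent';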
