MySQL ODBC update query is very slow

Posted: 2022-10-16 03:59:06

Our Access 2010 database recently hit the 2GB file size limit, so I ported the database to MySQL.

I installed MySQL Server 5.6.1 x64 on Windows Server 2008 x64, with all OS updates and patches applied.

I am using the MySQL ODBC 5.2w x64 Driver, as it seems to be the fastest.

My box has an i7-3960X with 64GB RAM and a 480GB SSD.

I use Access Query Designer as I prefer the interface and I regularly need to append missing records from one table to the other.

As a test, I have a simple Access Database with two linked tables:

tblData links to another Access Database and

tblOnline uses a SYSTEM DSN to a linked ODBC table.

Both tables contain over 10 million records. Some of my ported working tables already have over 30 million records.

To select records to append, I use a field called INDBYN which is either true or false.

First I run an Update query on tblData:

UPDATE tblData SET tblData.InDBYN = False;

Then I update all matching records:

UPDATE tblData INNER JOIN tblOnline ON tblData.IDMaster = tblOnline.IDMaster SET tblData.InDBYN = True;

This works reasonably fast, even to the linked ODBC table.

Lastly I Append all records where INDBYN is False to tblOnline. This is also acceptable speed, although slower than appends to a Linked Access table.
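In Access SQL, that append step looks roughly like this (a sketch; it assumes tblOnline accepts all of tblData's columns, which fits since the two tables are identical in structure):

```sql
INSERT INTO tblOnline
SELECT tblData.*
FROM tblData
WHERE tblData.InDBYN = False;
```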

Within Access everything works 100% and is incredibly fast, except the DB is getting too big.

On the Linked Access Table, it takes 2m15s to update 11,500,000 records.

However, I now need to move the SOURCE table to MySQL, as it is reaching the 2GB limit.

So in future I will need to run the UPDATE statement on a linked ODBC table.

So far, when I run the same simple UPDATE query on the linked ODBC table it runs for more than 20 minutes, and then bombs out saying the query has exceeded the 2GB memory limit.

Both tables are identical in structure.

I do not know how to resolve this and need advice please.

I prefer to use Access as the front-end as I have hundreds of queries already designed for the app, and there is no time to re-develop the app.

I use the InnoDB engine and have tried various tweaks without success. Since my database uses relational tables, InnoDB looked like the better option over MyISAM.

I have turned doublewrite on and off and tried various buffer pool sizes, including query cache. It does not make a difference on this particular query.

My current my.ini file looks like this:

#-----------------------------------------------------------------------
# MySQL Server Instance Configuration File
# ----------------------------------------------------------------------

[client]
no-beep
port=3306

[mysql]
default-character-set=utf8
server_type=3

[mysqld]
port=3306
basedir="C:\Program Files\MySQL\MySQL Server 5.6\"
datadir="E:\MySQLData\data\"
character-set-server=utf8
default-storage-engine=INNODB
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
log-output=FILE
general-log=0
general_log_file="SQLSERVER.log"
slow-query-log=1
slow_query_log_file="SQLSERVER-slow.log"
long_query_time=10
log-error="SQLSERVER.err"
max_connections=100
query_cache_size=20M
table_open_cache=2000
tmp_table_size=502M
thread_cache_size=9
myisam_max_sort_file_size=100G
myisam_sort_buffer_size=1002M
key_buffer_size=8M
read_buffer_size=64K
read_rnd_buffer_size=256K
sort_buffer_size=256K
innodb_additional_mem_pool_size=32M
innodb_flush_log_at_trx_commit=1
innodb_log_buffer_size=16M
innodb_buffer_pool_size=48G
innodb_log_file_size=48M
innodb_thread_concurrency=0
innodb_autoextend_increment=64M
innodb_buffer_pool_instances=8
innodb_concurrency_tickets=5000
innodb_old_blocks_time=1000
innodb_open_files=2000
innodb_stats_on_metadata=0
innodb_file_per_table=1
innodb_checksum_algorithm=0
back_log=70
flush_time=0
join_buffer_size=256K
max_allowed_packet=4M
max_connect_errors=100
open_files_limit=4110
query_cache_type=1
sort_buffer_size=256K
table_definition_cache=1400
binlog_row_event_max_size=8K
sync_relay_log=10000
sync_relay_log_info=10000
tmpdir="G:/MySQLTemp"
innodb_write_io_threads=16
innodb_doublewrite
innodb=ON
innodb_fast_shutdown=1
query_cache_min_res_unit=4096
query_cache_limit=1048576
innodb_data_home_dir="E:/MySQLData/data"
bulk_insert_buffer_size=8388608

Any advice will be greatly appreciated. Thank you in advance.

3 Answers

#1


1  

Communication between MS Access and MySQL through a linked table is slow. Terribly slow. That is a fact which can't be changed. Why does it happen? Access first loads the data from MySQL, then it processes the command, and finally it writes the data back. In addition, it does this row by row! However, you can avoid this if you don't need to use parameters or data from local tables in your "update" query. (In other words, if your query is always the same and uses only MySQL data.)

The trick is to force the MySQL server to process the query instead of Access! This can be achieved by creating a "pass-through" query in Access, where you write your SQL code directly (in MySQL syntax). Access then sends this command to the MySQL server, and it is processed directly within that server. So your query will be almost as fast as running it on a local Access table.
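For example, once both tables live in MySQL, the two flag updates from the question could be sent as a pass-through query in MySQL syntax (a sketch; it assumes the Yes/No field is stored as a 0/1 column on the MySQL side):

```sql
UPDATE tblData SET InDBYN = 0;

UPDATE tblData d
INNER JOIN tblOnline o ON d.IDMaster = o.IDMaster
SET d.InDBYN = 1;
```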

#2


0  

Access is a single-user system. MySQL with InnoDB is a transaction-protected multi-user system.

When you issue an UPDATE command that hits ten million or so rows, MySQL has to construct rollback information in case the operation fails before it hits all the rows. This takes a lot of time and memory.

Try switching your table's storage engine to MyISAM if you're going to run these truly massive UPDATE and INSERT commands. MyISAM isn't transaction-protected, so these operations may run faster.
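Switching an existing table's engine is a single statement; note that it rewrites the whole table, which can take a while on 10M+ rows:

```sql
ALTER TABLE tblData ENGINE=MyISAM;
```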

You may find it helpful to do your data migration with some tool other than ODBC. ODBC is severely limited in its ability to handle lots of data, as you have discovered. For example, you could export your Access tables to flat files and then import them with a MySQL client program. See here... https://*.com/questions/9185/what-is-the-best-mysql-client-application-for-windows
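For instance, after exporting an Access table to CSV, a bulk load on the MySQL side might look like this (a sketch; the file path, delimiters, and header line are assumptions about your export):

```sql
LOAD DATA LOCAL INFILE 'C:/export/tblData.csv'
INTO TABLE tblData
FIELDS TERMINATED BY ',' ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
IGNORE 1 LINES;
```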

Once you've imported your data to MySQL, you can then run Access-based queries. But avoid UPDATE requests that hit everything in the database.

#3


0  

Ollie, I get your point on avoiding UPDATES that hit all rows. I use that to flag rows which are missing from the destination database, and it has been a quick and easy way to append only the missing rows. I see SQLyog has an import tool to Append new records only, but this still runs through all rows in the import table, and runs for hours. I will see if I can export only the data I want to CSV, but would still be nice to get the ODBC connector to work faster than present, if at all possible.
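Once both tables are in MySQL, the "append only missing rows" step could also run entirely server-side as a single pass-through statement, avoiding the flag column and the full-table UPDATE altogether (a sketch; it relies on the two tables having identical structure, as stated above):

```sql
INSERT INTO tblOnline
SELECT d.*
FROM tblData d
LEFT JOIN tblOnline o ON o.IDMaster = d.IDMaster
WHERE o.IDMaster IS NULL;
```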
