Setting the correct innodb_log_file_size in MySQL

Posted: 2022-09-24 10:16:34

We ran an ALTER TABLE today that took down the DB. We failed over to the slave, and in the post-mortem we discovered this in the MySQL error.log:

InnoDB: ERROR: the age of the last checkpoint is 90608129,
InnoDB: which exceeds the log group capacity 90593280.
InnoDB: If you are using big BLOB or TEXT rows, you must set the
InnoDB: combined size of log files at least 10 times bigger than the
InnoDB: largest such row.

This error rings true because we were working on a very large table that contains BLOB data types.

The best answer we found online said:

To solve it, you need to stop MySQL cleanly (very important), delete the existing InnoDB log files (probably ib_logfile* in your MySQL data directory, unless you've moved them), then adjust innodb_log_file_size to suit your needs, and then start MySQL again. This article from the MySQL performance blog might be instructive.
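For concreteness, here is a minimal sketch of that procedure, assuming a stock Linux install with the data directory at /var/lib/mysql (the 512M figure is only a placeholder; pick a size that fits your workload):

# 1. Stop MySQL cleanly -- an unclean shutdown here can corrupt InnoDB
sudo mysqladmin -u root -p shutdown

# 2. Move the old redo logs aside rather than deleting them outright
sudo mv /var/lib/mysql/ib_logfile0 /var/lib/mysql/ib_logfile1 /tmp/

# 3. Edit my.cnf, under [mysqld]:
#      innodb_log_file_size = 512M

# 4. Start MySQL; InnoDB creates new log files at the configured size
sudo service mysql start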

and in the comments:

Yes, the database server will effectively hang for any updates to InnoDB tables when the log fills up. It can cripple a site.

which is, I guess, what happened, based on our current (default) innodb_log_file_size of 48 MB?

SHOW GLOBAL VARIABLES LIKE '%innodb_log%';
+-----------------------------+----------+
| Variable_name               | Value    |
+-----------------------------+----------+
| innodb_log_buffer_size      | 8388608  |
| innodb_log_compressed_pages | ON       |
| innodb_log_file_size        | 50331648 |
| innodb_log_files_in_group   | 2        |
| innodb_log_group_home_dir   | ./       |
+-----------------------------+----------+

So, this leads me to two pointed questions and one open-ended one:

  1. How do we determine the largest row, so we can set our innodb_log_file_size to be bigger than that?
  2. What is the consequence of the action in step 1? I'd read about long recovery times with bigger logs.
  3. Is there anything else I should worry about regarding migrations, considering that we have a large table (650k rows, 6169.8 GB) with unrestrained, variable-length BLOB fields?

We're running MySQL 5.6, and here's our my.cnf:

[mysqld]

#defaults
basedir                   = /opt/mysql/server-5.6
datadir                   = /var/lib/mysql
port                      = 3306
socket                    = /var/run/mysqld/mysqld.sock
tmpdir                    = /tmp
bind-address              = 0.0.0.0

#logs
log_error                 = /var/log/mysql/error.log
expire_logs_days          = 4
slow_query_log            = on
long_query_time           = 1


innodb_buffer_pool_size   = 11G

#http://*.com/a/10866836/182484
collation-server          = utf8_bin
init-connect              ='SET NAMES utf8'
init_connect              ='SET collation_connection = utf8_bin'
character-set-server      = utf8
max_allowed_packet        = 64M
skip-character-set-client-handshake

#cache
query_cache_size          = 268435456
query_cache_type          = 1
query_cache_limit         = 1048576

As a follow-up to the suggestions listed below, I began investigating the file size of the table in question. I ran a script that wrote the combined byte size of the three BLOB fields to a table called pen_sizes. Here's the result of getting the largest byte sizes:
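The script itself isn't shown; a hypothetical reconstruction looks like the following, where blob_a, blob_b, and blob_c stand in for the real BLOB column names (the post doesn't name them):

-- Build a helper table of per-row combined BLOB sizes (column names assumed)
CREATE TABLE pen_sizes AS
SELECT id AS pen_id,
       COALESCE(LENGTH(blob_a), 0)
     + COALESCE(LENGTH(blob_b), 0)
     + COALESCE(LENGTH(blob_c), 0) AS pen_size
FROM pens;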

select pen_size as bytes,
  pen_size / 1024 / 1024 as mb,
  pen_id from pen_sizes
  group by pen_id
  order by bytes desc
  limit 40

+---------+------------+--------+
| bytes   | mb         | pen_id |
+---------+------------+--------+
| 3542620 | 3.37850571 |  84816 |
| 3379107 | 3.22256756 |  74796 |
| 3019237 | 2.87936878 | 569726 |
| 3019237 | 2.87936878 | 576506 |
| 3019237 | 2.87936878 | 576507 |
| 2703177 | 2.57795048 | 346965 |
| 2703177 | 2.57795048 | 346964 |
| 2703177 | 2.57795048 |  93706 |
| 2064807 | 1.96915340 | 154627 |
| 2048592 | 1.95368958 | 237514 |
| 2000695 | 1.90801144 |  46798 |
| 1843034 | 1.75765419 | 231988 |
| 1843024 | 1.75764465 | 230423 |
| 1820514 | 1.73617744 |  76745 |
| 1795494 | 1.71231651 | 650208 |
| 1785353 | 1.70264530 |  74912 |
| 1754059 | 1.67280102 | 444932 |
| 1752609 | 1.67141819 |  76607 |
| 1711492 | 1.63220596 | 224574 |
| 1632405 | 1.55678272 |  76188 |
| 1500157 | 1.43066120 |  77256 |
| 1494572 | 1.42533493 | 137184 |
| 1478692 | 1.41019058 | 238547 |
| 1456973 | 1.38947773 | 181379 |
| 1433240 | 1.36684418 |  77631 |
| 1421452 | 1.35560226 | 102930 |
| 1383872 | 1.31976318 |  77627 |
| 1359317 | 1.29634571 | 454109 |
| 1355701 | 1.29289722 | 631811 |
| 1343621 | 1.28137684 |  75256 |
| 1343621 | 1.28137684 |  75257 |
| 1334071 | 1.27226925 |  77626 |
| 1327063 | 1.26558590 | 129731 |
| 1320627 | 1.25944805 | 636914 |
| 1231918 | 1.17484856 | 117269 |
| 1223975 | 1.16727352 |  75103 |
| 1220233 | 1.16370487 | 326462 |
| 1220233 | 1.16370487 | 326463 |
| 1203432 | 1.14768219 | 183967 |
| 1200373 | 1.14476490 | 420360 |
+---------+------------+--------+

This makes me believe that the average row size is closer to 1 MB than the 10 MB suggested. Maybe the table size I listed earlier includes the indexes, too?

I ran

SELECT table_name AS "Tables", 
round(((data_length + index_length) / 1024 / 1024), 2) "Size in MB" 
FROM information_schema.TABLES 
WHERE table_schema = 'codepen'

+-------------------+------------+
| Tables            | Size in MB |
+-------------------+------------+
...snip
| pens              |    6287.89 |
...snip

1 Answer

#1

0. Preliminary information

Your settings:

innodb_log_file_size = 50331648
innodb_log_files_in_group = 2

Therefore your "log group capacity" = 2 × 50331648 = 100663296 bytes = 96 MB
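You can have the server do this arithmetic for you; both values below are standard MySQL system variables:

SELECT @@innodb_log_file_size * @@innodb_log_files_in_group / 1024 / 1024
       AS log_group_capacity_mb;
-- returns 96.00000000 with the settings above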

1. How to determine the largest row

There is no direct method. But one can easily calculate the size of any given row from the storage requirements of its column data types, as documented in the reference manual (compression should not matter to us here if, as I assume, rows are not compressed in the log files).
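The follow-up above already approximates this with the pen_sizes helper table; given such a table, the largest combined BLOB size is simply:

SELECT MAX(pen_size) AS largest_row_bytes
FROM pen_sizes;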

2. Impact of innodb_log_file_size

Reference manual:

The larger the value, the less checkpoint flush activity is needed in the buffer pool, saving disk I/O. Larger log files also make crash recovery slower, although improvements to recovery performance in MySQL 5.5 and higher make the log file size less of a consideration.

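One way to watch how close you get to the limit while a large write is running: in the LOG section of the output below, "Log sequence number" minus "Last checkpoint at" is the "age of the last checkpoint" from the error above, and it must stay below the log group capacity.

SHOW ENGINE INNODB STATUS\G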

3. Anything else to worry about

6169.8 GB / 650k rows = about 10 MB per row on average. This is a serious problem per se if you intend to use your database in a transactional, multi-user situation. Consider storing your BLOBs as files outside of the database, or at least store them in a separate MyISAM (non-transactional) table.
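You can sanity-check that average directly from information_schema (AVG_ROW_LENGTH is an estimate, so treat it as a rough figure):

SELECT table_name,
       table_rows,
       avg_row_length / 1024 / 1024 AS avg_row_mb
FROM information_schema.TABLES
WHERE table_schema = 'codepen' AND table_name = 'pens';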
