Primary key id reaching limit of bigint data type

Date: 2022-09-16 00:03:30

I have a table that is exposed to large inserts and deletes on a regular basis (and because of this there are large gaps in the number sequence of the primary id column). It had a primary id column of type 'int' that was changed to 'bigint'. Despite this change, the limit of this datatype will also inevitably be exceeded at some point in the future (current usage would indicate this to be the case within the next year or so).

How do you handle this scenario? I'm wondering (shock horror) whether I even need the primary key column as it's not used in any obvious way in any queries or referenced by other tables etc. Would removing the column be a solution? Or would that sort of action see you expelled from the mysql community in disgust?!

We're already nearly at the 500 million mark for the auto increment id. The table holds keywords associated with file data in a separate table. Each file data row could have as many as 30 keywords associated with it in the keywords table, so they really start to stack up after you've got tens of thousands of files constantly being inserted and deleted. Currently the keyword table contains the keyword and the id of the file it's associated with, so if I got rid of the current primary id column, there would be no unique identifier other than the keyword (varchar) and file id (int) fields combined, which would be a terrible primary key.
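For what it's worth, if the surrogate id really is unused, one alternative is a composite primary key on the two columns already in the table. A minimal MySQL sketch, assuming hypothetical table and column names (the actual schema isn't shown) and assuming a keyword never repeats within one file:

```sql
-- Hypothetical schema: table and column names are assumptions.
-- (file_id, keyword) is unique if each keyword appears at most once
-- per file, so no surrogate auto-increment column is needed.
CREATE TABLE keywords (
    file_id INT NOT NULL,
    keyword VARCHAR(255) NOT NULL,
    PRIMARY KEY (file_id, keyword),
    FOREIGN KEY (file_id) REFERENCES files (id)
) ENGINE=InnoDB;
```

A wide composite key is not automatically terrible here: with InnoDB the primary key is the clustered index, so deleting all keywords for a given file_id becomes one contiguous range delete rather than scattered row lookups.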

All thoughts/answers/experiences/solutions very gratefully received.

4 Answers

#1 (score: 9)

If you don't need that column because you have some other unique identifier for a record, such as a supplied measurement_id or even a combined key (measurement_id + location_id), then there should be no need for an auto-increment key. If there is any chance you won't have a unique key, then definitely create one.

What if I need a very, very big auto-increment ID?

Are you really sure you have so many inserts and deletes that you will reach the limit?

#2 (score: 22)

I know this was already answered a year ago, but just to follow up on Luc Franken's answer:

If you inserted 500 million rows per second, it would take roughly 1,170 years to reach the limit of an unsigned BIGINT (18,446,744,073,709,551,615). So yeah, I don't think you need to worry about that.

#3 (score: 3)

If we inserted one hundred thousand (100,000) records per second into the table, it would take 2,924,712 years.

If we inserted one million (1,000,000) records per second, it would take 292,471 years.

If we inserted ten million (10,000,000) records per second, it would take 29,247 years.

If we inserted 100 million records per second, it would take 2,925 years.

If we inserted 1,000 million (1 billion) records per second, it would take 292 years.

So don't worry about it.
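The arithmetic behind figures like these is easy to check without leaving MySQL: divide the signed BIGINT maximum by an insert rate and by the seconds in a 365-day year (integer division, so treat the results as rough):

```sql
-- Signed BIGINT max: 9,223,372,036,854,775,807
-- years ~= max / (rows per second) / (31,536,000 seconds per 365-day year)
SELECT 9223372036854775807 DIV 100000  DIV 31536000 AS years_at_100k_per_s,
       9223372036854775807 DIV 1000000 DIV 31536000 AS years_at_1m_per_s;
```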

#4 (score: 0)

Maybe it's a bit too late, but you can add a trigger on DELETE.

Here is sample code for SQL Server:

CREATE TRIGGER resetidentity
    ON dbo.[table_name]
    FOR DELETE
AS
    -- Find the highest surviving ID (0 if the table is now empty).
    -- BIGINT rather than INT, since the column in question is BIGINT.
    DECLARE @MaxID BIGINT
    SELECT @MaxID = ISNULL(MAX(ID), 0)
    FROM dbo.[table_name]
    -- Reseed so the next insert gets @MaxID + 1.
    DBCC CHECKIDENT('table_name', RESEED, @MaxID)
GO

In a nutshell, this will reset your ID (assuming it is an auto-increment primary key). For example: if you have 800 rows and delete the last 400 of them, the next insert will start at 401 instead of 801.

But the downside is that it will not renumber your IDs if you delete rows in the middle. For example, if you have 800 rows and delete IDs 200-400, the next new row still gets ID 801.
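Since the question is about MySQL, which has no DBCC CHECKIDENT, a rough equivalent is resetting AUTO_INCREMENT. A sketch, assuming a table named keywords: InnoDB refuses to set the counter below MAX(id) + 1, so asking for 1 effectively reseeds it to the highest surviving id plus one:

```sql
-- MySQL/InnoDB sketch (table name assumed). InnoDB clamps the requested
-- value up to MAX(id) + 1, so this reclaims the gap left by trailing deletes.
ALTER TABLE keywords AUTO_INCREMENT = 1;
```

As with the trigger above, reusing ids this way is only safe if nothing outside the table has ever stored one of the deleted ids.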
