Maximum number of rows in an SQLite table

Date: 2022-11-25 22:15:34

Given a simple sqlite3 table (create table data (key PRIMARY KEY, value)) with a key size of 256 bytes and a value size of 4096 bytes, what is the limit (ignoring disk space limits) on the maximum number of rows in this sqlite3 table? Are there limits associated with the OS (win32, Linux, or Mac)?

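For concreteness, here is a minimal sketch (not part of the original question) of the table being described, populated with one row of the stated sizes:

```python
import os
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (key PRIMARY KEY, value)")

# One row of the described shape: a 256-byte key and a 4096-byte value.
con.execute("INSERT INTO data VALUES (?, ?)",
            (os.urandom(256), os.urandom(4096)))
con.commit()
```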

7 solutions

#1 (11 votes)

In SQLite3, field sizes aren't fixed. The engine allocates only as much space as each cell needs.

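As a quick sketch of that point: the same untyped column happily holds values of very different sizes, and each cell stores only the bytes it actually needs.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (key PRIMARY KEY, value)")

# The same column holds a 5-byte string and a 1 MiB blob side by side.
con.execute("INSERT INTO data VALUES (?, ?)", ("a", "hello"))
con.execute("INSERT INTO data VALUES (?, ?)", ("b", b"\x00" * 2**20))

for key, nbytes in con.execute("SELECT key, length(value) FROM data"):
    print(key, nbytes)  # prints: a 5, then b 1048576
```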

For the file-size limits, see this SO question:
What are the performance characteristics of sqlite with very large database files?


#2 (25 votes)

As of Jan 2017, the sqlite3 limits page defines the practical limits relevant to this question in terms of the maximum database size, which is 140 terabytes:


Maximum Number Of Rows In A Table


The theoretical maximum number of rows in a table is 2^64 (18446744073709551616 or about 1.8e+19). This limit is unreachable since the maximum database size of 140 terabytes will be reached first. A 140 terabytes database can hold no more than approximately 1e+13 rows, and then only if there are no indices and if each row contains very little data.


So with a maximum database size of 140 terabytes, you'd be lucky to get ~1 trillion rows, since a genuinely useful table with real data in it will have its row count constrained by the size of that data. More realistically, you could have up to tens of billions of rows in a 140 TB database.

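Applying that to the schema in the question gives a rough back-of-the-envelope figure; the per-row overhead below is an assumption, since the real overhead depends on page size and record headers:

```python
# Rough practical row limit for a 256-byte-key / 4096-byte-value table,
# using the 140 TB maximum database size quoted above.
MAX_DB_BYTES = 140 * 10**12   # 140 terabytes
ROW_PAYLOAD = 256 + 4096      # key + value
OVERHEAD = 24                 # assumed per-row b-tree/record overhead

max_rows = MAX_DB_BYTES // (ROW_PAYLOAD + OVERHEAD)
print(f"~{max_rows:.2e} rows")  # about 3.2e+10, i.e. tens of billions
```

That lands in the same "tens of billions" range as the estimate above.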

#3 (11 votes)

I have an SQLite database 3.3 GB in size with 25 million rows of stored numeric logs, and I run calculations on them; it works fast and well.


#4 (6 votes)

I have a 7.5 GB SQLite database which stores 10.5 million rows. Querying is fast as long as you have the correct indexes. To get inserts to run quickly, you should use transactions. Also, I found it's better to create the indexes after all rows have been inserted; otherwise the insert speed is quite slow.

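A minimal sketch of that load pattern (the secondary index on value is illustrative):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE data (key PRIMARY KEY, value)")

rows = ((f"key-{i}", f"value-{i}") for i in range(100_000))

# Wrap the bulk load in a single transaction; without it, every INSERT
# becomes its own transaction and throughput collapses.
with con:  # commits on success, rolls back on error
    con.executemany("INSERT INTO data VALUES (?, ?)", rows)

# Create secondary indexes only after loading; maintaining them during
# the inserts would slow each one down.
con.execute("CREATE INDEX idx_data_value ON data(value)")
```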

#5 (4 votes)

The answer you want is right here.


Each OS you mentioned supports multiple file system types, so the actual limits are per-filesystem, not per-OS. The full constraint matrix is hard to summarize on SO, but while some file systems cap file sizes, every major OS kernel today supports at least one file system that handles extremely large files.


The maximum page size of an sqlite3 db is quite large, 65536 bytes, although using it requires some configuration (the default is smaller). I presume an index must refer to pages by page number, but the result is likely to be that an OS or environment limit is reached first.

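You can check the relevant numbers on a live connection; a sketch using PRAGMAs SQLite actually exposes:

```python
import sqlite3

con = sqlite3.connect(":memory:")

page_size = con.execute("PRAGMA page_size").fetchone()[0]
max_pages = con.execute("PRAGMA max_page_count").fetchone()[0]

# Pages are addressed by 32-bit numbers, so the file-size ceiling for
# this connection is page_size * max_page_count bytes.
print(f"page size:      {page_size} bytes")
print(f"max page count: {max_pages}")
print(f"size ceiling:   {page_size * max_pages / 1e12:.1f} TB")
```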

#6 (2 votes)

Essentially no real limits.


See http://www.sqlite.org/limits.html for details.


#7 (0 votes)

No hard limits, but in practice the sqlite database becomes unusable past a certain point. For huge databases, PostgreSQL is BY FAR the best free database. In my case, that point was around 1 million rows on my 64-bit Linux machine, a quad-core dual-processor box with 8 GB RAM and Raptor hard disks. PostgreSQL is unbeatable, even by a tuned MySQL database. (Posted in 2011.)

