Is Cassandra database row size limited by available memory?

Time: 2023-01-23 08:47:29

I'm working with very long time series -- hundreds of millions of data points in one series -- and am considering Cassandra as a data store. In this question, one of the Cassandra committers (the über helpful jbellis) says that Cassandra rows can be very large, and that column slicing operations are faster than row slices, hence my question: Is the row size still limited by available memory?

1 Answer

#1 (5 votes)

Yes, row size is still limited by available memory. This is because today's compaction algorithm deserializes the entire row in memory before writing out the compacted SSTable.

A fix for this is currently slated for the 0.7 release. See CASSANDRA-16 for progress.

Another interesting link: CassandraLimitations
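
This limitation is one reason very long series are usually split ("bucketed") across many rows rather than stored in a single one. The sketch below is only a hypothetical Python illustration of that keying idea, not something from the answer or from Cassandra's API: `series_id`, `BUCKET_SECONDS`, `row_key`, and `row_keys_for_range` are made-up names, and the one-day bucket size is an arbitrary assumption.

```python
from datetime import datetime, timezone

# Hypothetical bucketing scheme (not part of Cassandra itself): one
# physical row per series per day, so no single row grows without bound
# and compaction never has to materialize the whole series in memory.
BUCKET_SECONDS = 24 * 60 * 60

def row_key(series_id: str, ts: datetime) -> str:
    """Physical row key for one data point of `series_id` at time `ts` (UTC)."""
    epoch = int(ts.replace(tzinfo=timezone.utc).timestamp())
    bucket = epoch - (epoch % BUCKET_SECONDS)
    return f"{series_id}:{bucket}"

def row_keys_for_range(series_id: str, start: datetime, end: datetime) -> list:
    """Every row key a slice over [start, end] (UTC) has to read."""
    start_epoch = int(start.replace(tzinfo=timezone.utc).timestamp())
    end_epoch = int(end.replace(tzinfo=timezone.utc).timestamp())
    bucket = start_epoch - (start_epoch % BUCKET_SECONDS)
    keys = []
    while bucket <= end_epoch:
        keys.append(f"{series_id}:{bucket}")
        bucket += BUCKET_SECONDS
    return keys

if __name__ == "__main__":
    # One point written at noon UTC on 2010-06-01 lands in that day's row.
    print(row_key("sensor-42", datetime(2010, 6, 1, 12, 30)))
    # A three-day read touches three row keys instead of one giant row.
    print(row_keys_for_range("sensor-42",
                             datetime(2010, 6, 1),
                             datetime(2010, 6, 3)))
```

Within each bucketed row, columns can still be named by timestamp, so the fast column-slice reads mentioned in the question still apply; a range read simply fans out over a handful of row keys.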
