NDBCluster stores a lot of data in RAM:
https://www.fromdual.com/mysql-cluster-memory-sizing
In the link there is a section that says:
PRIMARY KEY (PK)
Every MySQL Cluster table must have a primary key (PK). If you do NOT create one, MySQL Cluster creates one for you with a size of 8 bytes. Every PK causes a hash index (HI), which has a size of 20 bytes. The HI is stored in index memory, while all other information is stored in data memory.
A PK also creates an ordered index (OI) unless you create it with USING HASH.
Does this mean that every row of the table uses 20 bytes just for the hash index? If so, and assuming that the PK is of type DATETIME (+8 bytes), each row uses 28 bytes just for the index, and that would go directly to RAM.
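To make that assumption explicit, here is a minimal Python sketch of the per-row estimate. It only uses the sizes quoted in the article above (a 20-byte hash index per primary key and an 8-byte DATETIME key value) and ignores ordered-index and other row overheads, which vary with the NDB version and configuration:

```python
# Rough per-row index footprint, using only the sizes quoted in the
# fromdual article; real NDB overheads depend on version and settings.
HASH_INDEX_BYTES = 20    # per-row hash index, kept in IndexMemory
PK_DATETIME_BYTES = 8    # DATETIME primary-key value, kept in DataMemory

def index_ram_mb(rows: int) -> float:
    """RAM (MB) used only by the PK hash index plus key values."""
    return (HASH_INDEX_BYTES + PK_DATETIME_BYTES) * rows / 1024 / 1024
```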
With this said, and using an example I have for ATDome_command_moveAzimuth, where I have 5,136,720 rows
(not considering the non-index data, because it can go to disk):
Data in RAM = (20 + 8) * 5,136,720 / 1024 / 1024 [MB]
Data in RAM ≈ 137 MB (with commandLog it is twice this value)
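A quick check of that arithmetic, under the same 28-bytes-per-row assumption:

```python
# ATDome_command_moveAzimuth index-RAM estimate, 28 bytes per row.
rows = 5_136_720
ram_mb = (20 + 8) * rows / 1024 / 1024
print(f"{ram_mb:.0f} MB")  # ~137 MB; ~274 MB once commandLog is included
```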
Then, if we consider the other tables plus the other TIERs, it seems to me that it is only possible to handle a limited amount of data (if we are careful, maybe just TIER1) before RAM fills up.
Using a concrete example (M1M3), there are 11 telemetry topics that are published at 50 Hz (there are also events at 50 Hz, but I'm only considering telemetry for this exercise). Then:
rows/day for each table = 50 * 60 * 60 * 24 = 4,320,000
Then for 11 topics, 11 * 4,320,000 = 47,520,000 rows, which at 28 bytes per row is roughly 1.27 GB of index RAM per day.
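Worked through the same way for M1M3 telemetry, again assuming 28 bytes of index RAM per row (the actual publish rates and overheads may differ):

```python
# Daily index-RAM growth for M1M3 telemetry, 28-byte-per-row assumption.
rate_hz = 50
topics = 11
rows_per_day = rate_hz * 60 * 60 * 24     # 4,320,000 rows per topic
total_rows = topics * rows_per_day        # 47,520,000 rows
ram_mb = total_rows * 28 / 1024 / 1024    # ≈ 1269 MB ≈ 1.27 GB per day
print(f"{total_rows:,} rows/day, ≈{ram_mb:.0f} MB of index RAM per day")
```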
Re-purposing this task due to some issues we are having with the EFD; these are:
- After a few days of testing, the EFD was giving a "Table full" error, even though the disk was almost empty. This seems to be a configuration issue which will require some research to solve properly.
- Still having some issues with a limited number of attributes.
- Not sure what else, but I would like to write more tests to ensure that this does not keep happening.
These are probably because the EFD needs some configuration optimization to handle what we need.