What's the recommended cluster.max_shards_per_node for an OpenSearch/ElasticSearch cluster when using it with DataMiner?
I'm receiving the following error:
Validation Failed: 1: this action would add [4] total shards, but this cluster currently has [1000]/[1000] maximum shards open;
On elastic.co I found the following blog post: https://www.elastic.co/blog/how-many-shards-should-i-have-in-my-elasticsearch-cluster
There they mention the following:
The number of shards you can hold on a node will be proportional to the amount of heap you have available, but there is no fixed limit enforced by Elasticsearch. A good rule-of-thumb is to ensure you keep the number of shards per node below 20 per GB heap it has configured. A node with a 30GB heap should therefore have a maximum of 600 shards, but the further below this limit you can keep it the better. This will generally help the cluster stay in good health.
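If you want to see how your own nodes stack up against that rule of thumb, something along these lines can help. It is only a rough sketch: it assumes the cluster is reachable at http://localhost:9200 without authentication, so adjust the URL and add credentials to match your setup.

```python
import requests

BASE = "http://localhost:9200"  # assumed local cluster, no authentication

# Configured JVM heap per node.
jvm = requests.get(f"{BASE}/_nodes/stats/jvm").json()

# Shards currently allocated, counted per node.
shards = requests.get(f"{BASE}/_cat/shards", params={"format": "json"}).json()
per_node = {}
for shard in shards:
    node = shard.get("node") or "UNASSIGNED"
    per_node[node] = per_node.get(node, 0) + 1

for node_id, info in jvm["nodes"].items():
    name = info["name"]
    heap_gb = info["jvm"]["mem"]["heap_max_in_bytes"] / 1024 ** 3
    ceiling = int(heap_gb * 20)  # rule of thumb: stay below 20 shards per GB heap
    print(f"{name}: {per_node.get(name, 0)} shards, rule-of-thumb ceiling ~{ceiling}")
```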
Hope this helps.
Kind regards,
Thanks for your answer, Stacey. In the meantime I was able to gather some extra info. The default is 1000 shards per node, as recommended by ElasticSearch. You can indeed increase that limit, but then you need to keep an eye on memory consumption. It is probably safer to add extra nodes rather than to increase this max_shards_per_node setting.
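For reference, the setting is managed through the cluster settings API. A minimal sketch, again assuming a cluster at http://localhost:9200 without authentication; the value 1500 in the commented-out update is purely illustrative:

```python
import requests

BASE = "http://localhost:9200"  # assumed local cluster, no authentication

# Read the effective limit (the default of 1000 applies if nothing was set explicitly).
settings = requests.get(
    f"{BASE}/_cluster/settings",
    params={"include_defaults": "true", "flat_settings": "true"},
).json()
limit = (
    settings["transient"].get("cluster.max_shards_per_node")
    or settings["persistent"].get("cluster.max_shards_per_node")
    or settings["defaults"].get("cluster.max_shards_per_node")
)
print("cluster.max_shards_per_node =", limit)

# Only if you deliberately choose to raise the limit instead of adding nodes
# (1500 is just an illustrative value):
# requests.put(
#     f"{BASE}/_cluster/settings",
#     json={"persistent": {"cluster.max_shards_per_node": 1500}},
# )
```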
The number of shards needed for a DataMiner system depends on what data you have (e.g. the number of DOM modules/definitions, how long you keep alarms, …). I reached the maximum because of unnecessary ‘suggest’ indices that were created for DOM, an issue that has been fixed as of 10.3.9. I cleaned them up, which drastically lowered my number of shards.
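To see where your shards are going, listing the indices with their shard counts is a good starting point. A rough sketch under the same assumptions as above; the delete call is commented out and uses a purely illustrative index name, since the actual suggest index names depend on your setup:

```python
import requests

BASE = "http://localhost:9200"  # assumed local cluster, no authentication

# List every index with its total shard count (primaries x copies),
# so the biggest contributors to the shard total stand out.
indices = requests.get(f"{BASE}/_cat/indices", params={"format": "json"}).json()
for idx in sorted(indices, key=lambda i: i["index"]):
    copies = 1 + int(idx["rep"])
    total = int(idx["pri"]) * copies
    print(f"{idx['index']}: {total} shards ({idx['pri']} primaries x {copies} copies)")

# Deleting an index is irreversible, so double-check the name first.
# The index name below is purely illustrative.
# requests.delete(f"{BASE}/example-unused-suggest-index")
```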
Also note that the count checked against the limit is the sum of the unassigned shards and the shards assigned to a node.
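Both numbers are visible in a single call to _cluster/health, for example (again assuming a local unauthenticated cluster and the default limit of 1000):

```python
import requests

BASE = "http://localhost:9200"  # assumed local cluster, no authentication

health = requests.get(f"{BASE}/_cluster/health").json()
open_shards = health["active_shards"] + health["unassigned_shards"]
data_nodes = health["number_of_data_nodes"]

# The limit applies per data node, across the whole cluster.
limit_per_node = 1000  # replace with your configured cluster.max_shards_per_node
print(f"{open_shards} shards counted against a budget of "
      f"{data_nodes * limit_per_node} ({data_nodes} data nodes x {limit_per_node})")
```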