Hi Dojo,
We are planning to move to a cluster solution (4 failover pairs in different countries) with a local Cassandra database on the servers.
- Does it have any impact if the latency from a DMA to the cluster is <100ms? The docs say <30ms, which is not achievable for us.
- Another solution could be a VM on every DMA to run a cluster node.
- If we use a cluster that is only connected to one DMA, is this possible? Is SLGateway then buffering the data, like for STaaS?
Happy to hear your thoughts.
Hi Stefan,
If I understand this correctly, your current cluster is not yet making use of an indexing database (Elasticsearch / OpenSearch) and is running the local Cassandra database in a setup where each failover pair is connected to its own dedicated 2-node Cassandra cluster.
Summary: 4 DataMiner failover pairs, where each pair has its own local Cassandra database cluster of 2 nodes.
This kind of setup, where each DMA (failover or regular) has its own local database, is considered legacy and will not allow you to make use of future high-value features such as Swarming and Low-Code Apps.
For these features, the DataMiner System (DMS) architecture relies on a shared central database cluster as well as an indexing database cluster, where all DMAs in the DMS share the same local and indexing database. This is automatically the case when opting for STaaS as your storage solution. More detailed info on STaaS can be found here.
In such a future-proof architecture, we can actually distinguish multiple clusters:
- DMS: 1 or more DataMiner nodes grouped together.
- Local database: 3 or more Cassandra nodes grouped together.
- Indexing database: 3 or more OpenSearch nodes grouped together.
The maximum latency requirements differ for each of these clusters.
A DMS cluster is not that sensitive to latency between its nodes. We have several global DMS clusters running where the latency between certain DataMiner nodes of the same cluster is greater than 100 ms.
For the Cassandra and OpenSearch clusters, the latency requirements for communication between nodes of the same cluster are stricter, with OpenSearch having the strictest requirement: a maximum of 30 ms between nodes of the same cluster.
The communication between the DMAs in the DMS and the local database / indexing database cluster does not have a hard maximum latency defined from a functional point of view. We do advise keeping this latency as low as possible, as there will eventually be a performance impact when the latency between a DMA and its database increases. Sub-30 ms latency between the DMA and its local / indexing database should be fine; the general rule of thumb is that the lower the latency, the better the performance on aspects like trend data requests and DMA startup times.
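If you want to verify the actual latency from a DMA to the database nodes, a quick TCP connect-time measurement against the database ports is a reasonable first approximation. Below is a minimal sketch; the hostnames are placeholders for your own nodes, and the assumed ports are the Cassandra native protocol port (9042) and the OpenSearch REST port (9200):

```python
import socket
import statistics
import time

# Placeholder node addresses; replace with your own cluster nodes.
NODES = [
    ("cassandra-node1.example.local", 9042),   # Cassandra native protocol port
    ("opensearch-node1.example.local", 9200),  # OpenSearch REST port
]

def connect_latency_ms(host, port, samples=5, timeout=2.0):
    """Measure TCP connect time to host:port, in milliseconds."""
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection established; we only care about the timing
        results.append((time.perf_counter() - start) * 1000.0)
    return results

for host, port in NODES:
    samples = connect_latency_ms(host, port)
    print(f"{host}:{port} median={statistics.median(samples):.1f} ms "
          f"max={max(samples):.1f} ms")
```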
For the sake of convenience, our general advice would be to choose STaaS as your future storage solution. This will relieve you of the burden of having to maintain and operate the local / indexing database clusters, and will also alleviate any concerns you have about latency in your DataMiner cluster.
In any scenario (local database per DMA, local / indexing database cluster shared between DMAs, or STaaS), each DataMiner node maintains its own connection to the storage, and in the event that a DMA loses this connection, it will start caching locally and remain operational. Only specific functions, like requesting historical alarm reports or trend data, will no longer work at that time. Once the connection recovers, the cached data will be offloaded to the storage and the system will return to full operation.
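Conceptually, this is a write-behind pattern: writes are queued locally while the backing store is unreachable and flushed once it recovers. A minimal sketch of that general pattern (purely illustrative, not DataMiner's actual implementation):

```python
import queue

class WriteBehindStore:
    """Illustrative write-behind cache: queue writes while the
    backing store is down, flush them when it comes back."""

    def __init__(self, backend):
        self.backend = backend        # any object with a write(record) method
        self.pending = queue.Queue()  # local cache of unflushed writes
        self.connected = True

    def write(self, record):
        if self.connected:
            try:
                self.backend.write(record)
                return
            except ConnectionError:
                self.connected = False  # fall through to local caching
        self.pending.put(record)        # node stays operational, data is cached

    def on_reconnect(self):
        """Offload the cached data once the connection recovers."""
        self.connected = True
        while not self.pending.empty():
            self.backend.write(self.pending.get())
```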
Hopefully this clarifies some things.
Let me know if you have any additional questions.


Hi Stefan,
Each DMA in a DMS still maintains its own connection to the local / indexing database. Grouping or tunneling all storage communication from multiple DataMiner nodes through a single DataMiner node in the cluster is currently not possible.
If you plan to run OpenSearch / Cassandra together with DataMiner on the same bare-metal server, it's highly advised to split their operating systems by means of virtual machines, as both OpenSearch and Cassandra are designed in such a way that they can and will claim all memory / CPU / disk resources the OS makes available to them.
So in your case, you would have 8 physical servers across which to distribute a minimum of 14 virtual machines in total:
- 8 VMs for the DataMiner software (Windows Server OS)
- 3 VMs for Cassandra (central local database) (Linux OS)
- 3 VMs for OpenSearch (indexing database) (Linux OS)
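As an illustration, one possible (purely hypothetical) placement that spreads the database VMs over different physical servers, so that a single hardware failure never takes out more than one node of either database cluster, could look like this:
- Servers 1-3: 1 DataMiner VM + 1 Cassandra VM each
- Servers 4-6: 1 DataMiner VM + 1 OpenSearch VM each
- Servers 7-8: 1 DataMiner VM each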

Hi Stefan,
It's not really that we all want to sell STaaS; we just want all of our users to have the best possible solution. STaaS is simply superior on every level compared to self-managed solutions: reliability, security, availability, robustness, etc. It's difficult, if not impossible, to achieve the same with a self-managed solution, because STaaS uses cloud-native storage such as Cosmos DB and Table Storage. We see that our users spend a lot of time on building and maintaining storage, and this doesn't really bring any value. We would rather have our users creating solutions on top of DataMiner, creating low-code apps, automating things, and so on. Those are the things that bring real value. 25 to 30% of our tickets are related to self-managed storage; with STaaS, those disappear as well. It is really a no-brainer for me.
Therefore, we try to provide STaaS as cheaply as possible. We want our users to worry as little as possible about storage and to focus all their time and attention on DataMiner itself. It is also possible to purchase DataMiner credits upfront, so that you can run on STaaS for an extended period of time and avoid monthly OPEX costs.
And about moving to the cloud or away from it… we see a clear move towards the cloud. More and more companies apply a cloud-first strategy, and I'm convinced that this trend will continue and only accelerate. It's becoming more and more difficult to compete with the cloud and its cloud-native solutions; at some point, an on-prem solution will simply not be able to compete. And in terms of STaaS, I'm actually convinced we are already at that point…
Hi Jeroen,
Thanks for your feedback! Your summary is correct, but we already have an OpenSearch node running.
Is it wise to pack it together with Cassandra in the cluster, or to run separate VMs for Cassandra and OpenSearch?
I know you all want to sell STaaS or DaaS, but these are OPEX costs and we will not go for it.
There are also moves in the industry away from the cloud because of the costs.
The cluster itself will have less than 5 ms latency once we have it established.
I'm still not sure about my question: can we configure ONE DMA for the database connection, like we do for the cloud connection?
Or is it better to let all DMAs connect to the cluster?