Hi, consider the following situation: we have specified the same list of Elastic nodes in db.xml for each DMA in a cluster. However, the Elastic nodes were not yet configured to be part of the same Elastic cluster (i.e. Elastic nodes A, B, C and D are specified in the <DBServer> tag, but A, B, C and D are not aware of each other and are operating individually).
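For illustration, a minimal sketch of the relevant part of such a DB.xml (the IPs are placeholders for nodes A, B, C and D; exact attributes can differ per DataMiner version):

    <DataBase active="true" search="true" type="Elasticsearch">
        <!-- Same comma-separated node list on every DMA in the cluster -->
        <DBServer>10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4</DBServer>
    </DataBase>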
- Having understood how a DMA interfaces with an Elastic cluster, what sort of view of the Elastic database would each DMA form in this situation?
- We have since reconfigured A, B, C and D correctly to form a cluster of 4 nodes. Is a restart of each DMA in the cluster required to rectify this situation? What other corrective measures need to take place to ensure that the DMA cluster and the Elastic cluster are working as intended, given the initial oversight mentioned above?
Bing Herng Chong [SLC] [DevOps Advocate] Selected answer as best 20th July 2021
Hello Bing Herng Chong,
- I am not sure about the exact behavior, nor do I believe it is too important, since we are describing what happens in a faulty setup. However, I believe that DataMiner will connect to one of the IPs and from that point onward keep communicating with that single IP (that single one-node cluster), so each DMA would only see the data that happens to be indexed on the node it connected to. I would even believe this would be the first reachable IP mentioned in DB.xml. This is decided per DMA (not on a DMS level). However, as you stated, this is an oversight and should preferably not occur :).
- So, assuming that the Elastic cluster is now configured correctly, a simple DMA restart should suffice to rectify the situation. I always advise stopping the DMAs when reconfiguring/wiping Elastic, because doing this while they are running could lead to unexpected issues. To double-check whether your Elastic cluster has been configured correctly, you can navigate in a browser of your choice to http://ElasticIp:9200/_cat/nodes and check whether all the IPs are listed.
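For example, from any machine that can reach the nodes (ElasticIp is a placeholder):

    # List the nodes that have joined the cluster; all four IPs should appear.
    # The ?v parameter only adds column headers.
    curl "http://ElasticIp:9200/_cat/nodes?v"

    # Alternatively, check the cluster health and verify that number_of_nodes is 4.
    curl "http://ElasticIp:9200/_cluster/health?pretty"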
I hope this answers your question.
Kind regards,
Thank you, Thomas, for your input.