Hi,
As a user, I want to reset an agent (part of a failover pair) to factory settings without impacting any external database technologies (ElasticSearch, Cassandra clusters) that the agent might be configured to use, so that there is no impact on the operation of the DMS.
Today, the factory reset of an agent can be done with the help of the SLReset tool.
Let’s consider the following example DMS:
- A 1+1 (failover/high availability (HA)) setup on site A and another 1+1 setup on site B, used as a disaster recovery (DR) setup for site A.
- All 4 agents share the same DMAID, and only one site is active at a time (to prevent double-polling of the same devices, since both sites are connected to the same devices).
- Each agent has its own (locally hosted) Cassandra node, configured as a 2-node cluster (failover).
- Each site has a dedicated (externally hosted) ElasticSearch cluster (2 separate clusters in total).
- Each site is connected to both ElasticSearch clusters and writes the same data to both, so the same ES data is available on both sites in case of a disaster.
Now, let’s assume we want to decommission one of the failover nodes in one of the sites, say site B. Both machines in site B are currently offline (to prevent the double-polling explained above).
Can the SLReset tool be safely used to perform this action, without any prior file manipulation (db.xml/dbmaintenancedms.xml, etc.), to decommission the node with zero impact on the data that is being stored in the (ES/Cassandra FO) clusters? And how will the FO pair of the decommissioned node be notified of this forced failover disable/offline decommission, given that both agents are stopped prior to the execution of SLReset on the node that needs to be decommissioned?
Thanks in advance,
Hey Ciprian,
With "the machines are offline" I will assume that you mean that the entire dataminer including SLNet is stopped.
First of all, SLReset is a factory reset tool, not a decommissioning tool. It is indeed part of the process when a failover configuration is deleted, but it is not a direct replacement for decommissioning.
Running SLReset on a stopped node will not notify the other agent in the pair of the changes, effectively leaving it clueless about the whereabouts of its partner. So the safest way to decommission a failover pair with stopped nodes is to run SLReset on both nodes and, if needed, restore a DMA backup on one of the agents.
Your second question, "to decommission the node with 0 impact on the data that is being stored", is at odds with the concept of a factory reset. By default, the tool makes a distinction between 2 types of storage: "local/failover" and clustered.
Clustered databases like ElasticSearch and CassandraCluster contain the data for the entire DMS and thus cannot reasonably be deleted; SLReset will skip cleaning these databases, so that data should not be deleted. (However, given your specific setup and use case, where 1 ES cluster is used for 2 DMA systems, it would be safe to remove the elasticsearch tag from the db.xml before running SLReset.)
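If you prefer to script that edit rather than do it by hand, a minimal sketch could look like the one below. It assumes the default db.xml location (C:\Skyline DataMiner\db.xml) and that the ElasticSearch settings live in a DataBase element whose type attribute contains "elastic"; double-check both against your own file, and make sure DataMiner is stopped before touching it.

```python
# Minimal sketch (not an official Skyline tool): back up db.xml and drop the
# ElasticSearch entry before running SLReset, so the reset cannot touch the
# shared ES clusters. Assumptions to verify on your own system: the default
# install path below, and that the ES settings are a <DataBase> element whose
# "type" attribute contains "elastic". Stop DataMiner before editing the file.
import shutil
import xml.etree.ElementTree as ET

DB_XML = r"C:\Skyline DataMiner\db.xml"  # assumed default location

# Keep a copy so the original configuration can be restored after the reset.
shutil.copy2(DB_XML, DB_XML + ".bak")

tree = ET.parse(DB_XML)
root = tree.getroot()

# Re-register the default namespace (if any) so the file is written back
# without artificial "ns0:" prefixes.
if root.tag.startswith("{"):
    ET.register_namespace("", root.tag[1:].split("}")[0])

# Remove every <DataBase> entry that points to ElasticSearch; matching on the
# local tag name keeps this independent of the namespace declaration.
for child in list(root):
    if child.tag.split("}")[-1] == "DataBase" and "elastic" in child.get("type", "").lower():
        root.remove(child)

tree.write(DB_XML, encoding="utf-8", xml_declaration=True)
```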
Local/failover storage, however, will be deleted: SLReset will remove all the local files for that database, and in the case of Cassandra it will also reset the cassandra.yaml to the defaults. Again, no communication will happen with the other agent, so it is possible that nodetool status still reports 2 nodes, one up and one down. Since [ID_29894], we also make sure the reset node is completely removed from the setup.
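If you are on a version that predates [ID_29894] and the stale entry in nodetool status bothers you, it can be cleaned up on the surviving node with standard Cassandra tooling rather than anything SLReset does. A minimal sketch, assuming nodetool is on the PATH and that you copy the Host ID of the down node from the status output (the value below is only a placeholder):

```python
# Minimal sketch using standard Cassandra tooling (not SLReset): check whether
# the surviving node still lists its reset partner, and remove it by Host ID.
# Assumptions: nodetool is on the PATH of the surviving agent, and the Host ID
# below is a placeholder you replace with the UUID shown in the status output.
import subprocess

# 1. Inspect the ring; the reset partner typically shows up as "DN" (down).
status = subprocess.run(["nodetool", "status"], capture_output=True, text=True)
print(status.stdout)

# 2. Remove the down node by its Host ID (copy it from the output above).
host_id = "<host-id-of-the-down-node>"  # placeholder
subprocess.run(["nodetool", "removenode", host_id], check=True)
```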