During a DataMiner training, I explained the concept of redundancy groups and software redundancy.
There was an expectation/question that the switch should start within e.g. 500 ms of something going wrong.
- Is this behavior common?
- How is it done? E.g. is it achieved by polling parameters at a high frequency, so that DataMiner can react quickly?
There was also a concern that this could put too much load on the device and/or DataMiner. Could this indeed be the case?
Thanks!
The delay between the moment an issue occurs on the equipment/data source side and the moment DataMiner receives and processes that data will indeed determine how fast a software redundancy switch can be triggered.
While DataMiner is capable of quite fast polling - we have built systems that operate at polling intervals of <500 ms - it is not always best practice. As you indicate, the load on the DataMiner system, and especially on the target equipment/data source, is a valid concern. Keep in mind as well that with polling, the worst-case detection delay is roughly one full polling interval plus the request round-trip time: with a 500 ms interval, an issue that occurs just after a poll will only be seen ~500 ms later, as the sketch below illustrates.
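To make that timing concrete, here is a minimal, generic polling sketch in plain Python - not DataMiner protocol code; the get_status() query, the trigger_switch() hook, and the 500 ms interval are all assumptions for illustration. It shows that every iteration costs the device one request, and that a failure can only be noticed at the next poll:

```python
import random
import time

POLL_INTERVAL = 0.5  # 500 ms: every iteration sends one request to the device


def get_status() -> str:
    """Stand-in for a real device query (SNMP get, HTTP call, ...)."""
    return random.choice(["OK", "OK", "OK", "FAILED"])


def trigger_switch() -> None:
    """Stand-in for starting the redundancy switch."""
    print("redundancy switch triggered")


def poll_loop() -> None:
    while True:
        t0 = time.monotonic()
        if get_status() == "FAILED":
            trigger_switch()
            break
        # An issue that occurs right after this poll is only seen on the
        # next one, so worst-case detection delay is roughly
        # POLL_INTERVAL plus the request round-trip time.
        time.sleep(max(0.0, POLL_INTERVAL - (time.monotonic() - t0)))


if __name__ == "__main__":
    poll_loop()
```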
If a switch needs to be executed with a very low reaction time (e.g. <1 s), it is generally advisable to look into having the data pushed to DataMiner rather than polling for it. This could be done through an SNMP trap, a message sent over a TCP socket on which DataMiner listens, or any other supported "push" interface.
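By way of contrast, here is a minimal sketch of the push pattern: a plain TCP listener in generic Python, not DataMiner's actual connection handling; the port number and the "FAILED" message format are assumptions. The point is that reaction time is now bounded only by network latency and processing, with no polling interval in between:

```python
import socket

HOST, PORT = "0.0.0.0", 5050  # assumed listen address/port


def trigger_switch() -> None:
    """Stand-in for starting the redundancy switch."""
    print("redundancy switch triggered")


def listen_for_push() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen()
        conn, addr = srv.accept()       # device connects and pushes its status
        with conn:
            while True:
                data = conn.recv(1024)  # blocks until the device sends data
                if not data:
                    break               # connection closed by the device
                # React the moment the message arrives; no polling interval
                # sits between the failure and its detection.
                if b"FAILED" in data:
                    trigger_switch()
                    break


if __name__ == "__main__":
    listen_for_push()
```

The device (or an intermediate system) only sends a message when something changes, so the steady-state load on both sides is near zero compared with continuous high-frequency polling.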
Note that there will also be a delay - usually rather small - between the moment the data is received and the moment the switch is actually completed. This depends mostly on the load on the DataMiner system, as well as on the speed at which the target equipment/data source can process and execute the switch request.