What is the typical network bandwidth required for:
- Northbound API (Cube connection to the DMS)?
- Intra-communication between the active Agent and its Failover Agent?
- Inter-communication between the active Agents in a cluster?
In some cases, the client and/or DMA are located on different sites, and it would be useful to know how much bandwidth should be allocated for the DMS in the network.
Hi Bernard,
As you will understand, it is very difficult to put hard numbers on something like this, because many factors play into it. Some observations that might help:
- Cube client <> DataMiner System: typically bursty in nature, depending on what users do. If you rely on eventing and only use Cube to monitor operations, you will see that very little traffic is needed. Of course, if you load the alarm history of the last month, edit and update Visual Overview graphics, and do that kind of activity, you will see solid peaks in your traffic. Note that, in my experience, it is not only the bandwidth that matters: the delay (latency) also plays into the user experience.
- DMA node <> managed products: typically a steadier traffic stream, as DataMiner continuously collects data (push or pull) from the operation. But again, this largely depends on the number of managed resources, the types of protocols used, and so on (see the sketch after this list for a rough way to estimate it).
- DMA node <> DMA node: here you will again find the traffic to be more bursty: very low in a normal operational state, with peaks when certain data needs to be synchronized between the DMA nodes (e.g. when you load a new driver into the system), when a DMA is set up to offload data into an external database, when a DMA needs to offload its automatic backup archive, etc.
- Northbound API: again, this very much depends on how it is used, i.e. which API (there are several) and whether it is used to fetch an occasional metric or to pull bigger data volumes around the clock.
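
If you want a first rough ballpark, a back-of-envelope calculation like the sketch below can help. It is only an illustration: the element counts, parameter counts, record sizes, link speed, and the 30% protocol overhead factor are assumptions picked for the example, not DataMiner figures, so replace them with values measured on your own system (e.g. from interface counters or a packet capture).

```python
# Back-of-envelope estimates for two of the traffic types above.
# All numbers in the example are illustrative assumptions, not DataMiner specifications.

def steady_polling_bandwidth_bps(elements: int,
                                 parameters_per_element: int,
                                 bytes_per_parameter: int,
                                 poll_interval_s: float,
                                 protocol_overhead: float = 1.3) -> float:
    """Average bits/s for a DMA that polls its managed products on a fixed cycle."""
    payload_bytes_per_cycle = elements * parameters_per_element * bytes_per_parameter
    bytes_per_second = payload_bytes_per_cycle * protocol_overhead / poll_interval_s
    return bytes_per_second * 8


def burst_transfer_time_s(records: int,
                          bytes_per_record: int,
                          link_bps: float,
                          protocol_overhead: float = 1.3) -> float:
    """Seconds needed for a one-off burst (e.g. loading a month of alarm history)."""
    total_bits = records * bytes_per_record * protocol_overhead * 8
    return total_bits / link_bps


if __name__ == "__main__":
    # Hypothetical DMA: 500 elements, 50 polled parameters each,
    # ~100 bytes on the wire per parameter, polled every 30 s.
    steady = steady_polling_bandwidth_bps(500, 50, 100, 30)
    print(f"Steady polling traffic: ~{steady / 1e6:.2f} Mbit/s")

    # Hypothetical Cube burst: 200,000 alarm records of ~1 kB each over a 10 Mbit/s WAN link.
    burst = burst_transfer_time_s(200_000, 1_000, 10e6)
    print(f"Loading a month of alarm history: ~{burst:.0f} s on a 10 Mbit/s link")
```

The same pattern works for the other traffic types: estimate the payload size, multiply by a protocol overhead factor, and divide by either the interval (for steady traffic) or the available link capacity (for bursts).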
So you really have to look at each specific deployment in more detail to come up with reasonable estimates.