Is there an example of a high-level DataMiner system design for a main site and remote sites?
Hi Wilfred,
There are several possibilities for remote sites.
If there are only a few devices and systems to integrate on the remote sites, you could opt to poll all of them from the main site. The downside is that you have no local control, and you have gaps in your data when connectivity is down.
If the remote sites are bigger, or if you want to avoid those downsides, then it's recommended to install a local DataMiner node. These nodes in the remote sites can be added to a cluster together with the main site, so you end up with one big system. If you have too many remote sites, or if the connectivity is too limited, you could opt to use replication to replicate some or all of the data of the remote sites to the main site.
Many choices for many different use cases... We would need to discuss your requirements in a bit more detail to know what the best fit is. Please feel free to contact us (e.g. sales@skyline.be) to discuss this further.
Regarding IP connectivity, we always need connectivity between the remote sites and the main site. Whether that's established via VPN or in some other way doesn't really matter for DataMiner; we just need to be able to exchange data over IP. If there are firewalls, we need to open a few ports, depending on the chosen method.
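If you want to verify up front that the required ports are reachable from a remote site, a quick TCP check like the sketch below can help. Note that the hostname and port numbers here are only placeholders, not confirmed DataMiner requirements; check with Skyline which ports your chosen method and version actually need.

```python
import socket

# Placeholder endpoints: replace with your main-site address and the ports
# Skyline confirms for your setup (they depend on version and method).
ENDPOINTS = [
    ("dma-mainsite.example.com", 443),   # e.g. HTTPS for web/API traffic
    ("dma-mainsite.example.com", 8004),  # e.g. inter-agent communication
]

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in ENDPOINTS:
        status = "open" if is_reachable(host, port) else "blocked/unreachable"
        print(f"{host}:{port} -> {status}")
```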
Let us know if you have any other questions.
Bert
Thanks for the response, Sir Bert.
We are actually working with the DataMiner team in APAC.
So the infrastructure is: remote site equipment → network switch → DataMiner Node (public IP) → internet → DataMiner Agent (main site).
Is this the correct setup?
Yes, this is definitely a correct setup. Just make sure the connection over the internet is sufficiently secured.
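As a quick sanity check on that security, you can verify that the main-site endpoint presents a valid, trusted TLS certificate. Here's a minimal Python sketch; the hostname is a hypothetical placeholder, and this only checks the TLS layer, nothing DataMiner-specific:

```python
import socket
import ssl

# Placeholder: replace with the public endpoint of your main-site Agent.
HOST, PORT = "dma-mainsite.example.com", 443

context = ssl.create_default_context()  # verifies certificate chain and hostname

with socket.create_connection((HOST, PORT), timeout=5.0) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()
        print("TLS version:", tls.version())
        print("Certificate subject:", dict(x[0] for x in cert["subject"]))
        print("Valid until:", cert["notAfter"])
```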
Hi Bert, when you refer to a DataMiner node, do you mean a gateway server or an Agent?
Which will reside on the remote site and sync its gathered data to the main site?
By a DataMiner node, I just mean a computer or VM with the DataMiner software on it. But you can indeed still choose between two different methods:
(1) You can go for a so-called full DataMiner Agent on each remote site, and also on the main site, and then combine all of them in one big cluster, which we then call a DataMiner System. This behaves like one big system: it synchronizes everything, and when you connect, it gets the data from all the remote sites to show you a full picture of everything.
(2) You could also leave the nodes on the remote sites as more independent nodes; we then often refer to them as DataMiner Probes. A probe is the same thing, just the DataMiner software running on a machine that manages all the devices and systems of the remote site, but it is not included in a cluster with the main site and the other remote sites. Instead, it reports its data back to the main site via element replication. All elements can be replicated to the main site, or only some of them; sometimes we only replicate a "manager" element with a summary of that remote site. This can easily be customized, as the sketch below illustrates.
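To make the difference concrete, here's a small conceptual sketch in plain Python. These classes are not the DataMiner API; they only illustrate what the main site "sees" in each method: everything in a cluster, versus only the replicated subset from a probe.

```python
from dataclasses import dataclass

@dataclass
class Element:
    name: str
    data: dict

class ClusteredAgent:
    """Method (1): part of one DataMiner System; every element is visible."""
    def __init__(self, elements: list[Element]):
        self.elements = elements

    def visible_from_main_site(self) -> list[Element]:
        return self.elements  # the cluster behaves as one big system

class Probe:
    """Method (2): standalone node; only replicated elements reach the main site."""
    def __init__(self, elements: list[Element], replicate: set[str]):
        self.elements = elements
        self.replicate = replicate

    def visible_from_main_site(self) -> list[Element]:
        return [e for e in self.elements if e.name in self.replicate]

site = [Element("encoder-1", {"state": "ok"}), Element("switch-1", {"state": "ok"})]
print(len(ClusteredAgent(site).visible_from_main_site()))        # 2: everything
print(len(Probe(site, {"encoder-1"}).visible_from_main_site()))  # 1: replicated subset
```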
Both methods of course differ in how the remote sites communicate with the main site, and in where the data is stored. It's also important to keep in mind which kind of access is needed locally, and which data you still want to see at the main site when the connection to a remote site is down. In other words, there are some small differences, and depending on the exact details of your setup, one or the other will make more sense. We can always set up a short call to discuss this in more detail. Feel free to reach out to the team to schedule such a call; an expert or I can always join it.
What would the infrastructure look like? VPNs? A network firewall?