Is there a DataMiner driver that can extract system logs from a broadcast application (such as Vizrt or VSM), analyze them, and send real-time notifications filtered on specific keywords like Critical, Error, Warning, and so on, to be posted on a dashboard so that NOC engineers can monitor the system through the system logs themselves?
How heavy would the workload be on the DataMiner system? And, on the other side, what would the impact be on the performance of the device, given the heavy processing or high traffic involved?
Is there an already developed and tested driver based on the same idea?
Hi Kawssar,
Welcome to the community!
Yes, that is perfectly possible. DataMiner can interact with log files and ensure that the proper notifications are available in dashboards for your operators when specific events occur. Note that this can be done in different ways, depending on the details of the log files and what you want to get out of them.
Too many people instantly associate log files with unstructured data that needs to be ingested and indexed, with monitoring then based on queries. That method is supported by DataMiner: large volumes of data can be ingested (from log files or streaming data sources, for example) and stored in the underlying Elastic instance. In fact, there are different methods for that, depending on the volume of data. For some applications this can be 100 records per second, but we have already done implementations that go up to 10,000 records per second.
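As a rough illustration of that ingest/index approach (not how DataMiner feeds its own Elastic instance internally, which you would not code yourself), here is a minimal Python sketch that bulk-indexes parsed log records into an Elasticsearch index. The endpoint, index name, and record fields are assumptions for the example.

```python
import json
import requests

# Hypothetical Elasticsearch endpoint and index name.
ES_URL = "http://localhost:9200"
INDEX = "device-logs"

def ingest(records):
    """Bulk-index parsed log records so dashboards can query them later."""
    # Build the newline-delimited body the Elasticsearch _bulk API expects:
    # one action line followed by one document line per record.
    lines = []
    for rec in records:
        lines.append(json.dumps({"index": {"_index": INDEX}}))
        lines.append(json.dumps(rec))
    body = "\n".join(lines) + "\n"
    resp = requests.post(
        f"{ES_URL}/_bulk",
        data=body,
        headers={"Content-Type": "application/x-ndjson"},
    )
    resp.raise_for_status()

# Example records as they might look after parsing structured log entries.
ingest([
    {"severity": "ERROR", "message": "Renderer dropped frames", "source": "vizrt"},
    {"severity": "WARNING", "message": "Buffer nearly full", "source": "vsm"},
])
```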
But the fact of the matter is that log files are not necessarily unstructured data. Often they are very structured, and if you know exactly what you are interested in (specific messages or keywords), then it can be far more efficient to process the file entries immediately. There is no point in storing huge amounts of log data, and spending all the storage and compute that requires, just to run some basic queries that could have been derived from the log file on the fly.
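To make that on-the-fly approach concrete, here is a minimal sketch in Python (for illustration only; an actual implementation would live in a DataMiner connector). It tails a log file and reacts only to entries containing the severity keywords, so nothing needs to be stored. The log path and the notify() helper are hypothetical.

```python
import time

# Severity keywords we want to react to (assumption: the log format
# includes these words verbatim in each entry).
KEYWORDS = ("CRITICAL", "ERROR", "WARNING")

# Hypothetical log file path; replace with the real application log.
LOG_PATH = "/var/log/vizrt/engine.log"

def notify(line: str) -> None:
    # Placeholder: in a real connector this would set a parameter that an
    # alarm template and dashboard pick up.
    print(f"NOTIFY NOC dashboard: {line}")

def follow(path: str):
    """Yield new lines appended to the file (a basic 'tail -f')."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at end of file: only process new entries
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

for entry in follow(LOG_PATH):
    # Forward only entries matching one of the keywords; everything else
    # is dropped on the fly, so no storage or indexing is involved.
    if any(k in entry.upper() for k in KEYWORDS):
        notify(entry)
```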
In conclusion: it is most definitely possible. There are two methods (ingest/index and continuous on-the-fly processing), and choosing the most efficient approach depends on the details of the type of log file and what you expect from it. Load and impact really depend on the exact application, because this can vary from small, basic log files to very large amounts of data, so we would have to look at the details to assess that.
Reading log files and building structured reports from them is always useful for diagnosing and monitoring production systems. To handle the load of log files coming from the different systems in the infrastructure, and to keep that load off DataMiner, a dedicated system built for big data processing is needed. For example, Splunk and Elasticsearch are two products that are able to process big data and produce reports.
At Sky UK, we use Splunk to ingest the log files from all over the place and then generate the required reports. These reports are then polled into DataMiner via HTTP, and the alarming criteria (alarm templates) are built on top of them.
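Just to illustrate the HTTP polling pattern (this is not the actual driver code), a Splunk search can be polled over its REST API roughly as follows; the host, credentials, index, and query are placeholders.

```python
import requests

# Hypothetical Splunk management endpoint and credentials; the real setup
# differs, this only illustrates the HTTP polling pattern.
SPLUNK_URL = "https://splunk.example.com:8089"
AUTH = ("dataminer_poll", "secret")

# A search filtering on the severity keywords of interest.
QUERY = "search index=broadcast_logs (CRITICAL OR ERROR OR WARNING) | head 100"

resp = requests.post(
    f"{SPLUNK_URL}/services/search/jobs/export",
    auth=AUTH,
    data={"search": QUERY, "output_mode": "json"},
    verify=False,  # assumption: self-signed certificate on the management port
    stream=True,
)
resp.raise_for_status()

# Each line of the export stream is a JSON object; an HTTP driver would
# parse these and fill its tables from the fields.
for line in resp.iter_lines():
    if line:
        print(line.decode())
```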
We have improved a Skyline Splunk Enterprise driver to receive the reports from Splunk and fetch the data into its tables, regardless of the data types.