Have you noticed that DataMiner 10.0.11 was released earlier this month? In this blog, you’ll find an overview of why you should update your feature release clusters to this latest DataMiner version.
Can I grab your attention by talking about security in your DataMiner System? I’ll try! You’ve probably already installed an Elasticsearch cluster in your DMS to leverage nice features like enhanced alarm searching, Jobs, SRM, etc. In this release, we secure that setup. Why now, instead of from day one? Well, Elastic only recently made the formerly paid security features (X-Pack) available in the standard package. As of this DataMiner release, we support an authenticated connection to the Elasticsearch cluster and enable TLS for inter-node communication by default. Since this is part of the Elasticsearch installation, a few manual actions are needed to configure this on previously installed Elasticsearch clusters. Another security improvement concerns the DataMiner credential library, which keeps track of the passwords for the products managed by DataMiner and gives you one place to manage them. Previously, there was a caveat where “raw” SNMP queries in a driver could bypass the credential library; that loophole is now closed.
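Curious what that TLS setup involves on the Elasticsearch side? As a rough sketch (the exact settings depend on your Elasticsearch version, so treat these standard X-Pack settings as illustrative rather than as our installation guide), the security-related options live in `elasticsearch.yml`:

```yaml
# elasticsearch.yml -- illustrative security settings for an Elasticsearch
# 7.x cluster; your certificate paths and setup may differ
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
```

With security enabled, DataMiner then connects to the cluster with the configured credentials instead of anonymously.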
If security is not your preferred topic, how about some nice Dashboard enhancements? Let me cherry-pick two must-haves for you. In DataMiner 10.0.11, there’s a major extension of the spectrum capabilities in DataMiner Dashboards: you can now link buffers to your spectrum component and show the threshold lines linked to the severity of the spectrum parameter. A second major improvement is the availability of dynamic units in your state components. With this, you no longer need to count digits: your dashboard automatically picks the most readable unit. Note that this feature will appear in more places in DataMiner in upcoming releases. By the way, if DataMiner Dashboards are still new to you, note that we have a great new training module, with more topics coming soon.
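To make that “most readable unit” idea concrete, here is a tiny Python sketch of what dynamic unit scaling boils down to (an illustration only, not DataMiner’s actual implementation):

```python
def best_unit(value_bps: float) -> str:
    """Pick the most readable unit for a bitrate, so nobody has to count
    digits. Illustrative sketch only -- not DataMiner's implementation."""
    units = ["bps", "kbps", "Mbps", "Gbps", "Tbps"]
    scaled, idx = value_bps, 0
    # keep dividing by 1000 until the number is comfortably small
    while scaled >= 1000 and idx < len(units) - 1:
        scaled /= 1000.0
        idx += 1
    return f"{scaled:.2f} {units[idx]}"
```

So instead of staring at `1500000 bps`, the viewer simply sees `1.50 Mbps`.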
What have we changed in DataMiner Cube? Well, there is now an option to assign a visual to one specific element via the hamburger menu of the element card. This allows you to override the protocol-specific visual that applies to all elements of that type.
As part of our Augmented Operation innovation track, several exciting new AI features have also been integrated into DataMiner Cube. In DataMiner 10.0.11, we introduce Proactive Cap Detection. This is a proactive warning capability that continuously monitors the metrics for which you have trending enabled. Whenever it detects that the forecasted trend of a metric will cause a threshold breach in the foreseeable future, it notifies you by means of a Suggestion Event (available in the Suggestion tab of the Alarm Console in Cube).
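If you are wondering what “forecasted trend causing a threshold breach” means in practice, here is a deliberately simplified Python sketch: fit a straight line through recent samples and check whether the extrapolation crosses the threshold within a given horizon. DataMiner’s actual forecasting is far more sophisticated; this only illustrates the idea:

```python
def forecast_breach(samples, threshold, horizon):
    """Fit a least-squares line through (time, value) samples and report
    whether the extrapolated trend crosses `threshold` within `horizon`
    time units after the last sample. Minimal illustration of proactive
    trend-based warning, not the actual DataMiner algorithm."""
    n = len(samples)
    ts = [t for t, _ in samples]
    vs = [v for _, v in samples]
    t_mean = sum(ts) / n
    v_mean = sum(vs) / n
    slope = (sum((t - t_mean) * (v - v_mean) for t, v in zip(ts, vs))
             / sum((t - t_mean) ** 2 for t in ts))
    intercept = v_mean - slope * t_mean
    if slope <= 0:
        return False  # flat or decreasing trend never reaches a rising threshold
    t_breach = (threshold - intercept) / slope
    return t_breach <= max(ts) + horizon
```

With a metric climbing by 10 units per interval from a value of 10, a threshold of 100 is forecast to be breached at t = 9, so a 10-interval horizon raises a warning while a 5-interval horizon does not.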
Also released, but disabled by default, is a first version of Incident Tracking. With this AI-powered feature, DataMiner intelligently groups alarms in the Alarm Console. When a failure happens, you typically get several alarms that are all related to the same incident. DataMiner now combines the alarms related to that same failure into one incident or group alarm, which helps your operators on a day-to-day basis to reduce the time needed to figure out the root cause of an incident. In the background, incident tracking makes use of, among other things, the Focus Alarm data that we released earlier this year. Activating IDP (our standard app for Infrastructure Discovery and Provisioning) on your system, for example, also gives DataMiner more contextual information to make a correct decision when grouping alarms.
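To give you a feel for the concept, here is a toy Python sketch that groups alarms whose source elements share a common upstream element in a topology map. The real Incident Tracking is AI-powered and uses much richer context (focus alarms, timing, IDP data, etc.); this only captures the intuition of “many alarms, one failure”:

```python
from collections import defaultdict

def group_alarms(alarms, topology):
    """Group alarms whose source elements lead to the same upstream element.
    `alarms` is a list of (alarm_id, element) pairs; `topology` maps an
    element to its upstream element. Toy illustration of incident grouping,
    not DataMiner's actual AI-based algorithm."""
    incidents = defaultdict(list)
    for alarm_id, element in alarms:
        # walk upstream links to a root element; alarms sharing a root
        # are treated as one incident
        root = element
        while root in topology:
            root = topology[root]
        incidents[root].append(alarm_id)
    return dict(incidents)
```

For example, alarms on two encoders fed by a failing switch would end up in a single incident rooted at that switch, instead of three separate entries in the Alarm Console.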
That’s it? No, there is way more to read in the release notes, but allow me to quickly highlight one more thing: a new data offload capability introduced in this release. DataMiner already supports offloading to central databases, but now it also allows offloads to plain files. Not fancy, you say? Not as such, but it allows us to incorporate more third-party data offload targets through drivers that use these files as input. So if you prefer to offload data from DataMiner to on-premises or cloud data storage, or to a data lake solution that is not “natively” supported by DataMiner, it is now simply a matter of using this new capability, plus a fairly simple driver to orchestrate the data ingest into your third-party data store.
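What could such a driver look like? As a rough, hypothetical sketch (the column names below are an assumption for illustration, not the actual offload file layout; check your own offload configuration), it could parse the offloaded file and turn each row into a record ready for ingestion into your data store:

```python
import csv
import io

def parse_offload_file(text):
    """Parse a CSV-style parameter offload file and yield JSON-ready records
    for a third-party data store. The column names used here are assumptions
    for illustration, not the actual DataMiner offload file layout."""
    reader = csv.DictReader(io.StringIO(text))
    for row in reader:
        yield {
            "element": row["element"],
            "parameter": row["parameter"],
            "value": float(row["value"]),
            "timestamp": row["timestamp"],
        }

# hypothetical sample offload content
sample = """element,parameter,value,timestamp
Encoder 1,Bitrate,12.5,2020-10-01T10:00:00
Encoder 1,Bitrate,13.1,2020-10-01T10:05:00
"""
records = list(parse_offload_file(sample))
```

From there, the driver would push each record to whatever API your storage or data lake exposes: the file format stays the same, only the last mile changes per target.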
Oh yeah, and the Jobs module now has history tracking too ;-).
See you around mid-November for DataMiner 10.0.12. Our Create squads have a lot of new capabilities coming up.
Cheers, and stay safe!