I could see the prediction on the trend graph. Just curious to understand the logic behind it.
With forecasting, the main goal is to give users a quick overview of future behavior and to predict alarms based on that with our proactive alarm detection engine.
Forecasting does not use a "one model fits all" approach; instead, it generates a model based on the behavior of each parameter. This uses a combination of our own "Fortress" model, which dissects the data into a level, a trend, and a periodic component, and a model based on SARIMA, which by itself incorporates four different prediction models.
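To make the decomposition idea concrete, here is a minimal sketch of splitting a series into a level/trend, a periodic component, and a residual, using classical additive decomposition. This is only an illustration: the "Fortress" model itself is proprietary, and the function name and moving-average approach below are my own simplifications, not DataMiner's actual implementation.

```python
import numpy as np

def decompose(series, period):
    """Toy additive decomposition: series = trend + periodic + residual."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    # Level/trend: moving average over one full period smooths the cycle away.
    kernel = np.ones(period) / period
    trend = np.convolve(series, kernel, mode="same")
    # Periodic component: average the detrended values at each phase of the cycle.
    detrended = series - trend
    seasonal = np.array([detrended[i::period].mean() for i in range(period)])
    periodic = np.tile(seasonal, n // period + 1)[:n]
    # Whatever is left over is the irregular (residual) part.
    residual = series - trend - periodic
    return trend, periodic, residual
```

On a daily-periodic metric sampled hourly you would call `decompose(values, period=24)` and forecast each component separately, which is the general idea behind this family of models.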
DataMiner is able to adapt these models to a greater extent than similar approaches you might find in academic papers, however. Periodic behavior is automatically extracted via autocorrelation plots, transformation to a stationary series is automatically done via both linear and nonlinear transformations, and even the model structure itself is generated in a fully automated way based on parameter behavior. The training also analyses individual values and down-weights those that are deemed not useful to incorporate fully in the forecasting model.
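The autocorrelation step can be sketched as follows. This is a simplified, generic version of periodicity extraction (the function and heuristics are my own, not DataMiner's): compute the sample autocorrelation, skip the short lags where it is still decaying from lag zero, and take the strongest remaining peak as the dominant period.

```python
import numpy as np

def detect_period(series, max_lag=None):
    """Estimate the dominant period of a series from its autocorrelation."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    max_lag = max_lag or len(x) // 2
    # Sample autocorrelation at lags 1..max_lag.
    acf = np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x)
                    for k in range(1, max_lag + 1)])
    # Skip to the first negative value so the decay from lag 0 doesn't win,
    # then take the strongest peak after that point as the dominant period.
    below = np.where(acf < 0)[0]
    start = below[0] if len(below) else 0
    return int(start + np.argmax(acf[start:]) + 1)
```

On an hourly metric with a daily cycle, this kind of estimator recovers a period of 24 samples, which can then drive the seasonal part of a SARIMA-style model.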
Because much of the forecasting model generation is data-driven, a typical DataMiner system has hundreds of different models, each tailored to specific parameter subsets. The models are also able to adapt to concept drift: if the behavior of a parameter changes, the model for that parameter will rebuild itself.
Besides that, we are also aware that certain impulses might be important to one parameter but noise to another. Therefore, DataMiner merges information from wavelets at different resolutions to improve its accuracy, which plays a crucial role in other proactive features such as alarm forecasting.
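As a rough picture of what "different resolutions" means, here is a toy Haar-style multi-resolution decomposition: each level halves the resolution, splitting the signal into a coarser approximation and the detail lost at that level. How DataMiner actually merges these levels is proprietary; this is only a stand-in to show the structure.

```python
import numpy as np

def haar_levels(series, levels=3):
    """Haar-style pyramid: each level yields (coarser approximation, detail)."""
    out = []
    x = np.asarray(series, dtype=float)
    for _ in range(levels):
        x = x[: len(x) // 2 * 2]          # drop a trailing odd sample
        if len(x) < 2:
            break
        approx = (x[0::2] + x[1::2]) / 2  # coarser view: half the resolution
        detail = (x[0::2] - x[1::2]) / 2  # what the coarser view loses
        out.append((approx, detail))
        x = approx
    return out
```

A short impulse shows up only in the fine-resolution detail coefficients, while a slow drift survives into the coarse approximations, which is why combining levels lets a model treat the same impulse as signal for one parameter and noise for another.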
Lastly, as there is always a level of uncertainty in parameter behavior that is impossible to model mathematically, DataMiner also provides confidence bounds, which show how the data could fluctuate at different degrees of certainty. These confidence bounds are calculated from statistical properties of the parameter behavior.
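A standard way to build such bounds is from the spread of past forecast errors; the sketch below shows that textbook construction. The exact statistics DataMiner uses are not public, so treat this as a generic illustration.

```python
import numpy as np

def confidence_bounds(forecast, residuals, z=1.96):
    """Symmetric bounds around a point forecast, sized by past error spread."""
    forecast = np.asarray(forecast, dtype=float)
    sigma = np.std(residuals)  # spread of past one-step forecast errors
    # z = 1.96 corresponds to roughly 95% coverage under a normal-error
    # assumption; a smaller z gives a tighter band with less certainty.
    return forecast - z * sigma, forecast + z * sigma
```

Plotting several bands (e.g. z = 1 and z = 1.96) around the same forecast is what produces the nested "degrees of certainty" shading you see on the trend graph.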
Hi Jeyaram,
The 'logic' is difficult to explain, as this is quite sophisticated, and it can even differ depending on the behavior of the metric itself. If it were a moving average, it would be easy for me to answer your question, but it is a whole lot more than that (read: the crew developing this has tried to explain it to me, and they lost me by the end of their second sentence 😉 ).
And note that we keep refining this capability; we aim to make DataMiner better and more intelligent with every release. For forecasting, the most recent release now also takes into account the time frame you are looking at, because forecasting a metric for the next hour is different from forecasting it for the next day, week, or month.
And that's also the case for all of our other Augmented Operation capabilities, such as proactive alarming, change event and anomaly detection, automatic tagging, incident identification, etc.
By the way, if you could share some screen captures of nice forecasts with us, that would be much appreciated. You can add them to the questions here. Our analytics team would love to see them and learn from what happens in the field.
Cheers!
Hi Jeyaram, there's a similar question that was already asked here. The answers might be of interest to you.
Sounds complex and intriguing. Thanks for this insight.