DataMiner Dojo
Historical / Long Term Data Storage recommendations

Solved · 1.63K views · 9th October 2020
Tags: central database, data archiving, database

Jamie Stutz [SLC] [DevOps Member] · 1.18K · 9th October 2020 · 2 Comments

What are some recommendations / best practices for long term data storage while still maintaining access to the data in Cube? Currently we have elements that generate large volumes of data each day. To keep the element performant, we’re archiving data to CSV stored in the Documents folder. This works nicely in keeping the element from getting overloaded, but makes the archived data very difficult to analyze as it is all “chunked” up. It seems to me there has to be a better way!
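For context, the CSV offload described above might look something like the following minimal sketch. The archive location and the column names (`timestamp`, `asset_guid`, `result`) are assumptions for illustration, not the element's actual format:

```python
import csv
import datetime
import pathlib

def archive_rows(rows, archive_dir="Documents/archive"):
    """Append result rows to a per-day CSV chunk, creating it (with a header) if needed."""
    path = pathlib.Path(archive_dir)
    path.mkdir(parents=True, exist_ok=True)
    chunk = path / f"results-{datetime.date.today():%Y%m%d}.csv"
    new_file = not chunk.exists()
    with chunk.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["timestamp", "asset_guid", "result"])
        if new_file:
            writer.writeheader()
        writer.writerows(rows)
    return chunk
```

The "chunked up" problem follows directly from this layout: every day produces a new file, so any cross-day analysis first has to stitch the chunks back together.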

The end goal is to be able to continue to offload data to keep the element performing nicely, but at the same time improve the visibility into historical data. Here are some of the things we’re hoping to achieve:

  • Elements can “archive” historical data as needed to a DBMS. This could be the standard Cassandra Node (using a different DB than the standard DM DB), Elasticsearch or some other DBMS.
  • The historical data should be searchable / accessible from Cube for reference. Not sure exactly how this would work, but it seems to me it would need to be done in a way that bypasses the standard DM Data Layer.
  • Historical Data should not be trended or evaluated for alarming.

Perhaps the Central Database feature could be used for this? If so, what I’m unsure of is how we’d get the information back into Cube for searchability.

I know this is probably not a simple topic to address, but one that’s been on our minds for a while. Wanted to see what people’s thoughts are. Thanks in advance!

Jamie Stutz [SLC] [DevOps Member] Selected answer as best 9th October 2020
Michiel Vanthuyne [SLC] [DevOps Enabler] commented 9th October 2020

Hi Jamie,

Just to make the requirement more clear: what kind of data are we talking about? Is it the trend data of an element you want to store with high detail for a long time, or do you need log type data where you store a timestamp with a number of values, or maybe something else?

Jamie Stutz [SLC] [DevOps Member] commented 9th October 2020

Hi Michiel,
The immediate use would be storing the test results for VOD testing. We’re generating something like 4,000–5,000 tests per day, which is too much data to store long term in DM. That said, it would be nice to be able to see a full history of testing for a given asset without having to parse through the CSVs. In this case I could see that we’d want to retrieve from the archives a list of test results for a given Asset GUID. Hope this helps!
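With the current CSV-chunk approach, the lookup described here (all test results for one Asset GUID) means scanning every chunk; a sketch of what that scripting looks like today, again with assumed column names:

```python
import csv
import glob

def results_for_asset(asset_guid, archive_glob="Documents/archive/results-*.csv"):
    """Collect all archived test results for one asset GUID across daily CSV chunks."""
    hits = []
    for chunk in sorted(glob.glob(archive_glob)):
        with open(chunk, newline="") as f:
            for row in csv.DictReader(f):
                if row.get("asset_guid") == asset_guid:
                    hits.append(row)
    return hits
```

This is exactly the full-scan cost that an indexed store (the logger-table approach suggested in the answer below the question) avoids.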

1 Answer

Brent [SLC] · 1.57K · Posted 9th October 2020 · 2 Comments

Hi Jamie,

A similar case has been done before, utilizing the logger table (LoggerTable) functionality and an Elasticsearch database. The protocol would collect the data and push it straight to the logger table when available, using a DirectConnection.

The data would then be available:

  • via a query UI in the element card (which supports ad hoc queries and searches);
  • in Dashboards (the logger table can serve as a data source for dashboards).

Alarming and trending would not be possible on the logger table itself. If these are needed, the protocol can still do on-the-fly calculations on the incoming data; the results can then be pushed as a regular parameter in DataMiner, which can be monitored and trended.
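The on-the-fly calculation mentioned here can be as simple as reducing each incoming batch of raw results to a single trendable number before pushing it to a regular parameter. A hypothetical pass-rate example (the `result`/`PASS` field values are assumptions):

```python
def pass_rate(results):
    """Reduce a batch of raw test results to one trendable metric:
    the percentage of tests that passed."""
    if not results:
        return 0.0
    passed = sum(1 for r in results if r.get("result") == "PASS")
    return round(100.0 * passed / len(results), 1)
```

The raw records still go to the logger table for querying; only the aggregate lands in a standard parameter where alarm templates and trending apply.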

The Generic sFlow manager is a good example of this functionality: it handles a large number of NetFlow packets for long-term storage and querying.

The Central Database feature would not be suitable for this: the intended purpose of that database is to offload data for external usage, and the data cannot be accessed from within DataMiner.
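For a sense of what the ad hoc querying against an Elasticsearch-backed logger table boils down to: the lookup from the question (all results for one asset GUID, optionally since some date) corresponds to a standard term/range filter. A sketch that only builds the query body (index layout and the `asset_guid`/`timestamp` field names are assumptions):

```python
def asset_history_query(asset_guid, since=None):
    """Build an Elasticsearch query body matching one asset's test results,
    optionally limited to records newer than `since` (ISO timestamp),
    newest first."""
    filters = [{"term": {"asset_guid": asset_guid}}]
    if since:
        filters.append({"range": {"timestamp": {"gte": since}}})
    return {
        "query": {"bool": {"filter": filters}},
        "sort": [{"timestamp": {"order": "desc"}}],
    }
```

In practice the element's query UI and the Dashboards data source issue equivalent filters for you; the sketch just shows why this lookup is cheap once the data is indexed rather than chunked into CSVs.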

Michiel Vanthuyne [SLC] [DevOps Enabler] Posted new comment 12th October 2020
Jamie Stutz [SLC] [DevOps Member] commented 9th October 2020

Thanks Brent! We’ll give that a look and see if we can make it work. Sounds very promising!

Michiel Vanthuyne [SLC] [DevOps Enabler] commented 12th October 2020

To make the search possibilities more user-friendly, you could add a dashboard using this data to a Visio page, making it accessible directly in Cube.
