Hi,
We need to validate the contents of the backups generated by two "big" DataMiner systems (one with ~40 DMAs and another with ~100, an EPM environment). So far, the sysadmins have taken random backup files, restored them on a blank DMA, manually checked that everything is there, and moved on to the next one. That is a time-consuming task for environments of this size.
For now, we want to know whether the backup contents are what is expected (all templates, connectors, elements, Correlation rules, Automation scripts, etc.) without having to check that manually. The restore process will still be executed occasionally, to avoid the surprise of a corrupted backup when you need it most.
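As a rough starting point for that kind of automated content check, a small script could compare a backup archive's entries against a checklist of expected categories. This is only a sketch: it assumes the backup is a zip archive, and the folder names below are placeholders that would need to be adjusted to the actual layout your DataMiner backups use.

```python
import zipfile

# Placeholder category folders we expect inside a backup archive;
# replace these with the real paths found in your backup files.
EXPECTED_CATEGORIES = [
    "Protocols",    # connectors
    "Templates",    # alarm/trend templates
    "Elements",     # element configuration
    "Scripts",      # Automation scripts
    "Correlation",  # Correlation rules
]

def missing_categories(backup_path):
    """Return the expected categories with no entry in the backup zip."""
    with zipfile.ZipFile(backup_path) as zf:
        names = zf.namelist()
    return [
        cat for cat in EXPECTED_CATEGORIES
        if not any(n.startswith(cat + "/") or ("/" + cat + "/") in n
                   for n in names)
    ]
```

Running `missing_categories` over each nightly backup and alerting when the list is non-empty would catch an incomplete backup without restoring it, though it only verifies presence, not that the individual files are intact.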
Has anyone had the same request before? Is there any automatic procedure already in place to do so? If so, can you share it?
Hi Arturo,
Do you still need an answer to this question? If yes, could you provide some more information about the database architecture as requested by Miguel in the comment above? If no, could you select this answer to indicate that no further follow-up is needed for this question?
That is a very interesting question. Given the size of each DMS, it is worth taking the time to reflect on the nature of the data that needs to be backed up to ensure the process is as efficient as possible.
In addition to the configuration files you mentioned (elements, connectors, templates, etc), there is also the database information. Could you share more details about the database architecture used in these large clusters?
There is the configuration that is unique to each DMA (e.g. elements), and there is the configuration that applies to the entire cluster (e.g. connectors, templates, scripts).
The naïve approach is to take full backups of all the DMAs in the cluster, but then some of the data will be duplicated across the backups, and you might run into consistency problems if the copies diverge.
It would be better to design a backup strategy that meets the unique requirements of each type of data.
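For example, one way to avoid duplicating the shared data is to designate a single agent as the source for the cluster-wide configuration, while every agent backs up its own local data. The item names and the single-source choice below are illustrative assumptions to show the idea, not a prescribed DataMiner procedure:

```python
# Hypothetical split of backup items by scope. Cluster-wide items are
# synced across agents, so backing them up once avoids duplicates and
# the risk of inconsistent copies; per-DMA items must come from each agent.
CLUSTER_WIDE = {"connectors", "templates", "automation_scripts"}
PER_DMA = {"elements", "local_settings"}

def backup_plan(agents):
    """Map each agent to the item types it should back up, taking the
    cluster-wide data from the first agent only."""
    plan = {}
    for i, agent in enumerate(agents):
        items = set(PER_DMA)
        if i == 0:  # one designated agent carries the shared config
            items |= CLUSTER_WIDE
        plan[agent] = items
    return plan
```

The per-scope split also lets you schedule the two kinds of backups differently, e.g. shared configuration daily and the larger per-agent data less frequently, depending on how often each changes.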