We maintain Automation scripts, correlation rules, memory files, protocol versions, etc. in Git repositories. Once a release has been tested, we install it on a production system. This system currently runs DataMiner (9.6.0.0-9175-CU13) in a Failover configuration with a Cassandra database.
When we install a new set of files, we use the DataMiner interfaces in Cube as much as possible. This is not possible for some file types, e.g. memory files, which we have to copy manually onto the file system of the primary server. After performing a failover, we often find that not all files have been synchronized. In the worst cases, some protocols had the wrong version set as production and a few element connections disappeared; in the best case, only a few memory files were not synced between the servers.
Is there a recommended way to properly install a new set of these configuration files? Or should we export the configuration of the primary server as a backup (without the database) and import it on the Failover DMA?
Some insight into how synchronization works in DataMiner for most of these file types (such as Automation memory files):
The file C:\Skyline DataMiner\files\SyncInfo\{DO_NOT_REMOVE_C0E05277-A7C5-4969-904D-E2E52076400A}.xml contains a list of all the files known to DataMiner, each with a timestamp indicating when it was last changed through DataMiner.
DataMiner uses this file to decide which files to copy over to other agents (or backup agents) in the cluster, as part of the midnight sync or when agents reconnect after a disconnection.
When files are edited through Cube, this file gets updated and requests are sent to the other servers to notify them of the file change.
When files are replaced manually on disk, DataMiner is unaware of the change and will not sync them to the other servers.
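To make the bookkeeping concrete, here is a small sketch of how such a sync-info file could be read. Note that the actual schema of the SyncInfo XML is not shown in this thread, so the `<File>` element with `path` and `lastChanged` attributes below is purely an assumed structure for illustration:

```python
import xml.etree.ElementTree as ET

# Assumed structure for illustration only -- the real SyncInfo schema may differ.
SAMPLE = """<SyncInfo>
  <File path="C:\\Skyline DataMiner\\Documents\\memory.xml" lastChanged="2021-03-01T10:15:00"/>
  <File path="C:\\Skyline DataMiner\\Scripts\\MyScript.xml" lastChanged="2021-03-02T08:00:00"/>
</SyncInfo>"""

def files_known_to_dataminer(xml_text):
    """Return {path: last-changed timestamp} for every file listed."""
    root = ET.fromstring(xml_text)
    return {f.get("path"): f.get("lastChanged") for f in root.findall("File")}

known = files_known_to_dataminer(SAMPLE)
# A file replaced manually on disk is either absent from this map or keeps
# its old timestamp, so the sync logic never selects it for copying.
```

This is exactly why the manual copies get lost on failover: the sync decision is driven by this bookkeeping, not by the actual state of the disk.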
Some options you could use to get your files updated on the different DataMiner Agents/Failover backups:
- Replace the files on all of the DataMiner agents
- Use the force sync option, as already mentioned by Christine. You could also create an Automation script to do this programmatically for a list of files.
Also be aware that while DataMiner is running, some content, such as memory files, may be loaded into memory. If a new file is copied over, DataMiner will keep working with the in-memory version until it is restarted or has another reason to reload the file. It is therefore advisable to restart the DMA after changing files.
Thank you very much for your advice. I found no reference/documentation for doing this in an Automation script, but I did find the reference for a QAction. For testing, I wrote a virtual protocol with one button that syncs all memory files, and my test was positive: the files were properly synced.