We maintain automation scripts, correlation rules, memory files, protocol versions, etc. in Git repositories. When a release has been tested, we install it on a production system. This system currently runs DataMiner (9.6.0.0-9175-CU13) in a Failover configuration with a Cassandra database.
When we install a new set of files, we try to use the DataMiner interfaces in Cube as much as possible. This is not possible for some files, e.g. memory files, which we have to copy manually onto the file system of the prime server. After performing a failover, we often find that not all files are synchronized. In the worst cases, some protocols had the wrong version set as production and a few element connections disappeared. In the best case, only a few memory files were not synced between the servers.
Is there a recommended way to properly install a new set of these configuration files? Or should we export the configuration of the prime server as a backup without the database and import it on the Failover DMA?
Some insight on how synchronization in DataMiner works for most of the file types (such as Automation memory files):
The file C:\Skyline DataMiner\files\SyncInfo\{DO_NOT_REMOVE_C0E05277-A7C5-4969-904D-E2E52076400A}.xml contains a list of all the files known to DataMiner, each with a timestamp indicating when it was last changed through DataMiner.
DataMiner uses this file to decide which files to copy over to other agents (or backup agents) in the cluster, as part of the midnight sync or when agents reconnect after a disconnection.
When files are edited through Cube, this file is updated and requests are sent to the other servers to notify them of the file change.
When files are manually replaced on disk, DataMiner is unaware of the change and will not sync them to the other servers.
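As an illustration of that last point, here is a minimal sketch (not from the thread) that compares the memory files on the two Failover agents over their administrative shares, to spot files that were replaced on disk but never picked up by the sync. The host names DMA-PRIME/DMA-BACKUP and the use of the C$ share are assumptions; adjust them to your own setup.

```csharp
// Sketch: detect memory files that differ between the prime and backup agent.
// \\DMA-PRIME and \\DMA-BACKUP are assumed host names; the paths assume access
// to the administrative C$ share on both machines.
using System;
using System.IO;

class SyncDriftCheck
{
    static void Main()
    {
        const string primeDir  = @"\\DMA-PRIME\C$\Skyline DataMiner\Scripts\Memory";
        const string backupDir = @"\\DMA-BACKUP\C$\Skyline DataMiner\Scripts\Memory";

        foreach (string primeFile in Directory.GetFiles(primeDir, "*.xml"))
        {
            string name       = Path.GetFileName(primeFile);
            string backupFile = Path.Combine(backupDir, name);

            if (!File.Exists(backupFile))
            {
                Console.WriteLine("MISSING on backup: " + name);
            }
            else if (File.GetLastWriteTimeUtc(primeFile) != File.GetLastWriteTimeUtc(backupFile))
            {
                // Different timestamps are only an indication; compare hashes if you need certainty.
                Console.WriteLine("OUT OF SYNC: " + name);
            }
        }
    }
}
```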
Some options you could use to get your files updated on the different DataMiner Agents/Failover backups:
- Replace the files on all of the DataMiner Agents yourself (a scripted copy along these lines is sketched after this list)
- Use the force sync option as already mentioned by Christine. You could also create an automation script to programmatically do this for a list of files.
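For the first option, a scripted copy could look roughly like the sketch below. This is purely illustrative, not an official procedure: the agent host names, the source folder, and the use of the administrative C$ share are assumptions to be adjusted to your environment.

```csharp
// Sketch: push a tested set of memory files to every agent in the cluster.
// Host names, the source folder and the C$ share are assumptions; adjust as needed.
using System;
using System.IO;

class PushFilesToAllAgents
{
    static void Main()
    {
        string[] agents    = { "DMA-PRIME", "DMA-BACKUP" };          // assumed host names
        string   sourceDir = @"D:\Release\MemoryFiles";              // e.g. a checkout of the Git release
        string   targetSub = @"C$\Skyline DataMiner\Scripts\Memory"; // same path as used elsewhere in this thread

        foreach (string agent in agents)
        {
            string targetDir = @"\\" + agent + @"\" + targetSub;

            foreach (string source in Directory.GetFiles(sourceDir, "*.xml"))
            {
                string target = Path.Combine(targetDir, Path.GetFileName(source));
                File.Copy(source, target, true); // overwrite any existing file
                Console.WriteLine("Copied " + Path.GetFileName(source) + " -> " + agent);
            }
        }
        // As noted below, DataMiner may still hold the old content in memory,
        // so restart the DMA (or force a reload) afterwards.
    }
}
```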
Also be aware that while DataMiner is running, some content, such as memory files, might be loaded into memory. If a new file is copied over, DataMiner will keep working with the in-memory version until it is restarted or has another reason to reload the file. It is therefore advisable to restart the DMA after changing files.
Have you already tried the 'Forcing synchronization of a file with the DMS' functionality in Cube? When a file has been changed on a particular DMA, it is possible to force synchronization of that file across the cluster.
- On the online DMA, copy the memory file to 'C:\Skyline DataMiner\Scripts\Memory', after which it will be available in Cube on that agent only.
- Perform the following steps to sync the file across the cluster including the offline agents:
1. In Cube, go to Apps > System Center.
2. In the System Center module, select the Tools page.
3. In the second column, select Synchronization.
4. In the drop-down list next to Type, select File.
5. In the File box, enter the path of the file in question, e.g. C:\Skyline DataMiner\Scripts\Memory\MyMemoryFile.xml.
6. Click the Sync now button at the bottom of the card.
7. In the confirmation window, click Yes.
Verify that the file has been synced by checking that it has been copied to the same path on the offline agent.
Forcing the sync of a single file does work, but following this procedure for 100+ files is very time-consuming and error-prone.
The other sync features only work if I remove the file first, sync the deletion, and then add the file again.
Thank you very much for your advice. I found no reference or documentation for doing this in an Automation script, but I did find the reference for a QAction. For testing, I wrote a virtual protocol with one button that syncs all memory files, and my test was positive: the files were properly synced.
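For reference, a rough sketch of what such a QAction could look like is below. The exact NotifyDataMiner type to use is not given in this thread (it comes from the QAction reference mentioned above), so SYNC_NOTIFY_TYPE is only a placeholder to be replaced with the documented value; the folder path and the error-logging pattern follow the usual QAction conventions.

```csharp
// Rough sketch only. SYNC_NOTIFY_TYPE is a placeholder: replace it with the
// notify type (and argument layout) documented in the QAction reference
// mentioned above; it is not reproduced in this thread.
using System;
using System.IO;
using Skyline.DataMiner.Scripting;

public static class QAction
{
    // Placeholder value; look up the real sync notify type in the reference.
    private const int SYNC_NOTIFY_TYPE = -1;

    public static void Run(SLProtocol protocol)
    {
        string memoryDir = @"C:\Skyline DataMiner\Scripts\Memory";

        foreach (string file in Directory.GetFiles(memoryDir, "*.xml"))
        {
            try
            {
                // Ask DataMiner to synchronize this file across the cluster.
                protocol.NotifyDataMiner(SYNC_NOTIFY_TYPE, file, null);
            }
            catch (Exception ex)
            {
                protocol.Log("QA" + protocol.QActionID + "|Sync failed for " + file + ": " + ex.Message,
                    LogType.Error, LogLevel.NoLogging);
            }
        }
    }
}
```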