Hello,
I am trying to migrate the MySQL database at a customer to Cassandra and Elasticsearch clusters.
The Cluster Migrator Tool runs for a very long time, and in several attempts so far it has never completed the migration before MySQL causes problems: either timeouts, or the server appears to be running but is in such an undefined state that elements in DataMiner start crashing with database errors. Alarms and trends keep working normally, but the elements that crash can no longer be started until MySQL is restarted.
Now I have several questions:
- Can the MySQL config be optimized in some way to make it more robust for the migration?
- Considering that the trend tables can easily take a week to migrate, do the other tables get another pass before the migration finishes, so there is no gap in the data there? The counters seem to no longer go up once their initial migration finishes.
- Are there any recommendations for other preparations that might help the migration along? For example, could I migrate to a local Cassandra database first and then migrate to the cluster? Would that help?
Best Regards,
Robert
Hi Robert,
Indeed, depending on the volume of data it can take a while: all data needs to be queried, transformed into the correct format, and pushed to the new databases.
There are several things, though, that can be done to optimize this process:
- Clean up irrelevant information.
If it is the trending that takes a very long time, there are probably some very large trend tables in MySQL; MySQL tends to slow down dramatically once tables grow beyond a couple of GB.
-> If you can find very big real-time trend tables, it might be an option to simply truncate them before you start the migration (see the sketch after this list).
Some other tables can also be a quick win, for example information events: how relevant are they for you and how big is the table? Do you use the alarm properties? ...
- Make sure you don't have any Windows or maintenance updates scheduled during the migration window. One of the big pitfalls of the current migration tool is that the entire DMS needs to be able to migrate before we can fully switch. As soon as a DMA restarts or the nodes become unavailable for whatever reason, part of the migration needs to be restarted manually.
- During the live migration we immediately start sending all new data to both the new databases and the old one. This is also why there are no new entries once a DataMiner is fully migrated: that agent gets the state "fully migrated", and once all DataMiners have that state you can trigger the switch to the new databases, at which point we stop sending any data to MySQL.
-> I would not recommend migrating to a local Cassandra database first; that would just be yet another maintenance window you need to follow up on, and I don't believe it would be that beneficial.
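To give a rough idea of the clean-up step above: a small script along these lines (just a sketch, assuming Python with the pymysql package; the host, credentials and schema name are placeholders for your own setup) can list the largest tables in the MySQL schema, so you can judge which real-time trend tables are worth truncating before the migration starts.

# Sketch: list the largest tables in the MySQL schema so you can decide which
# real-time trend tables are candidates for truncation before the migration.
# Host, credentials and schema name are placeholders - adjust to your setup.
import pymysql

SCHEMA = "dataminer"  # placeholder schema name

conn = pymysql.connect(host="localhost", user="root", password="***")
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT table_name,
                   ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
            FROM information_schema.TABLES
            WHERE table_schema = %s
            ORDER BY (data_length + index_length) DESC
            LIMIT 20
            """,
            (SCHEMA,),
        )
        for table_name, size_gb in cur.fetchall():
            print(f"{table_name}: {size_gb} GB")
finally:
    conn.close()

Anything near the top of that list that only contains real-time trending you can afford to lose is a candidate for truncation before you kick off the migration.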
There are several other things you should pay attention to when you have a failure:
The Cassandra cluster doesn't clean up the data when you restart/resume the process, so make sure you keep an eye on the disk space before you do.
If you simply do a full restart of all agents in the cluster, you can truncate all the tables (see the sketch below).
Eventually everything will be cleaned up by a repair (don't forget to configure Reaper at the end).
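As an illustration of that clean-up after a failed attempt (only once all agents have been fully restarted, as mentioned above): the snippet below is a sketch using the Python cassandra-driver, where the contact points and the keyspace/table names are placeholders rather than the actual DataMiner schema, showing the idea of truncating the partially migrated tables.

# Sketch: truncate partially migrated tables after a full restart of all agents.
# Contact points, keyspace and table names are placeholders - the real DataMiner
# keyspace and table names depend on your installation.
from cassandra.cluster import Cluster

cluster = Cluster(["10.0.0.1", "10.0.0.2"])  # placeholder contact points
session = cluster.connect()

for table in ("example_keyspace.exampletable1", "example_keyspace.exampletable2"):
    session.execute(f"TRUNCATE {table}")
    print(f"Truncated {table}")

cluster.shutdown()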
Since we know that for bigger systems this can be tricky, or that you may need help cleaning up, we are happy to help you further through our Integration and Operations domain, who are the experts on database maintenance (both cloud and on-prem).
You can reach out to them simply by mailing our tech support team: techsupport@skyline.be.