Hi, for the management of a DTT transmission architecture, covering only M&C requirements for 100-250 elements across headend and remote sites, could we use a single server with the specs below, catering for the installation of DM, Cassandra and Elastic? The expected functionality is element and service management along with redundancy grouping, alarming, trending, alerting, visual layouts and dashboards.
- 16-core CPU with min. 10K PassMark
- 64 GB RAM
- 2x 250 GB SSD (RAID 1) - for OS and DM
- 2x 1 TB SSD (RAID 1) - for Cassandra
- 2x 1 TB SSD (RAID 1) - for Elastic
- Windows Server OS
- 2x 1 Gbps NIC
- Dual power supply
I think the key question is whether you can open up for separate servers to run the DBs.
From the SSD specs you're listing for the drives, it sounds like you're considering running everything locally - if that's the case, I believe you'd need more RAM.
It's also worth considering Ben's suggestion: starting with a distributed solution, with dedicated Linux machines for the DBs, will give you more scalability, although it is a bit more complex to set up if all you wanted was just a single DMA server.
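In that distributed setup, DM stays on the Windows server and only needs network access to the DB nodes. If it helps, here is a minimal sketch to sanity-check that the DB nodes are reachable from the DMA before pointing DM at them - the host addresses are hypothetical examples, and the ports are simply the Cassandra and Elasticsearch defaults:

```python
import socket

# Hypothetical example hosts - replace with your own Linux DB servers.
DB_NODES = {
    "Cassandra": ("10.0.0.11", 9042),      # Cassandra native transport port
    "Elasticsearch": ("10.0.0.12", 9200),  # Elasticsearch HTTP port
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in DB_NODES.items():
    status = "reachable" if is_reachable(host, port) else "NOT reachable"
    print(f"{name} at {host}:{port} is {status} from this DMA")
```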
This link might give you further insight:
https://community.dataminer.services/dataminer-compute-requirements/
"Important: If you intend to run e.g. DataMiner, Cassandra and Elasticsearch on a single server, the hardware requirements in the diagram below need to be added up. So, when it comes to RAM, in this case you would need a minimum of 96 GB (32 GB for DataMiner, 32 GB for Cassandra and 32 GB for Elasticsearch)."
Similar considerations apply to the CPU if you have restrictions and are forced to run everything on a single machine.
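To make that add-up concrete, here is a rough tally of the RAM side only, using the 32 GB per component figure from the quote above and the 64 GB in your proposed spec (the per-component CPU figures aren't repeated here, so check the linked page for those - this is an illustration, not a sizing guarantee):

```python
# RAM figures from the DataMiner compute requirements page quoted above:
# 32 GB each for DataMiner, Cassandra and Elasticsearch when co-hosted.
RAM_GB = {"DataMiner": 32, "Cassandra": 32, "Elasticsearch": 32}

proposed_ram_gb = 64  # RAM in the single-server spec from the question

required_ram_gb = sum(RAM_GB.values())
print(f"Required on a single combined server: {required_ram_gb} GB")  # 96 GB
print(f"Proposed: {proposed_ram_gb} GB -> shortfall of {required_ram_gb - proposed_ram_gb} GB")
```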
HTH
Do you plan to run DM/Cassandra/Elastic all from the same machine (everything local to the Windows Server)? Or could you open up for separate Linux servers to run the DBs, as assumed in the first answer?