I would like to avoid having an impact on functionality (slowdowns) or on the system (specific processes).
Parameters are sometimes used to store info (e.g. configuration, a set of options...) or to pass data.
Is there a suggested size limit we should stay under for saved (database) and non-saved parameters?
At what point will SLProtocol / SLScripting start having difficulties?
Hi Mieke, I don't think there is an actual hard limit on the size and usage of (un)saved parameters. (If there is, I'm also interested to know/learn 😊)
With regard to storing parameters, I came across the following topic on Dojo:
string values on a table are truncated - DataMiner Dojo
So it seems we do have a 65,535-byte limit for storing data in a MySQL database. Depending on your encoding, that puts a fixed cap on the content that can be stored in MySQL DBs.
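So if you are still on MySQL, it may be worth checking the encoded byte size before saving. A minimal sketch, assuming UTF-8 storage (the relevant size is the byte count, not the character count):

```csharp
using System.Text;

public static class MySqlLimits
{
    // Sketch: a MySQL TEXT column tops out at 65,535 bytes, and the
    // limiting factor is the encoded byte count, not the character count.
    public static bool FitsInTextColumn(string value) =>
        Encoding.UTF8.GetByteCount(value) <= 65535;
}
```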
Since most systems are no longer using MySQL, element data will normally be stored in Cassandra, and I believe there is no real limit on data size there anymore.
One important thing to keep in mind though: there is no infinite bucket of space for you to work with.
The bigger the data, the bigger the impact on memory. The impact might be smaller on a high-spec server than on a mini-spec server, but be careful.
The memory cap for DataMiner processes running in 32-bit mode is also limited. If you have a case where you need to pass around datasets of, for example, 100+ MB or even gigabytes of content, you probably also want to review the design of your internal flow. Transferring such large chunks of data can have a huge footprint on your DMS.
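If you do need to hand over such a payload, one common way to lighten the flow is to persist the data once and pass only a small reference around. A sketch of that idea (the parameter ID, file location, and BuildLargeDataset helper are all hypothetical):

```csharp
using System.IO;
using Skyline.DataMiner.Scripting;

public class QAction
{
    /// <summary>Sketch: pass a reference to large data instead of the data itself.</summary>
    public static void Run(SLProtocol protocol)
    {
        string payload = BuildLargeDataset(); // hypothetical producer of the big blob

        // Assumed drop location; the payload is written to disk exactly once.
        string path = @"C:\Skyline DataMiner\Documents\MyConnector\payload.json";
        Directory.CreateDirectory(Path.GetDirectoryName(path));
        File.WriteAllText(path, payload);

        // Hypothetical parameter 300 now carries only the (tiny) path,
        // so no multi-megabyte value has to travel between processes.
        protocol.SetParameter(300, path);
    }

    private static string BuildLargeDataset() => "..."; // placeholder
}
```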
Reading a (large) parameter value in C# code means that the full content of that parameter, which was loaded in the SLProtocol process, is now also held in the memory of the SLScripting process.
If your connector doesn't have the best architecture/flow in place, and you duplicate large data sets multiple times or even store them on displayed parameters (where SLElement would then also load the data into its memory), the memory limits of your node could become problematic.
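A minimal sketch of how those copies stack up, using hypothetical parameter IDs (this illustrates the anti-pattern, not a recommended design):

```csharp
using System;
using Skyline.DataMiner.Scripting;

public class QAction
{
    /// <summary>Sketch: each step keeps another full copy of the data alive.</summary>
    public static void Run(SLProtocol protocol)
    {
        // Reading copies the full value from SLProtocol into SLScripting (copy 1).
        string big = Convert.ToString(protocol.GetParameter(100));

        // Any transformation materializes another copy in SLScripting (copy 2).
        string duplicate = big.Replace("\r\n", "\n");

        // Writing it back puts a copy in SLProtocol again; if parameter 200
        // is a displayed parameter, SLElement will load the data as well.
        protocol.SetParameter(200, duplicate);
    }
}
```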
But in general, I think in most cases you should be OK storing and working with even large data sets. 🙂