Hi,
I am looking at improving the stability and reliability of our DataMiner implementation. We occasionally see data loss and deadlocks during periods of high resource utilization when sending data from one element to another. The issue is intermittent, so I'm looking broadly across our solution to see what can be improved to reduce it. Could I please get some information on the following questions:
Question 1: Each element currently has a single parameter that receives all incoming data from external elements. Would we benefit from creating a dedicated I/O parameter per external element, instead of having multiple elements write to the same parameter?
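To make the current setup concrete, below is a simplified sketch of the receiving side (the parameter ID and the payload handling are placeholders, not our real code): one write parameter whose QAction handles everything the external elements push in.

```csharp
using System;
using Skyline.DataMiner.Scripting;

public static class QAction
{
    /// <summary>
    /// Triggered by the single "incoming data" write parameter (ID 100 is a placeholder).
    /// Every external element currently writes into this one parameter.
    /// </summary>
    public static void Run(SLProtocol protocol)
    {
        try
        {
            // Read whatever the latest writer pushed into the shared parameter.
            string incoming = Convert.ToString(protocol.GetParameter(100));

            // Parse and route the payload; every source funnels through this single code path,
            // which is where we suspect writes can overwrite each other under load.
            protocol.Log("QA100|Run|Received: " + incoming, LogType.DebugInfo, LogLevel.NoLogging);
        }
        catch (Exception ex)
        {
            protocol.Log("QA100|Run|Exception: " + ex, LogType.Error, LogLevel.NoLogging);
        }
    }
}
```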
Question 2: When the external elements write to that parameter, our implementation currently uses the "NotifyDataMiner" method. Would we benefit from switching to "NotifyDataMinerQueued"? The docs don't seem clear enough on the differences. Does the underlying DataMiner messaging already queue these calls even when the queued version of the method is not used? Some more context on the pros and cons of the two methods would be very helpful.
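For reference, this is roughly what the sender-side call looks like today, next to the queued variant we are considering (shown behind a flag purely for illustration). All IDs are placeholders, and I'm assuming 62 is the NT_SET_PARAMETER notify type based on my reading of the NotifyType list; please correct me if that's wrong.

```csharp
using Skyline.DataMiner.Scripting;

public static class QAction
{
    /// <summary>
    /// Sender-side QAction: pushes a payload into the receiving element's shared parameter.
    /// DMA/element/parameter IDs are placeholders; 62 is assumed to be NT_SET_PARAMETER.
    /// </summary>
    public static void Run(SLProtocol protocol)
    {
        uint dmaId = 123;        // placeholder: ID of the DMA hosting the receiving element
        uint elementId = 456;    // placeholder: receiving element ID
        uint parameterId = 100;  // placeholder: the shared "incoming data" parameter
        string payload = "example payload";

        uint[] target = new uint[] { dmaId, elementId, parameterId };

        bool useQueued = false;  // flip to trial the queued variant

        if (useQueued)
        {
            // Queued variant: as we understand it, the request is queued and the
            // QAction does not wait for the set to be processed.
            protocol.NotifyDataMinerQueued(62 /* NT_SET_PARAMETER (assumed) */, target, payload);
        }
        else
        {
            // Current implementation: synchronous notify; the QAction waits for the
            // set to be handled before continuing.
            protocol.NotifyDataMiner(62 /* NT_SET_PARAMETER (assumed) */, target, payload);
        }
    }
}
```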
Question 3: When data is queued on DataMiner parameters, is there configuration somewhere that defines the buffer depth and queue behaviour?
Links below for the methods referenced in question 2:
NotifyDataMiner(int, object, object) - Method NotifyDataMiner | DataMiner Docs
NotifyDataMinerQueued(int, object, object) - Method NotifyDataMinerQueued | DataMiner Docs
Thank you!