Here we've defined a very simple user-defined API. It receives an alarm, a field, and a value, and it places the value in that field of the alarm.
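For reference, the script is roughly the following (a simplified sketch, not the exact code; the entry-point attribute, the ApiTriggerInput properties, the JSON handling, and the SetAlarmProperty signature shown here are approximations):

```csharp
using Newtonsoft.Json.Linq;
using Skyline.DataMiner.Automation;
using Skyline.DataMiner.Net.Apps.UserDefinableApis;
using Skyline.DataMiner.Net.Apps.UserDefinableApis.Actions;

public class Script
{
    public void Run(IEngine engine)
    {
        // Not used; the API entry point below handles the trigger.
    }

    [AutomationEntryPoint(AutomationEntryPointType.Types.OnApiTrigger)]
    public ApiTriggerOutput OnApiTrigger(IEngine engine, ApiTriggerInput requestData)
    {
        // Expected request body (illustrative shape):
        // { "dmaId": 123, "alarmId": 456, "field": "SomeProperty", "value": "SomeValue" }
        var body = JObject.Parse(requestData.RawBody);

        int dmaId = (int)body["dmaId"];
        int alarmId = (int)body["alarmId"];
        string field = (string)body["field"];
        string value = (string)body["value"];

        // Write the value into the requested alarm property
        // (signature assumed: dmaId, alarmId, property name, property value).
        engine.SetAlarmProperty(dmaId, alarmId, field, value);

        return new ApiTriggerOutput
        {
            ResponseCode = (int)StatusCode.Ok,
        };
    }
}
```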
However, the volume of alarms we work with is large. While trying to keep this integration as close to real-time as possible, we started receiving many OutOfMemory exceptions. The API saw peaks of about 15k requests per 5 minutes.
How can I determine rate limits for a User-Defined API? Can I somehow limit concurrent requests? Is there any kind of action I can take? Or is the overhead of such an API simply too big?
Hi Caio,
Currently, there is no configurable rate limit available with the User-Defined APIs feature. The achievable rate is determined by the number of DataMiner Agents, the number of CPU cores they have, how many endpoints the requests are sent to, and how efficient the API scripts are. All of these factors contribute to the maximum number of triggers that can be handled within a given timeframe.
Could you share some additional details on your use case? What application is sending the requests to this user-defined API? If I understand correctly, the API script updates a DataMiner alarm using the input provided with the trigger? You also mention 'OutOfMemory' exceptions: are you referring to the input data, or are you experiencing such exceptions within DataMiner itself? The answers will help me better understand the situation and should allow me to suggest ways to improve this implementation/API.
Thanks for sharing those details. One way to reduce the number of calls and their associated overhead would be to batch these alarms into a single request and apply multiple values to alarms in one API script trigger (if that is not already the case, of course). Regarding the memory exceptions, this is indeed unexpected. Feel free to share more details on these (e.g. the content of the information events you mention) here, or in a new Dojo question, to have them checked out. Something to keep in mind is that the process running the script does have a memory limit, which could be reached if a lot of data is kept or stuck in memory.
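To illustrate the batching idea, the trigger method from the sketch above could accept an array of updates in a single request, along these lines (the JSON shape and the "updates" key are assumptions for illustration only, reusing the same usings and class as the earlier sketch):

```csharp
[AutomationEntryPoint(AutomationEntryPointType.Types.OnApiTrigger)]
public ApiTriggerOutput OnApiTrigger(IEngine engine, ApiTriggerInput requestData)
{
    // Hypothetical batched body:
    // { "updates": [ { "dmaId": 123, "alarmId": 456, "field": "...", "value": "..." }, ... ] }
    var body = JObject.Parse(requestData.RawBody);

    foreach (var update in body["updates"])
    {
        // One trigger now applies many alarm property updates.
        engine.SetAlarmProperty(
            (int)update["dmaId"],
            (int)update["alarmId"],
            (string)update["field"],
            (string)update["value"]);
    }

    return new ApiTriggerOutput
    {
        ResponseCode = (int)StatusCode.Ok,
    };
}
```

Each batch then costs a single trigger instead of one trigger per alarm, which removes most of the per-call overhead.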
In order:
The application sending those requests is custom-built by me. It uses parallelism to achieve reasonable execution times, but it is limited to only 16 workers.
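For context, the client does something along these lines (a simplified sketch, not the actual code; the endpoint URL and payload handling are placeholders):

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

public static class AlarmPusher
{
    private static readonly HttpClient Client = new HttpClient();

    // Cap concurrency at 16 in-flight requests.
    private static readonly SemaphoreSlim Workers = new SemaphoreSlim(16);

    public static async Task PushAsync(IEnumerable<string> jsonPayloads, string endpointUrl)
    {
        var tasks = jsonPayloads.Select(async payload =>
        {
            await Workers.WaitAsync();
            try
            {
                using (var content = new StringContent(payload, Encoding.UTF8, "application/json"))
                {
                    var response = await Client.PostAsync(endpointUrl, content);
                    response.EnsureSuccessStatusCode();
                }
            }
            finally
            {
                Workers.Release();
            }
        });

        await Task.WhenAll(tasks);
    }
}
```

At peak, 15k requests per 5 minutes works out to roughly 50 requests per second on average across those 16 workers.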
Yes. In full detail, everything the API does is: load the JSON module, interpret the request body as JSON, parse the fields, and then call engine.SetAlarmProperty() twice.
DataMiner itself was issuing the OutOfMemory exceptions in the information events of the agents, which is unexpected, because we have a 6-node cluster and its specs are very generous, to say the least.