Hi everyone,
DMA is currently experiencing Refusal issues, and the assumption is that this is happening due to an SNMP table (which sometimes has 70k+ rows).
What is the best approach to resolving these issues and handling large SNMP tables?
I have seen the Subtables option, which would allow filtering by instance in the table, but I want to check what the best options are.
As a workaround, this was previously handled by clearing the table and deleting test streams, but that no longer works.
Thanks!
Hi Tamara,
There are a few strategies you could look into for such large tables, depending on your requirements.
If you do not need all of those rows and are only interested in a subset of them, then the subtable option should be the ideal one, as it allows you to select which instances to poll and thus reduces the amount of data being retrieved.
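For illustration, this is roughly where that option lives in the connector XML. The parameter ID, OID, and omitted column layout below are made up, and the exact syntax for defining which instances get polled should be verified against the DataMiner protocol documentation for your version:

```xml
<!-- Hypothetical table parameter; the ID and OID are placeholders. -->
<Param id="1000">
  <Name>StreamsTable</Name>
  <Type>array</Type>
  <!-- ArrayOptions / column parameters omitted for brevity -->
  <SNMP>
    <Enabled>true</Enabled>
    <!-- 'subtable' limits the poll to the instances selected for the element,
         instead of walking all 70k+ rows of the device table. -->
    <OID type="complete" options="instance;subtable">1.3.6.1.4.1.99999.1.1</OID>
  </SNMP>
</Param>
```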
Otherwise, if you need all the data from the device, then I would advise you to look into:
- multipleGetNext and multipleGetBulk, together with some performance tests to evaluate which option better suits the polling you need to do (see the OID sketch after this list);
- partialSNMP, which prevents timeout errors by fetching only "x" rows at a time. As a side effect, it also allows sets to be executed in between the fetches, improving the user experience;
- splitting the table polling into multiple tables with different polling frequencies, which are then combined into a single table that displays the data to the user. This has a similar benefit to the partialSNMP and subtable options: instead of polling everything at once, you only poll the data that can change frequently.
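As a rough sketch of the first two items (not copied from your connector), those options are set on the table's SNMP OID. The OID, the "50" rows-per-request value, and the exact combination of options (e.g. bulk:50 vs. multipleGetBulk:50, and how partialSNMP combines with them) should be double-checked against the protocol schema documentation for your DataMiner version:

```xml
<!-- Hypothetical OID definition; the OID and the '50' value are placeholders.
     'multipleGetBulk' fetches several rows per request (compare against
     'multipleGetNext' in your performance tests), while 'partialSNMP' splits
     the walk into chunks so sets can be handled in between. -->
<OID type="complete" options="instance;multipleGetBulk:50;partialSNMP">1.3.6.1.4.1.99999.1.1</OID>
```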
Besides this, I would also advise looking into the partial option, so that DataMiner is not overloaded by having to display all of that data at once.
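A minimal sketch of that partial option, assuming it goes in the table's ArrayOptions (as opposed to the SNMP OID options above); keep whatever options and ColumnOption entries your existing table already defines:

```xml
<!-- Hypothetical: append 'partial' to the existing ArrayOptions options string
     so the client pages through the rows instead of loading everything at once. -->
<ArrayOptions index="0" options=";partial">
  <!-- existing ColumnOption entries remain unchanged -->
</ArrayOptions>
```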
Hi João,
I'll try to overcome the issue with the partial option (as partialSNMP is already implemented in the driver with 'options="instance;bulk:50"').
Thank you for your answer!