Hi,
I have a scenario where, even after optimizing the code (using Parallel.ForEach and batching), it still gets stuck and the LCA times out. I don't know how to refactor the code further.
Please find the code:
public GQIPage GetNextPage(GetNextPageInputArgs args)
{
    var rows = new ConcurrentBag<GQIRow>();
    var serviceList = GetElements();
    var serviceInfoCache = new ConcurrentDictionary<string, string>();
    var semaphore = new SemaphoreSlim(10); // Limit the degree of parallelism

    try
    {
        Parallel.ForEach(serviceList, new ParallelOptions { MaxDegreeOfParallelism = 10 }, service =>
        {
            semaphore.Wait();
            try
            {
                var childInfos = service.Children?.ToList();
                if (childInfos != null && childInfos.Count > 0)
                {
                    foreach (var batch in Batch(childInfos, 100))
                    {
                        var reqParam = new List<DMSMessage>();
                        foreach (var childInfo in batch)
                        {
                            var paramsNeeded = childInfo.Parameters
                                .Where(x => _resources.Any(y => y.ParamterId == x.ParameterID))
                                .ToList();
                            foreach (var parameter in paramsNeeded)
                            {
                                var msg = new GetParameterMessage(childInfo.DataMinerID, childInfo.ElementID,
                                    parameter.ParameterID, parameter.FilterValue, true);
                                reqParam.Add(msg);
                            }
                        }

                        try
                        {
                            _logger.Information("Send DMA request");
                            var responseMsg = _dms.SendMessages(reqParam.ToArray());
                            var response = responseMsg.OfType<GetParameterResponseMessage>().ToList();
                            var paramInfo = response.FindAll(x => !string.IsNullOrEmpty(x.Value.ToString())
                                && !x.Value.ToString().Equals("EMPTY", StringComparison.CurrentCultureIgnoreCase)).ToList();
                            if (paramInfo.Any())
                            {
                                foreach (var childInfo in batch)
                                {
                                    var cells = InitializeTableCells();
                                    CreateElementRow(ref rows, childInfo, cells, paramInfo, service.Name);
                                }
                            }
                        }
                        catch (Exception e)
                        {
                            _logger.Error($"Error fetching parameters: {e.Message}");
                        }
                    }
                }
            }
            finally
            {
                semaphore.Release();
            }
        });
    }
    catch (Exception e)
    {
        _logger.Error($"Error fetching elements: {e.Message}");
    }

    _logger.Information("ROWS Length -- " + rows.Count);
    return new GQIPage(rows.ToArray()) { HasNextPage = false };
}
private void CreateElementRow(ref ConcurrentBag<GQIRow> rows, LiteServiceChildInfo element, GQICell[] cells,
    List<GetParameterResponseMessage> paramInfo, string serviceName)
{
    var paramsNeeded = element.Parameters
        .Where(x => _resources.Any(y => y.ParamterId == x.ParameterID))
        .Select(x => x.ParameterID)
        .ToHashSet();
    var paramInfos =
        paramInfo.Where(x => paramsNeeded.Contains(x.ParameterId) && x.ElId == element.ElementID);
    var paramInfosGrouped = paramInfos.GroupBy(x => x.TableIndex.Split('/')[0]);

    foreach (var group in paramInfosGrouped)
    {
        var tableIndexPrefix = group.Key;
        var groupedParamInfos = group.ToList();
        var lastcell = groupedParamInfos.Last();
        foreach (var cell in groupedParamInfos)
        {
            BuildCells(cell, cells, ref rows);
            if (cell.ParameterId == paramsNeeded.Last() || lastcell == cell)
            {
                if (cells.All(x => string.IsNullOrEmpty(x.Value.ToString())))
                {
                    cells = InitializeTableCells();
                }
                else
                {
                    if (rows.Any())
                    {
                        var previousRow = rows.Last().Cells;
                        if (previousRow.Any(x => x.Value.Equals(cell.TableIndex)))
                        {
                            rows.TryTake(out _);
                        }
                    }

                    var response =
                        _dms.SendMessages(new GetElementByIDMessage(element.DataMinerID, element.ElementID));
                    var responseMsgs = response.OfType<ElementInfoEventMessage>().FirstOrDefault();
                    var location = responseMsgs.GetPropertyValue("Territory");
                    if (cells.Any())
                    {
                        cells[0] = new GQICell { Value = serviceName };
                        cells[1] = new GQICell { Value = element.Alias };
                        cells[2] = new GQICell
                            { Value = string.IsNullOrEmpty(location) ? "No Territory" : location };
                        rows.Add(new GQIRow(cells));
                    }

                    cells = InitializeTableCells();
                }
            }
        }
    }
}
So, say there are more than 5 services and each has 5 elements: it takes a lot of time to generate the rows.
FYI, I am testing with 20 services of 5 elements each, and it times out.
Hi Apurva,
Your ad hoc data source seems to do quite a lot of work to gather data from different parts of the system. Note that in these scenarios it might be better to aggregate this data outside of GQI, in a dedicated table that can then be queried trivially.
That said, there are some more advanced tricks you can employ to improve the performance of the ad hoc data source if you know what you are doing.
The first step is always to measure which part of the computation is slow. You could use a Stopwatch and add some logs that record how long specific steps take, to get a rough estimate.
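For example, a minimal timing sketch based on the code above (which steps you wrap, and the `_logger` calls, are just illustrations):

```csharp
// Rough timing sketch: wrap each suspect step in a Stopwatch and log the elapsed time.
var sw = System.Diagnostics.Stopwatch.StartNew();

var serviceList = GetElements(); // step 1: fetching the services
_logger.Information($"GetElements took {sw.ElapsedMilliseconds} ms");

sw.Restart();
var responseMsg = _dms.SendMessages(reqParam.ToArray()); // step 2: the bulk parameter request
_logger.Information($"SendMessages ({reqParam.Count} messages) took {sw.ElapsedMilliseconds} ms");
```

With logs like these you can quickly tell whether the time goes into the bulk parameter requests, or into the per-element GetElementByIDMessage calls inside CreateElementRow.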
There are probably expensive parts of the computation that could be cached for a while.
By using a static reference to some kind of cache object, it is possible to reuse data.
This usually involves setting up and maintaining a dedicated SLNet connection and caching data for multiple users based on their security permissions.
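As an illustration, a static cache along these lines could avoid repeating the per-element GetElementByIDMessage call in CreateElementRow on every query execution (the cache key format and the 5-minute TTL are assumptions, not a prescribed value):

```csharp
// Hypothetical sketch: cache the "Territory" property per element for a few minutes,
// so repeated query executions don't re-request it over SLNet every time.
private static readonly ConcurrentDictionary<string, (string Value, DateTime FetchedAt)> _territoryCache =
    new ConcurrentDictionary<string, (string Value, DateTime FetchedAt)>();

private string GetTerritory(int dataMinerId, int elementId)
{
    var key = $"{dataMinerId}/{elementId}";
    if (_territoryCache.TryGetValue(key, out var cached)
        && DateTime.UtcNow - cached.FetchedAt < TimeSpan.FromMinutes(5))
    {
        return cached.Value; // still fresh, no SLNet round trip needed
    }

    var response = _dms.SendMessages(new GetElementByIDMessage(dataMinerId, elementId));
    var info = response.OfType<ElementInfoEventMessage>().FirstOrDefault();
    var location = info?.GetPropertyValue("Territory");

    _territoryCache[key] = (location, DateTime.UtcNow);
    return location;
}
```

Note that a static cache is shared between all users of the data source, so if the cached data is security-sensitive you would need to key it per user as mentioned above.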


Basically, it could be any other mechanism that independently of the query framework gathers the information you require. Depending on your situation it could be a dedicated parameter table, csv file, … that is populated in the background by a protocol or script.
Then the time to execute the query doesn't depend anymore on how long it takes to gather all the individual pieces of data from across the system. It can just read a single table or file.
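For instance, if a background script were to write the aggregated rows to a CSV file, the data source itself could reduce to a simple read like this (the file path and the three-column layout are assumptions for illustration):

```csharp
// Hypothetical sketch: the expensive aggregation runs elsewhere (a protocol or
// scheduled script) and writes one line per row; GetNextPage only parses the file.
public GQIPage GetNextPage(GetNextPageInputArgs args)
{
    var rows = new List<GQIRow>();
    foreach (var line in File.ReadLines(@"C:\Skyline DataMiner\Documents\service-report.csv"))
    {
        // Expected layout per line: serviceName;alias;territory
        var fields = line.Split(';');
        var cells = fields.Select(f => new GQICell { Value = f }).ToArray();
        rows.Add(new GQIRow(cells));
    }
    return new GQIPage(rows.ToArray()) { HasNextPage = false };
}
```

The query execution time then no longer depends on how many services and elements there are to poll; it only depends on reading one pre-built file or table.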
Sorry, I didn't get this. What do you mean by aggregating this data outside of GQI? How can we do that?