
Populate a table with separate per-ID API calls when each response is limited to one ID

Rachel Andrews · 80 views · 6 days ago · Connector · driver version · HTTP · 0 Comments

Hi,

I need to fetch api/{id}/service for id = 1…8. Each call returns a JSON Service object for that single ID only, and I want my DataMiner array-parameter table to show one distinct row per ID. Currently my implementation produces only one row (or eight duplicates) instead of eight unique rows. What’s the simplest way to drive one row per API call—either in a single QAction or via an array-bound session—given that each response is limited to that one ID?

Bram Devlaminck [SLC] [DevOps Enabler] Answered question 6 days ago

1 Answer

Bram Devlaminck [SLC] [DevOps Enabler] · Posted 6 days ago · 2 Comments

Hi Rachel,

I myself am only familiar with the QAction approach, so I'll only comment in that regard.
In my opinion, the easiest way to do this after performing the API calls is with the FillArrayNoDelete function, making sure that the primary key of every entry is unique.
In your case, the id (1..8) serves this purpose.
Alternatively, you could use the FillArray function if you want entries to be removed once their primary key is no longer present in the source you fetch your data from.
Based on this primary key, DataMiner will either add a new row (if no entry with that key exists yet) or update the existing entry with that primary key.

As visible in the screenshots below: the FillArrayNoDelete method expects a 2D array, where every array in the first dimension represents a column. The screenshot of the table shows how this is formatted.

The most efficient approach in your case is to first retrieve all 8 rows and then perform one "big" FillArrayNoDelete with all the data (since you only have 8 rows).
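To make that concrete, here is a minimal sketch of such a QAction. The table PID (1000), the column layout (primary key plus one name column), and the GetService/Service helper are placeholders for illustration only; substitute whatever your protocol.xml actually defines.

```csharp
using System;
using System.Collections.Generic;
using Skyline.DataMiner.Scripting;

public static class QAction
{
    /// <summary>
    /// Builds one row per service ID and pushes everything in a single
    /// FillArrayNoDelete call. The table PID (1000) and the column layout
    /// (primary key at IDX 0, name at IDX 1) are assumptions for this sketch.
    /// </summary>
    public static void Run(SLProtocol protocol)
    {
        var keys = new List<object>();   // primary key column
        var names = new List<object>();  // data column

        for (int id = 1; id <= 8; id++)
        {
            // GetService is a hypothetical helper standing in for the parsed
            // JSON response of api/{id}/service.
            Service service = GetService(id);

            keys.Add(id.ToString());     // unique primary key per row
            names.Add(service.Name);
        }

        // Column-major 2D structure: every inner array is one column.
        protocol.FillArrayNoDelete(1000, new List<object[]>
        {
            keys.ToArray(),
            names.ToArray(),
        });
    }

    private static Service GetService(int id)
    {
        // Placeholder for the deserialized JSON Service object.
        return new Service { Name = "Service " + id };
    }

    private class Service
    {
        public string Name { get; set; }
    }
}
```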

Bonus tip: I prefer using the "SLProtocolExt" type (see my screenshot below; pay attention to the type specified for the "protocol" argument of the "Run" method).
This has the advantage that the table name is made available directly as a property of the "protocol" object. In my case, the name defined for my table in the protocol.xml is "MyCustomTable", which is translated to "protocol.mycustomtable" in the QAction.
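Roughly, that variant looks like the snippet below, assuming DIS has generated the SLProtocolExt and Parameter classes for a table named "MyCustomTable" (the column building is the same as in the sketch above):

```csharp
// Same filling logic, but typed against SLProtocolExt: the table PID comes
// from the DIS-generated Parameter class instead of a hard-coded number.
// "Mycustomtable" assumes a table named "MyCustomTable" in protocol.xml.
public static void Run(SLProtocolExt protocol)
{
    // Hypothetical helper that builds the column arrays as shown above.
    List<object[]> columns = BuildColumns();

    protocol.FillArrayNoDelete(Parameter.Mycustomtable.tablePid, columns);
}
```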


Hope this helps!

Kind regards,

José Silva [SLC] [DevOps Catalyst] commented 6 days ago

Hi all,

To complete this implementation, it would be beneficial to include a mechanism that ensures the table does not grow indefinitely over time.

Ideally, the best approach would be to retrieve all data and use a standard FillArray operation, which automatically handles cleanup by replacing the entire table content.

If that’s not feasible and you're using FillArrayNoDelete, I recommend adding a "Last Received" timestamp column to the table. This would allow you to identify and remove outdated rows after a defined period, helping to keep the table size under control.
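A minimal sketch of such a cleanup follows; the table PID (1000), the 1-based column position of "Last Received" (3), and storing the timestamp as an OADate double are all assumptions for illustration:

```csharp
// Removes rows whose "Last Received" timestamp is older than maxAge.
// Table PID, column position and OADate storage format are assumptions.
private static void RemoveStaleRows(SLProtocol protocol, TimeSpan maxAge)
{
    foreach (string key in protocol.GetKeys(1000))
    {
        // Table PID, row key, 1-based column position of "Last Received".
        object raw = protocol.GetParameterIndexByKey(1000, key, 3);
        if (raw == null)
        {
            continue;
        }

        DateTime lastReceived = DateTime.FromOADate(Convert.ToDouble(raw));
        if (DateTime.Now - lastReceived > maxAge)
        {
            protocol.DeleteRow(1000, key);
        }
    }
}
```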

Kind regards,

José Silva [SLC] [DevOps Catalyst] commented 6 days ago

Regarding the tip of using SLProtocolExt, I'd recommend reading this:

https://community.dataminer.services/question/what-is-the-performance-impact-of-using-slprotocolext/?hilite=SLProtocolExt
