Supply Chain & Retail Solutions API guide
The Data Ingestion API is a web-based REST API used to onboard and validate data into UiPath's Supply Chain & Retail Pricing Solution. It provides a clear, standardised way for external partners and Peak data engineers to submit solution-related data.
The API defines versioned schemas and validation rules, documented in this guide, which serve as the starting point for onboarding. By validating data at ingestion time, the API ensures that all submitted data is consistent, complete, and immediately usable by the Pricing Solution.
The API manages the full lifecycle of data onboarding — from rolling out a solution's tables to your warehouse, through ingesting and validating data, to monitoring data quality after the fact. When validation issues occur, the API returns actionable row-level feedback that enables quick correction and resubmission, and a tenant-wide Data Quality Dashboard surfaces aggregate ingestion outcomes and failed rows.
For historical data onboarding, the API accepts up to 500 rows per payload at submission rates of up to 50 requests per second, allowing large historical datasets to be uploaded in a controlled, scalable manner.
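Given those limits, a client typically splits a historical dataset into payloads of at most 500 rows and throttles submission to stay under 50 requests per second. The sketch below illustrates one way to do this in Python; the `send` callable stands in for whatever HTTP client actually posts a payload to the API, and is an assumption, not part of the documented interface.

```python
import time
from typing import Callable, Dict, Iterator, List

MAX_ROWS_PER_PAYLOAD = 500    # documented payload limit
MAX_REQUESTS_PER_SECOND = 50  # documented submission rate limit


def chunk_rows(rows: List[Dict], size: int = MAX_ROWS_PER_PAYLOAD) -> Iterator[List[Dict]]:
    """Split a historical dataset into payloads of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]


def submit_all(rows: List[Dict],
               send: Callable[[List[Dict]], None],
               rps: int = MAX_REQUESTS_PER_SECOND) -> None:
    """Send each chunk via `send(payload)`, pausing between requests so
    the overall rate stays at or below `rps` requests per second."""
    min_interval = 1.0 / rps
    for payload in chunk_rows(rows):
        started = time.monotonic()
        send(payload)  # hypothetical HTTP POST of one payload
        elapsed = time.monotonic() - started
        if elapsed < min_interval:
            time.sleep(min_interval - elapsed)
```

For example, a 1,201-row dataset would be submitted as three payloads of 500, 500, and 201 rows.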
For ongoing updates, the API supports reliable daily or incremental data delivery with row-level validation feedback in every response. Successful rows are accepted for asynchronous ingestion and become available to the Pricing Solution within a few seconds.
Main capabilities
- Schema lifecycle – save a solution's standard schema, roll out its tables to your warehouse, upgrade to a new version of the schema, or add a new column to an existing table
- Data ingestion – receives datasets from external sources and ingests them into the warehouse according to schema and validation rules
- Schema-driven validation – enforces data types, required fields, uniqueness, and custom business rules on each row before any data lands in the warehouse
- Scheduled ingestion – batches incoming rows per table and flushes them on a configurable cron schedule, instead of writing each request directly to the warehouse
- Error handling – returns detailed row-level validation errors and batch status for easy troubleshooting
- Data quality monitoring – inspect aggregate ingestion outcomes, top error codes, and the exact failed rows through the Data Quality Dashboard available in your Peak tenant
- Data management – supports updating and deleting records based on primary keys
- Integration-ready – exposes a standard HTTP interface with JSON payloads and token-based authentication for third-party applications
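To make the JSON-over-HTTP and row-level-feedback capabilities above concrete, the sketch below builds a token-authenticated request and splits a validation response into accepted and failed rows. The endpoint path (`/ingest/{table}`), header names, and response shape (`errors` entries with `row`, `code`, and `message` fields) are illustrative assumptions, not the documented contract — consult the API reference for the actual schemas.

```python
import json
from urllib import request


def build_request(base_url: str, table: str, token: str, rows: list) -> request.Request:
    """Construct a JSON POST with bearer-token authentication.
    The URL layout and body shape here are hypothetical."""
    body = json.dumps({"table": table, "rows": rows}).encode("utf-8")
    return request.Request(
        url=f"{base_url}/ingest/{table}",
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def split_feedback(response: dict, rows: list):
    """Separate rows accepted for asynchronous ingestion from rows that
    failed validation, so failures can be corrected and resubmitted.
    Assumes a response like {"errors": [{"row": 1, "code": ..., "message": ...}]}."""
    failed_indexes = {e["row"] for e in response.get("errors", [])}
    accepted = [r for i, r in enumerate(rows) if i not in failed_indexes]
    failed = [(rows[e["row"]], e["code"], e["message"])
              for e in response.get("errors", [])]
    return accepted, failed
```

In this pattern, accepted rows need no further action, while each failed row carries its error code and message so the caller can fix the offending fields and resubmit only those rows.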
This guide is intended primarily for technical end users familiar with REST APIs. For additional support, submit a support ticket.