Supply Chain & Retail Solutions API guide
Definitions
These descriptions are designed to support correct usage of the API. For further assistance or questions, please submit a support ticket.
| Name | Description | Required |
|---|---|---|
| solutionName | A unique identifier for the rollout or solution. Example: B2C_OOTB. | Yes |
| prefix | A unique prefix applied to all generated object names to support naming conventions. | Yes |
| suffix | A unique suffix appended to all generated object names for differentiation. | Yes |
| appName | The application for which the objects are being deployed. | Yes |
| appVersion | The semantic version of the application's standard schema being saved, rolled out, or upgraded. | Yes |
| objectName | The base object name. Exact object name matching is currently enforced during schema validation. | Yes |
| targetSchemaName | The schema within the data warehouse where the tables and data will be stored. Example: STAGE. | Yes |
| operationType | The type of ingest operation the API should perform. Supported values: UPSERT (insert or update based on primary key match) or APPEND (insert only; existing records are never modified). | Yes |
| dryRun | When true, the request is fully validated and the response is returned, but no rows are written to the warehouse. Use it to test a payload without persisting anything. Defaults to false. | No |
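Taken together, the parameters above can be assembled into a request payload along the following lines. This is an illustrative sketch only: the exact envelope shape is not documented here, and the sample values are hypothetical.

```python
# Hypothetical ingest parameters built from the definitions table above.
# Field names come from the table; the values are examples, not real data.
payload = {
    "solutionName": "B2C_OOTB",       # unique identifier for the rollout
    "prefix": "b2c",                  # prepended to generated object names
    "suffix": "v2",                   # appended to generated object names
    "appName": "demand_forecasting",  # application being deployed (assumed name)
    "appVersion": "1.0.0",            # semantic version of the standard schema
    "objectName": "product",          # base object name (exact match enforced)
    "targetSchemaName": "STAGE",      # warehouse schema for tables and data
    "operationType": "UPSERT",        # UPSERT or APPEND
    "dryRun": False,                  # when True: validate only, write nothing
}

# All fields except dryRun are marked Required in the table above.
REQUIRED = {
    "solutionName", "prefix", "suffix", "appName", "appVersion",
    "objectName", "targetSchemaName", "operationType",
}

missing = REQUIRED - payload.keys()
assert not missing, f"missing required fields: {missing}"
```

A client-side check like the `missing` computation above catches absent required fields before the API returns a DI_E_23N01 validation error.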
API host and reference
All v2 endpoints share a single host. Use this base URL for every request in this guide:
- Production: https://ingestion.peak.ai
An interactive OpenAPI / Swagger reference is served at the same host:
- Production: https://ingestion.peak.ai/api-docs/
The Swagger page lists every endpoint, request schema, and response schema and lets you try requests inline. Use it as the source of truth when generating client code or validating a payload structure. For non-production environments (beta or sandbox), submit a support ticket to request the corresponding URLs.
Operation types
The API supports two operation types for data ingestion:
UPSERT
- Behavior: Insert new records or update existing records based on primary key match
- Use case: Maintaining up-to-date records where data may change over time
- Example: Updating product information, customer details, or pricing data
APPEND
- Behavior: Insert new records only, does not update existing records
- Use case: Appending new data without modifying historical records
- Example: Transaction logs, event data, or time-series data where records should not be modified
The operationType you submit is recorded as "upsert" or "append" on each failed row in the <table_name>_failed_rows companion table — see Audit columns added at rollout for the full list of columns the API populates automatically.
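The difference between the two operation types can be sketched against an in-memory table keyed by primary key. This is illustrative only, not the service implementation; in particular, whether APPEND skips or rejects a row whose key already exists in the warehouse is not specified above, so this sketch assumes skip-on-conflict.

```python
def upsert(table: dict, rows: list[dict], pk: str) -> None:
    """Insert new rows, or overwrite existing rows with the same primary key."""
    for row in rows:
        table[row[pk]] = row

def append(table: dict, rows: list[dict], pk: str) -> None:
    """Insert new rows only; rows whose primary key already exists are skipped
    (an assumption for illustration -- the service may handle this differently)."""
    for row in rows:
        table.setdefault(row[pk], row)

products: dict = {}
upsert(products, [{"sku": "A1", "price": 10}], pk="sku")
upsert(products, [{"sku": "A1", "price": 12}], pk="sku")  # A1 updated in place
append(products, [{"sku": "A1", "price": 99}], pk="sku")  # A1 exists: unchanged
append(products, [{"sku": "B2", "price": 7}], pk="sku")   # B2 inserted
```

After these calls, A1 carries the upserted price of 12 (the append did not touch it) and B2 exists alongside it, mirroring the behavior table above.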
Validation behavior
Every row is validated inline before the API responds, regardless of which mode the tenant is in. Failed rows are returned in the response's failed array. The mode determines what happens to the successful rows after validation.
Asynchronous (default)
Successful rows are accepted and written to the warehouse asynchronously — typically within a few seconds. A 200 OK response means the rows passed validation and were accepted for ingestion, not that they're queryable yet.
Failed rows are returned in the response and also recorded to the matching <table_name>_failed_rows tables that were created during rollout, where they are visible in the Data Quality Dashboard for triage.
This is the recommended mode for everyday ingestion.
Synchronous (on request)
A synchronous fallback is available for use cases where you need successful rows to be queryable in the warehouse the moment a request returns 200 OK — for example, when bulk-loading a large historical dataset and you want each batch confirmed before sending the next, or when an interactive caller needs to act on landed data right away.
In sync mode, successful rows are written to the warehouse before the response returns; failed rows are returned in the response only (they are not recorded to the <table_name>_failed_rows tables).
To enable sync mode for your tenant, submit a support ticket.
Scheduled ingestion
When scheduled ingestion is enabled for a table, individual ingest requests do not write to the warehouse immediately — they are batched and flushed at the configured cron time. See Scheduled ingestion for details on enablement, cron format, and the observable behavior between scheduled runs.
API limits
The API accepts up to 500 rows per request. When ingesting larger datasets, split your data into appropriately sized batches. The rate limit is 50 requests per second.
The same 500-row maximum applies to deletion: each delete request can target between 1 and 500 primary key value sets.
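A simple way to respect the 500-row limit is to chunk the dataset before sending. The sketch below shows the batching logic only; sending each chunk (and throttling to 50 requests per second) is left to the caller.

```python
from typing import Iterator

MAX_ROWS_PER_REQUEST = 500  # documented per-request limit (ingest and delete)
MAX_REQUESTS_PER_SEC = 50   # documented rate limit

def batches(rows: list, size: int = MAX_ROWS_PER_REQUEST) -> Iterator[list]:
    """Yield consecutive chunks of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

# 1203 rows split into request-sized chunks of 500 + 500 + 203.
chunks = list(batches(list(range(1203))))
```

Each chunk then becomes one ingest request; a client staying under the rate limit can send up to 50 such chunks per second.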
Response status codes
The ingest endpoint uses three status codes to communicate the outcome of a batch:
| Status | Meaning |
|---|---|
| 200 OK | Every row in the request passed validation and was accepted. |
| 207 Multi-Status | Some rows passed and some failed. Successful rows are accepted; failed rows are returned in the response's failed array with structured error details. |
| 400 Bad Request | Every row in the request failed validation, or the request payload itself is malformed. |
Other endpoints return standard HTTP semantics: 201 Created for successful schema saves and rollouts, 404 Not Found when a referenced solution or schema does not exist, 409 Conflict when saving a duplicate appName + appVersion, and 500 Internal Server Error for unexpected failures.
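A caller can branch on the three ingest status codes as sketched below. The `failed` array is described above; any field names inside each failure entry are assumptions, so this sketch only counts entries.

```python
def summarize_batch(status: int, body: dict) -> str:
    """Map the ingest endpoint's documented status codes to a triage summary.

    Assumes the response body carries failed rows under a `failed` key,
    per the status-code table above.
    """
    failed = body.get("failed", [])
    if status == 200:
        return "all rows accepted"
    if status == 207:
        return f"partial: {len(failed)} row(s) failed, rest accepted"
    if status == 400:
        return "rejected: all rows failed or payload malformed"
    return f"unexpected status {status}"
```

For example, a 207 response with two entries in `failed` summarizes as "partial: 2 row(s) failed, rest accepted", signaling that those two rows need correction and resubmission while the rest are already in flight.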
Error codes
Each validation failure returns a structured error response containing an error code, category, and message. Codes follow the pattern DI_E_XXXXX (errors) or DI_W_XXXXX (warnings, non-fatal).
Resolution guidance by category
| Category | What It Means | Consumer Action |
|---|---|---|
| BUSINESS_VALIDATION | Data value violates a schema-defined business rule | Fix the data value (correct format, valid enum, within range, etc.) |
| SYSTEM_DATA | Data integrity issue (missing PK, duplicate, null PK, type mismatch) | Fix the data (provide PK, remove duplicates, fill required keys, send the right JSON type) |
| SYSTEM_SCHEMA | Data structure does not match the schema definition | Fix the schema/data alignment (column not in schema, PK column missing, precision/scale mismatch) |
Business-level validation errors — Category: BUSINESS_VALIDATION
These come from schema-defined validation rules configured per attribute (required, range, enum, length, date/timestamp format).
| # | Error Code | When Triggered | Standard Message |
|---|---|---|---|
| 1 | DI_E_23N01 | required validator: field absent from payload | Required field is absent |
| 2 | DI_E_23502 | nonNull validator: value is null | Value cannot be null |
| 3 | DI_E_23E01 | nonNull validator: string is blank/empty | String value cannot be empty |
| 4 | DI_E_22026 | minLength validator: string too short | String length is below minimum |
| 5 | DI_E_22001 | maxLength validator: string too long | String length exceeds maximum |
| 6 | DI_E_22003 | range validator: value above max or below min | Numeric value out of allowed range |
| 7 | DI_E_22P02 | range validator: value not numeric | Value is not a valid number |
| 8 | DI_E_22023 | enum validator: value not in allowed set | Value is not one of the allowed enum values |
| 9 | DI_E_22007 | dateTimeFormat validator: unparseable date | Invalid date format |
| 10 | DI_E_22008 | timestampFormat validator: unparseable timestamp or negative epoch | Invalid timestamp format or epoch |
System-level data validation errors — Category: SYSTEM_DATA
These come from data integrity checks (primary key null, duplicate, type mismatch) and unique key checks.
| # | Error Code | When Triggered | Standard Message |
|---|---|---|---|
| 11 | DI_E_23P01 | PK attribute value is null/empty in a row | Primary key value cannot be null/empty |
| 12 | DI_E_23505 | 2+ rows in same batch share a PK value | Duplicate primary key |
| 13 | DI_E_23U01 | 2+ rows in same batch share a unique-key value | Duplicate unique key |
| 14 | DI_E_22I01 | Value not parseable as integer | Value is not a valid integer |
| 15 | DI_E_22N01 | Value not parseable as number/float | Value is not a valid number |
| 16 | DI_E_22B01 | Value not parseable as boolean | Value is not a valid boolean |
| 17 | DI_E_22S02 | Value not a valid string | Value is not a valid string |
| 18 | DI_E_22T01 | Timestamp field is empty string | Timestamp value is empty |
| 19 | DI_E_22P03 | Empty/null value during precision/scale check | Precision/scale field is empty or null |
| 20 | DI_E_22P04 | Precision/scale validation error | Precision/scale validation error |
System-level schema validation errors — Category: SYSTEM_SCHEMA
These come from data type mismatches and schema structural checks.
| # | Error Code | When Triggered | Standard Message |
|---|---|---|---|
| 21 | DI_E_42703 | Attribute in data row not found in schema | Column not found in schema |
| 22 | DI_E_23P02 | PK attribute entirely absent from a row | Primary key column is missing |
| 23 | DI_E_22T02 | Timestamp field is wrong JSON type | Timestamp value has wrong type |
| 24 | DI_E_22P01 | Total digits exceed attribute precision | Numeric value exceeds allowed precision |
| 25 | DI_E_22S01 | Decimal digits exceed attribute scale | Numeric value exceeds allowed scale |
Quick reference — all codes sorted
| Code | Category | Short Description |
|---|---|---|
| DI_E_22001 | BUSINESS_VALIDATION | String too long (max length) |
| DI_E_22003 | BUSINESS_VALIDATION | Numeric out of range (min/max) |
| DI_E_22007 | BUSINESS_VALIDATION | Invalid date format |
| DI_E_22008 | BUSINESS_VALIDATION | Invalid timestamp format / invalid epoch |
| DI_E_22023 | BUSINESS_VALIDATION | Invalid enum value |
| DI_E_22026 | BUSINESS_VALIDATION | String too short (min length) |
| DI_E_22B01 | SYSTEM_DATA | Invalid boolean type |
| DI_E_22I01 | SYSTEM_DATA | Invalid integer type |
| DI_E_22N01 | SYSTEM_DATA | Invalid number type |
| DI_E_22P01 | SYSTEM_SCHEMA | Numeric precision exceeded |
| DI_E_22P02 | BUSINESS_VALIDATION | Value is not a valid number |
| DI_E_22P03 | SYSTEM_DATA | Empty value for precision/scale check |
| DI_E_22P04 | SYSTEM_DATA | Precision/scale validation error |
| DI_E_22S01 | SYSTEM_SCHEMA | Numeric scale exceeded |
| DI_E_22S02 | SYSTEM_DATA | Invalid string type |
| DI_E_22T01 | SYSTEM_DATA | Timestamp empty string |
| DI_E_22T02 | SYSTEM_SCHEMA | Timestamp wrong JSON type |
| DI_E_23502 | BUSINESS_VALIDATION | Not-null violation |
| DI_E_23505 | SYSTEM_DATA | Duplicate primary key in batch |
| DI_E_23E01 | BUSINESS_VALIDATION | Not-empty violation |
| DI_E_23N01 | BUSINESS_VALIDATION | Required field missing |
| DI_E_23P01 | SYSTEM_DATA | Primary key value is null/empty |
| DI_E_23P02 | SYSTEM_SCHEMA | Primary key column missing from row |
| DI_E_23U01 | SYSTEM_DATA | Duplicate unique key in batch |
| DI_E_42703 | SYSTEM_SCHEMA | Column not found in schema |
Categorization logic
Use the error code prefix to quickly identify the class of error:
- DI_E_22XXX → Data exception (type/format/range/precision)
- DI_E_23XXX → Integrity constraint (null/unique/pk/required)
- DI_E_42XXX → Schema mismatch (undefined columns)
- DI_W_XXXXX → Warning (non-fatal, future use)
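The prefix conventions above translate directly into a small classifier, useful for routing failures in client-side logging or alerting. This is a sketch of the documented prefix rules, not an official client helper.

```python
def classify(code: str) -> str:
    """Classify a DI_* code by the documented prefix conventions above."""
    if code.startswith("DI_W_"):
        return "warning (non-fatal)"
    if code.startswith("DI_E_22"):
        return "data exception (type/format/range/precision)"
    if code.startswith("DI_E_23"):
        return "integrity constraint (null/unique/pk/required)"
    if code.startswith("DI_E_42"):
        return "schema mismatch (undefined columns)"
    return "unknown code"
```

For instance, `classify("DI_E_23505")` (duplicate primary key) falls in the integrity-constraint class, while `classify("DI_E_22007")` (invalid date format) is a data exception. Note the prefix gives the broad class only; the category column in the tables above (BUSINESS_VALIDATION vs SYSTEM_*) must still be read per code.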