Scheduled ingestion
By default, every ingest request writes to the warehouse immediately. Scheduled ingestion changes that for one or more tables: incoming rows are batched as they arrive and flushed to the warehouse on a per-table cron schedule. Use it when you want to deliver data continuously through the API but only commit it on a fixed cadence — for example, once a day at 02:00 UTC.
This page describes the observable behavior. Enablement is managed by Peak; see Requesting enablement below.
When to use scheduled ingestion
Scheduled ingestion is a good fit when:
- You want to decouple the rate at which you push data from the rate at which it lands in the warehouse — for example, when you ingest steady streams from many sources but downstream consumers only need a daily snapshot.
- You need to smooth ingestion cost by writing a single large batch per cron tick instead of many small writes.
- You're comfortable with rows being buffered for up to one cron interval before they become visible in the warehouse.
If you need rows to be queryable immediately after the API returns, do not enable scheduled ingestion for that table. Use the default direct-write mode.
How it changes your API calls
The endpoint, headers, and request body are unchanged from the standard ingestion flow. What changes is the lifecycle of accepted rows after the API responds:
- Without a schedule: successful rows are accepted into the asynchronous ingestion queue and written to the warehouse within a few seconds. See Validation behavior in the API Guide.
- With a schedule: successful rows are accepted into a per-table buffer and held there until the next cron tick, when they are flushed to the warehouse in a single batch.
Validation feedback (any failed rows in the response's failed array) is returned in the same way in both cases. Failed rows also surface in the matching <table_name>_failed_rows tables and the Data Quality Dashboard, just as in the unscheduled flow; for scheduled tables, those entries appear shortly after the scheduled run rather than shortly after each ingest request.
A 200 OK response on a scheduled table means the rows were accepted and buffered, not that they have been written to the warehouse.
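The call itself is identical in both modes; only the meaning of a success response changes. Below is a minimal sketch of an ingest call against a scheduled table. The endpoint path, header names, and payload shape are illustrative assumptions only; take the real ones from the standard ingestion flow in the API Guide.

```python
import requests

# Illustrative values only: endpoint path, header, and payload shape are
# placeholders, not the documented API surface.
API_URL = "https://api.example.peak.ai/ingest/v1/tables/sales_orders/rows"
API_KEY = "YOUR_API_KEY"

payload = {"rows": [{"order_id": "SO-1001", "quantity": 3}]}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
resp.raise_for_status()

body = resp.json()
# On a scheduled table, 200 OK means "accepted and buffered", not "written".
# Accepted rows land in the warehouse at the next cron tick.
for failure in body.get("failed", []):
    print("rejected before buffering:", failure)
```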
Cron format
Schedules are configured per table using a standard cron expression. Both Quartz (six fields, including seconds) and Unix (five fields) cron formats are supported. Some examples:
| Expression | Format | Meaning |
|---|---|---|
| `0 0 2 * * ?` | Quartz | Every day at 02:00:00 |
| `0 0 * * * ?` | Quartz | At the top of every hour |
| `0 */15 * * * ?` | Quartz | Every 15 minutes |
| `0 2 * * *` | Unix | Every day at 02:00 |
Times are interpreted in UTC.
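To sanity-check a Unix-format expression before requesting it, you can preview fire times locally. A sketch using the third-party croniter package (pip install croniter), which is not part of the Peak API; note that croniter does not parse the Quartz ? wildcard, so test the five-field form:

```python
from datetime import datetime, timezone

# Third-party helper, independent of the ingestion API.
from croniter import croniter

expr = "0 2 * * *"  # every day at 02:00 (schedules run in UTC)

itr = croniter(expr, datetime.now(timezone.utc))
print("next three runs:")
for _ in range(3):
    print(" ", itr.get_next(datetime).isoformat())
```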
Behavior between scheduled runs
When a table is configured for scheduled ingestion, the following happens automatically:
- Between runs, the connector that writes rows from the buffer to the warehouse is idle. No data is moved.
- At the configured cron time, the platform activates the connector. Buffered rows for that table are validated (see Validation behavior) and written to the warehouse in a single batch.
- After the batch completes, the connector returns to idle until the next cron tick.
This idle/active cycle is managed by the platform; there is no API for you to start, stop, or step it.
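As a mental model only (the platform does not expose this machinery), the cycle amounts to a per-table buffer that accumulates accepted rows and is drained in a single batch per tick:

```python
from datetime import datetime, timezone

# A conceptual sketch of the lifecycle, not the platform's implementation.
class ScheduledTable:
    def __init__(self, name: str):
        self.name = name
        self.buffer: list[dict] = []  # rows accepted between cron ticks

    def accept(self, rows: list[dict]) -> None:
        # What a 200 OK corresponds to: rows buffered, not yet written.
        self.buffer.extend(rows)

    def on_cron_tick(self) -> None:
        # Connector wakes, flushes everything in one batch, goes idle again.
        batch, self.buffer = self.buffer, []
        now = datetime.now(timezone.utc)
        print(f"{now:%H:%M} flushing {len(batch)} rows from {self.name}")

table = ScheduledTable("sales_orders")
table.accept([{"order_id": "SO-1001"}])
table.accept([{"order_id": "SO-1002"}])
table.on_cron_tick()  # both rows land in a single batch
```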
Inspecting outcomes
Scheduled runs surface outcomes through the same channels as direct ingestion:
- The HTTP response on each ingest call confirms whether the rows were accepted into the buffer and reports any validation failures detected at request time.
- The Data Quality Dashboard in your Peak tenant shows aggregate outcomes for each scheduled run — record counts, pass rate, top error codes, and the exact failed rows. Use the time-range filter to inspect a specific run.
- Failed rows from asynchronous validation appear in the matching <table_name>_failed_rows tables, just as in the default flow.
If a scheduled run is missed (for example, because the platform was unavailable at the cron time), buffered rows are not lost — they are flushed at the next successful run.
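Because failed rows land in ordinary warehouse tables, you can also query them directly for a specific run. A sketch assuming a Postgres-compatible warehouse and a hypothetical failed_at timestamp column; check your warehouse schema for the actual driver and column names:

```python
import os

import psycopg2  # assumption: Postgres-compatible warehouse; swap in your driver

# Hypothetical names: failed-rows tables follow the <table_name>_failed_rows
# pattern, but the timestamp column here is an assumption.
QUERY = """
    SELECT *
    FROM sales_orders_failed_rows
    WHERE failed_at >= %(run_start)s AND failed_at < %(run_end)s
    ORDER BY failed_at
"""

conn = psycopg2.connect(os.environ["WAREHOUSE_DSN"])
with conn, conn.cursor() as cur:
    # Window just after the 02:00 UTC run, when scheduled-run failures surface.
    cur.execute(QUERY, {"run_start": "2024-05-01 02:00:00",
                        "run_end": "2024-05-01 03:00:00"})
    for row in cur.fetchall():
        print(row)
conn.close()
```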
Requesting enablement
Scheduled ingestion is opt-in per tenant and per table. To enable it:
- Submit a support ticket listing the solution name, the tables to be put on a schedule, and the desired cron expression for each.
- Confirm the cadence with your Peak contact; very high-frequency (sub-minute) schedules are rejected.

Once enabled, your existing API integration continues to work without code changes; only the lifecycle of accepted rows changes, as described above.
To disable scheduled ingestion for a table, submit a follow-up ticket. Buffered rows are flushed once before the schedule is removed.