Enrich up to 100 entities in parallel with real-time progress tracking, automatic multi-model fusion, and export to JSON or Excel.
Batch enrichment supports two ways to provide entity data:
Paste or type a JSON array of entities directly. The editor provides syntax highlighting, validation markers, and persists your data across sessions in local storage.
```json
[
  { "name": "Sanofi", "country": "France" },
  { "name": "Pfizer", "country": "USA" },
  { "name": "Novartis", "country": "CH" }
]
```

Fetch entities from any REST API endpoint. The system automatically extracts arrays from common response wrappers.
Supported authentication:
If the API returns an object, the system checks common wrapper keys such as `data`, `results`, and `items` for an embedded array.
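The wrapper-extraction behavior can be sketched as follows; the helper name `extractEntityArray` and the exact key order are illustrative, not the actual implementation:

```typescript
type Entity = Record<string, unknown>;

// Accept either a bare array or an object wrapping the array under a common key.
function extractEntityArray(payload: unknown): Entity[] {
  if (Array.isArray(payload)) return payload as Entity[];
  if (payload && typeof payload === "object") {
    // Probe common wrapper keys until an embedded array is found.
    for (const key of ["data", "results", "items"]) {
      const value = (payload as Record<string, unknown>)[key];
      if (Array.isArray(value)) return value as Entity[];
    }
  }
  throw new Error("Response does not contain an entity array");
}
```

For example, both `[{ "name": "Sanofi" }]` and `{ "data": [{ "name": "Sanofi" }] }` yield the same entity list.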
After loading entities, they appear in a selectable list with validation status. You can choose which entities to include in the batch:
The sidebar mirrors the single enrichment configuration options:
| Option | Description |
|---|---|
| Schema | Target schema that defines the enrichment output structure |
| Strategy | Single pass, expert domains, or multi-expertise (parallel calls per domain) |
| Models | One or more AI models to run per entity. Multiple models enable automatic fusion. |
| Languages | Languages for multilingual field enrichment (e.g., English + French) |
| Classification | Optional fast model for entity type verification before enrichment |
| Arbitration | Model for LLM-based conflict resolution during fusion. If unset, rule-based merge is used. |
Before starting a batch, a confirmation dialog shows a cost estimate and summary. The estimate is calculated based on property count, model pricing, and the number of entities and models selected. A warning appears when the total LLM call count exceeds 100.
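The estimate logic can be sketched roughly as below; the flat per-call price and the linear scaling with property count are assumptions for illustration, not the actual pricing model:

```typescript
interface EstimateInput {
  entityCount: number;
  modelCount: number;
  propertyCount: number;
  costPerCallUSD: number; // assumed flat per-call price for this sketch
}

// One LLM call per (entity, model) pair; cost grows with schema size.
function estimateBatch({ entityCount, modelCount, propertyCount, costPerCallUSD }: EstimateInput) {
  const totalCalls = entityCount * modelCount;
  // Assumption: output tokens scale linearly with the number of properties.
  const estimatedCostUSD = totalCalls * costPerCallUSD * (1 + propertyCount / 50);
  return { totalCalls, estimatedCostUSD, warn: totalCalls > 100 };
}
```

With 60 entities and 2 models, the call count is 120, which trips the 100-call warning.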
All selected entities are processed in parallel, subject to rate limits. Each entity goes through the full enrichment pipeline independently:
A global rate limiter prevents overwhelming AI providers. All entities share the same per-provider concurrency limits (typically 5 concurrent calls per provider). With 20 entities and 2 models, up to 5 calls run simultaneously per provider — the rest wait for availability. This ensures reliable execution without hitting API rate limits.
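A per-provider limiter of this kind is commonly built on a small semaphore. The sketch below assumes a limit of 5 concurrent calls per provider, matching the typical value above; class and function names are illustrative:

```typescript
// Minimal async semaphore: at most `limit` holders at a time.
class Semaphore {
  private queue: (() => void)[] = [];
  private active = 0;
  constructor(private readonly limit: number) {}

  async acquire(): Promise<void> {
    if (this.active < this.limit) { this.active++; return; }
    // Over the limit: wait until a slot is handed over by release().
    await new Promise<void>((resolve) => this.queue.push(resolve));
  }

  release(): void {
    const next = this.queue.shift();
    if (next) next();   // hand the slot directly to the next waiter
    else this.active--; // no waiters: free the slot
  }
}

// One shared semaphore per provider, so all entities compete for the same slots.
const limiters = new Map<string, Semaphore>();
function limiterFor(provider: string, limit = 5): Semaphore {
  let sem = limiters.get(provider);
  if (!sem) { sem = new Semaphore(limit); limiters.set(provider, sem); }
  return sem;
}

async function withProviderLimit<T>(provider: string, call: () => Promise<T>): Promise<T> {
  const sem = limiterFor(provider);
  await sem.acquire();
  try { return await call(); } finally { sem.release(); }
}
```

Because the map is keyed by provider, 20 entities times 2 models still never exceed 5 in-flight calls against any single provider.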
The results panel shows live progress using Server-Sent Events (SSE). Each entity has a collapsible card that updates in real time:
- **Pending**: waiting to start processing
- **Processing**: currently being enriched, with expertise progress badges showing completion per domain
- **Completed**: all models finished successfully; the card auto-collapses
- **Partial**: some models or expertises failed; partial results are available
- **Failed**: all models failed for this entity; error details are shown
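The final card state can be derived from the per-model outcomes roughly as follows; the status names and the `ModelOutcome` type are assumptions for this sketch:

```typescript
type ModelOutcome = "ok" | "error";

// Map a set of per-model results onto the terminal card states described above.
function cardStatus(outcomes: ModelOutcome[]): "completed" | "partial" | "failed" {
  const failures = outcomes.filter((o) => o === "error").length;
  if (failures === 0) return "completed";          // every model succeeded
  if (failures === outcomes.length) return "failed"; // every model failed
  return "partial";                                 // mixed results
}
```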
You can cancel a running batch at any time. Cancellation is cooperative — entities already in-flight complete their current LLM call, but no new calls start. Partial results from completed entities are preserved.
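Cooperative cancellation of this shape is a natural fit for `AbortSignal`: the signal is checked between calls, never mid-call. The function name and call list below are illustrative:

```typescript
// Run an entity's LLM calls in sequence; stop starting new calls once aborted.
async function runEntity(
  calls: (() => Promise<void>)[],
  signal: AbortSignal,
): Promise<number> {
  let completed = 0;
  for (const call of calls) {
    if (signal.aborted) break; // cooperative: no new calls start after cancellation
    await call();              // an already-started call always runs to completion
    completed++;
  }
  return completed; // partial progress is preserved
}
```

Cancelling mid-batch therefore leaves the current call's result intact rather than discarding it.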
Batch processing is designed to be resilient. Individual failures do not stop the batch:
After batch completion, export results in three formats. For each entity, the fusion result is preferred if available; otherwise, the best model result is used.
- **JSON file**: download the full results as a structured JSON file with all entity data, model outputs, and fusion metadata.
- **Clipboard**: copy the JSON results directly to your clipboard for pasting into other tools or scripts.
- **Excel**: a three-sheet workbook with Results (one row per entity with flattened properties), Summary (batch metadata, models, costs), and Conflicts (per-entity conflict details with resolution reasoning).
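Flattening nested properties into one spreadsheet row per entity can be sketched as below; the dotted column naming and the `"; "` join for arrays are assumptions, not the exact export format:

```typescript
// Recursively flatten nested objects into dotted column names for one sheet row.
function flattenRow(obj: Record<string, unknown>, prefix = ""): Record<string, unknown> {
  const row: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(obj)) {
    const column = prefix ? `${prefix}.${key}` : key;
    if (value && typeof value === "object" && !Array.isArray(value)) {
      Object.assign(row, flattenRow(value as Record<string, unknown>, column));
    } else {
      row[column] = Array.isArray(value) ? value.join("; ") : value;
    }
  }
  return row;
}
```

For example, `{ "hq": { "country": "France" } }` becomes a single `hq.country` column with the value `France`.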
| Limit | Value |
|---|---|
| Max entities per batch | 100 |
| Max entity data size | 50,000 characters |
| Max prompt length | 100,000 characters |
| URL fetch timeout | 30 seconds |
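A pre-flight check against these limits might look like the following; the function name and error messages are illustrative, and only the two entity-level limits from the table are enforced here:

```typescript
// Limits taken from the table above.
const LIMITS = { maxEntities: 100, maxEntityChars: 50_000 };

// Return a list of human-readable violations; an empty list means the batch is valid.
function validateBatch(entities: object[]): string[] {
  const errors: string[] = [];
  if (entities.length > LIMITS.maxEntities)
    errors.push(`Too many entities: ${entities.length} > ${LIMITS.maxEntities}`);
  entities.forEach((entity, i) => {
    if (JSON.stringify(entity).length > LIMITS.maxEntityChars)
      errors.push(`Entity ${i} exceeds ${LIMITS.maxEntityChars} characters`);
  });
  return errors;
}
```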