Test custom prompts against any AI model with real-time response formatting, token tracking, cost metrics, and persistent history.
The Playground is a free-form prompt testing environment. Unlike the schema-driven enrichment workflow, it lets you send any system prompt and user prompt to a model and inspect the raw response. Use it to experiment with prompt engineering, test model capabilities, or run one-off queries.
The Playground uses a split-pane layout with four panels. All inputs are persisted in local storage across sessions.
**System Prompt**: Set the model's behavior and persona. This is sent as the system message and persists between executions, so you can iterate on the user prompt without re-entering context.
**User Prompt**: The main prompt sent to the model. Write your query, instruction, or test case here.
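The system and user prompts combine into a single chat request. A minimal sketch, assuming an OpenAI-style `messages` array (the actual provider payload may differ):

```typescript
// Hypothetical request shape; real provider payloads vary.
type ChatMessage = { role: "system" | "user"; content: string };

// Combine the persistent system prompt with the current user prompt.
// An empty system prompt is omitted so the request stays minimal.
function buildMessages(systemPrompt: string, userPrompt: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (systemPrompt.trim().length > 0) {
    messages.push({ role: "system", content: systemPrompt });
  }
  messages.push({ role: "user", content: userPrompt });
  return messages;
}
```

Because the system message is built fresh on every execution, editing only the user prompt reuses the same persisted context.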
**Response**: Displays the model's response with auto-detected formatting. JSON responses get syntax highlighting in the Monaco editor; plain text renders as-is. Copy to clipboard with one click.
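The JSON auto-detection can be sketched as a small heuristic (an assumed implementation, not the app's actual code): try to parse the response, and fall back to plain text on failure.

```typescript
// Decide which editor language mode to use for a model response.
// Heuristic sketch: attempt JSON.parse; fall back to plain text.
function detectResponseFormat(response: string): "json" | "plaintext" {
  const trimmed = response.trim();
  // Cheap pre-check: only objects and arrays count. A bare number like
  // "42" parses as JSON but reads better as plain text.
  if (!trimmed.startsWith("{") && !trimmed.startsWith("[")) {
    return "plaintext";
  }
  try {
    JSON.parse(trimmed);
    return "json";
  } catch {
    return "plaintext";
  }
}
```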
**History**: All executions are saved locally. Filter by model, view prompt previews, and restore any previous session to continue iterating.
The sidebar provides model and language selection:
| Option | Description |
|---|---|
| Model | Select a single AI model from any configured provider. The virtualized dropdown shows pricing and provider info. |
| Language | Choose from 40 supported languages. Affects the language instruction in the prompt sent to the model. |
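One plausible way the language selection could affect the prompt is by appending an explicit instruction to the system message. The helper below is a hypothetical sketch, not the app's actual logic:

```typescript
// Hypothetical helper: append a response-language instruction to the
// system prompt. Assumes English is the default and needs no instruction.
function withLanguageInstruction(systemPrompt: string, language: string): string {
  if (language === "English") return systemPrompt;
  const instruction = `Respond in ${language}.`;
  return systemPrompt ? `${systemPrompt}\n\n${instruction}` : instruction;
}
```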
After each execution, the response panel displays detailed metrics, including token usage and estimated cost.
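A cost metric of this kind is typically derived from token counts and per-token pricing. A minimal sketch, with illustrative placeholder rates rather than any provider's actual pricing:

```typescript
// Token-based cost estimate. Rates here are illustrative placeholders;
// providers commonly quote prices in USD per million tokens.
interface Usage { promptTokens: number; completionTokens: number }
interface Pricing { inputPerMTok: number; outputPerMTok: number } // USD / 1M tokens

function estimateCostUSD(usage: Usage, pricing: Pricing): number {
  const input = (usage.promptTokens / 1_000_000) * pricing.inputPerMTok;
  const output = (usage.completionTokens / 1_000_000) * pricing.outputPerMTok;
  return input + output;
}
```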
Every prompt execution is saved to local storage automatically, and the history panel provides tools to review and reuse past sessions.
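Local-storage persistence of this kind can be sketched as follows. The `PlaygroundRun` shape and storage key are assumptions, not the app's actual schema; the storage backend is injected so the code also runs outside the browser, where a `Map` stands in for `localStorage`:

```typescript
// Assumed history record; the real app's schema may differ.
interface PlaygroundRun {
  id: string;
  model: string;
  systemPrompt: string;
  userPrompt: string;
  response: string;
  timestamp: number;
}

// Minimal subset of the Web Storage interface, so localStorage or any
// compatible store can be passed in.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const HISTORY_KEY = "playground-history"; // hypothetical key

function loadHistory(store: KVStore): PlaygroundRun[] {
  const raw = store.getItem(HISTORY_KEY);
  return raw ? (JSON.parse(raw) as PlaygroundRun[]) : [];
}

function saveRun(store: KVStore, run: PlaygroundRun): void {
  // Newest first, so the panel can render history top-down without sorting.
  const history = [run, ...loadHistory(store)];
  store.setItem(HISTORY_KEY, JSON.stringify(history));
}
```

In the browser, `saveRun(localStorage, run)` would persist across sessions, which matches the restore-any-previous-session behavior described above.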
- **Prompt engineering**: Iterate on system prompts and instructions to refine model behavior before building them into enrichment schemas.
- **Model comparison**: Run the same prompt against different models to compare output quality, speed, and cost before selecting models for enrichment.
- **One-off queries**: Run ad-hoc knowledge extraction queries without setting up a full schema and enrichment pipeline.