Playground - Entity Enricher Documentation

Playground

Test custom prompts against any AI model with real-time response formatting, token tracking, cost metrics, and persistent history.

Overview

The Playground is a free-form prompt testing environment. Unlike the schema-driven enrichment workflow, it lets you send any system prompt and user prompt to a model and inspect the raw response. Use it to experiment with prompt engineering, test model capabilities, or run one-off queries.

Models: Any · Languages: 40 · History: Persistent · Cost Tracking: Per call

Interface Layout

The Playground uses a split-pane layout with four panels. All inputs are persisted in local storage across sessions.

System Prompt

Set the model's behavior and persona. This is sent as the system message and persists between executions so you can iterate on the user prompt without re-entering context.

User Prompt

The main prompt sent to the model. This is where you write your query, instruction, or test case.
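Under the hood, the two panels map onto the two message roles of a chat-style model request. A minimal sketch of that assembly, assuming a generic chat API; the `ChatMessage` shape and `buildMessages` helper are illustrative, not the app's actual internals:

```typescript
type Role = "system" | "user";

interface ChatMessage {
  role: Role;
  content: string;
}

// Hypothetical helper: turn the two Playground panels into a chat payload.
// An empty system prompt is omitted, so only the user message is sent.
function buildMessages(systemPrompt: string, userPrompt: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (systemPrompt.trim().length > 0) {
    messages.push({ role: "system", content: systemPrompt });
  }
  messages.push({ role: "user", content: userPrompt });
  return messages;
}
```

Because the system prompt persists between executions, only the user message changes on each iteration.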

Response

Displays the model's response with auto-detected formatting. JSON responses get syntax highlighting in the Monaco editor; plain text renders as-is. Copy to clipboard with one click.
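The auto-detection described above can be approximated with a parse-then-pretty-print pass. A sketch of that idea, assuming the simplest possible heuristic (valid JSON → highlighted editor view, anything else → plain text); `formatResponse` is a hypothetical name, not the app's actual function:

```typescript
// Try to parse the raw response as JSON. If it parses, pretty-print it
// for a syntax-highlighted editor; otherwise return the text unchanged.
function formatResponse(raw: string): { isJson: boolean; text: string } {
  try {
    const parsed = JSON.parse(raw);
    return { isJson: true, text: JSON.stringify(parsed, null, 2) };
  } catch {
    return { isJson: false, text: raw };
  }
}
```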

History

All executions are saved locally. Filter by model, view prompt previews, and restore any previous session to continue iterating.

Configuration

The sidebar provides model and language selection:

Model: Select a single AI model from any configured provider. The virtualized dropdown shows pricing and provider info.
Language: Choose from 40 supported languages. Affects the language instruction in the prompt sent to the model.

Execution & Metrics

After each execution, the response panel displays detailed metrics:

Processing time: Total round-trip time in milliseconds, including network latency and model inference.
Input tokens: Number of tokens in your system and user prompts, as counted by the model.
Output tokens: Number of tokens in the model's response.
Cost: Estimated cost in USD based on the model's per-token pricing.
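The cost metric follows directly from the token counts and the model's pricing. A sketch of that arithmetic, assuming pricing is quoted in USD per million tokens (a common convention, but the shape of `ModelPricing` here is an assumption, not the app's data model):

```typescript
interface ModelPricing {
  inputPerMillion: number;  // USD per 1M input tokens (assumed pricing unit)
  outputPerMillion: number; // USD per 1M output tokens
}

// Estimated cost = input tokens at the input rate + output tokens at the output rate.
function estimateCost(inputTokens: number, outputTokens: number, p: ModelPricing): number {
  return (
    (inputTokens / 1_000_000) * p.inputPerMillion +
    (outputTokens / 1_000_000) * p.outputPerMillion
  );
}
```

For example, 1,000 input tokens and 500 output tokens at $3 / $15 per million works out to $0.003 + $0.0075 = $0.0105.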

Execution History

Every prompt execution is saved to local storage automatically. The history panel provides tools to review and reuse past sessions:

Model filter: Filter history entries by the model used. Quickly find results from a specific provider.
Prompt preview: Each entry shows the first 50 characters of the user prompt, the timestamp, and the model name.
Success indicator: Visual badge showing whether the execution succeeded or failed.
Restore session: Click any history entry to restore the system prompt, user prompt, model, and language selection.
Clear history: Remove all saved entries with one click.
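A local-storage-backed history like this typically serializes an array of entries under a single key. A sketch under those assumptions; the `HistoryEntry` fields mirror what the panel displays, but the key name and shapes are illustrative, and the storage interface is abstracted so the example also runs outside a browser (in the app it would be `window.localStorage`):

```typescript
interface HistoryEntry {
  timestamp: number;
  model: string;
  systemPrompt: string;
  userPrompt: string;
  success: boolean;
}

// Minimal subset of the Web Storage API, so a Map-backed stub works in tests.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const HISTORY_KEY = "playground-history"; // assumed key name

// Prepend the new entry so the panel lists newest executions first.
function saveEntry(store: KVStore, entry: HistoryEntry): void {
  const entries: HistoryEntry[] = JSON.parse(store.getItem(HISTORY_KEY) ?? "[]");
  entries.unshift(entry);
  store.setItem(HISTORY_KEY, JSON.stringify(entries));
}

// Back the "Model filter" control: return only entries for one model.
function filterByModel(store: KVStore, model: string): HistoryEntry[] {
  const entries: HistoryEntry[] = JSON.parse(store.getItem(HISTORY_KEY) ?? "[]");
  return entries.filter((e) => e.model === model);
}
```

Restoring a session is then just reading one entry back and writing its fields into the prompt panels and sidebar selections.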

Common Use Cases

Prompt Engineering

Iterate on system prompts and instructions to refine model behavior before building them into enrichment schemas.

Model Comparison

Run the same prompt against different models to compare output quality, speed, and cost before selecting models for enrichment.

Quick Queries

Run one-off knowledge extraction queries without needing to set up a full schema and enrichment pipeline.

Next Steps