Real-time cost analytics with time-series charts, per-model breakdowns, performance metrics, and configurable date presets.
Overview
The Cost Dashboard gives you full visibility into your LLM spending and performance. It aggregates data from all enrichment records in your organization and presents it through interactive charts and summary cards. Use it to identify cost trends, compare model efficiency, and optimize your enrichment pipeline.
| Tabs | Date Presets | Chart Types | Cross-Org |
|------|--------------|-------------|-----------|
| 2 | 4 | 5 | Owner+ |
Date Presets
Select a time range from the sidebar. The URL updates to reflect the selected preset (e.g., `/costs/30d`), enabling bookmarkable views:
| Preset | Time Range | Chart Grouping |
|--------|--------------|----------------|
| `7d` | Last 7 days | Daily |
| `30d` | Last 30 days | Daily |
| `90d` | Last 90 days | Weekly |
| `all` | All time | Monthly |
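The preset-to-range mapping can be sketched as follows; the names, shapes, and URL-parsing helper are illustrative assumptions, not the dashboard's actual implementation:

```typescript
// Hypothetical preset table mirroring the ranges and groupings above.
type Grouping = "daily" | "weekly" | "monthly";

interface PresetConfig {
  days: number | null; // null means "all time"
  grouping: Grouping;
}

const PRESETS: Record<string, PresetConfig> = {
  "7d":  { days: 7,    grouping: "daily" },
  "30d": { days: 30,   grouping: "daily" },
  "90d": { days: 90,   grouping: "weekly" },
  all:   { days: null, grouping: "monthly" },
};

// Resolve a preset from a URL path such as "/costs/30d".
function presetFromPath(path: string): PresetConfig | undefined {
  const segment = path.split("/").pop() ?? "";
  return PRESETS[segment];
}
```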
Owners and admins can toggle All organizations in the sidebar to view aggregated costs across the entire platform. This preference is persisted in local storage.
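A minimal sketch of persisting that toggle; the storage key is a hypothetical name, and a `Storage`-like interface is injected so the logic can run outside a browser:

```typescript
// Subset of the browser Storage interface, injected for testability.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Hypothetical key; the real dashboard may use a different name.
const ALL_ORGS_KEY = "costDashboard.allOrgs";

function saveAllOrgs(store: KVStore, enabled: boolean): void {
  store.setItem(ALL_ORGS_KEY, JSON.stringify(enabled));
}

function loadAllOrgs(store: KVStore): boolean {
  const raw = store.getItem(ALL_ORGS_KEY);
  // Default to the per-organization view when no preference is saved.
  return raw !== null ? JSON.parse(raw) === true : false;
}
```

In the browser, `localStorage` satisfies this interface directly.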
Cost Overview Tab
The default tab provides a comprehensive spending breakdown:
Summary Cards
| Card | Example Value | Meaning |
|------|---------------|---------|
| Total Cost | $12.47 | Sum of all LLM costs in the period |
| Total Requests | 342 | Number of LLM calls made |
| Avg Cost/Request | $0.036 | Mean cost per individual call |
| Most Used Model | `claude-sonnet` | Model with the highest request count |
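The cards above reduce to simple aggregates over the period's LLM calls. A sketch under an assumed record shape (the real data model may differ):

```typescript
// Assumed minimal shape of one LLM call record.
interface LlmCall {
  model: string;
  costUsd: number;
}

function summarize(calls: LlmCall[]) {
  const totalCost = calls.reduce((sum, c) => sum + c.costUsd, 0);
  const totalRequests = calls.length;
  // Avg Cost/Request is the mean cost per individual call.
  const avgCost = totalRequests > 0 ? totalCost / totalRequests : 0;

  // Most Used Model = model with the highest request count.
  const counts = new Map<string, number>();
  for (const c of calls) counts.set(c.model, (counts.get(c.model) ?? 0) + 1);
  let mostUsed = "";
  let best = -1;
  for (const [model, n] of counts) {
    if (n > best) {
      best = n;
      mostUsed = model;
    }
  }

  return { totalCost, totalRequests, avgCost, mostUsed };
}
```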
Charts & Tables
- **Cost Over Time**: Line chart showing spending trends across the selected period, grouped by day/week/month.
- **Cost by Model**: Horizontal bar chart of the top 10 models by total cost. Quickly identify the most expensive models.
- **Token Usage**: Breakdown of input tokens, output tokens, and total tokens consumed.
- **Model Insights**: Cards highlighting the most used model and the most expensive model.
- **Daily Breakdown**: Table with date, request count, total cost, average cost per request, and token totals.
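The day/week/month grouping behind Cost Over Time amounts to assigning each record a time bucket. One possible implementation (the snap-to-Monday rule for weeks is an assumption):

```typescript
type ChartGrouping = "daily" | "weekly" | "monthly";

// Bucket key for a timestamp under the selected grouping (UTC-based).
function bucketKey(date: Date, grouping: ChartGrouping): string {
  if (grouping === "monthly") {
    return date.toISOString().slice(0, 7); // "YYYY-MM"
  }
  if (grouping === "weekly") {
    // Snap to the Monday that starts the week.
    const d = new Date(date);
    const shift = (d.getUTCDay() + 6) % 7; // Mon=0 ... Sun=6
    d.setUTCDate(d.getUTCDate() - shift);
    return d.toISOString().slice(0, 10);
  }
  return date.toISOString().slice(0, 10); // "YYYY-MM-DD"
}
```

Records sharing a bucket key are then summed to produce one point on the line chart.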
Performance Analysis Tab
Switch to the Performance tab to analyze model efficiency and identify cost-performance trade-offs:
Summary Cards
| Card | Example Value | Meaning |
|------|---------------|---------|
| Total Records | 156 | Enrichment records in the period |
| Models Used | 8 | Distinct models that produced results |
| Language Variants | 3 | Languages used in multilingual enrichment |
| Token Ranges | 4 | Distinct input token size buckets |
Charts & Tables
- **Cost vs Duration**: Scatter chart with bubble sizes proportional to request count. Each bubble represents a model, so you can spot the best balance of speed and cost at a glance.
- **Performance by Model**: Table comparing request count, average cost, average duration, and token statistics per model.
- **Cost by Language Count**: Bar chart showing how cost scales with the number of languages selected for multilingual enrichment.
- **Cost by Input Token Range**: Bar chart breaking down costs by input prompt size buckets (e.g., 0–1K, 1K–5K, 5K–10K tokens).
- **Performance by Schema Property Count**: Table showing how enrichment cost and duration correlate with schema complexity (enrichment records only).
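The input token ranges are simple size buckets. A sketch with assumed boundaries (the actual buckets may differ):

```typescript
// Hypothetical bucket boundaries matching the example ranges above.
const TOKEN_BUCKETS: Array<{ max: number; label: string }> = [
  { max: 1_000, label: "0–1K" },
  { max: 5_000, label: "1K–5K" },
  { max: 10_000, label: "5K–10K" },
  { max: Infinity, label: "10K+" },
];

// Map an input token count to its range label.
function tokenRange(inputTokens: number): string {
  for (const b of TOKEN_BUCKETS) {
    if (inputTokens < b.max) return b.label;
  }
  return TOKEN_BUCKETS[TOKEN_BUCKETS.length - 1].label;
}
```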
Optimization Tips
- **Compare Models**: Use the Cost vs Duration scatter chart to find models that deliver good quality at lower cost. Smaller, faster models often suffice for simple schemas.
- **Monitor Trends**: Check the Cost Over Time chart weekly. Sudden spikes may indicate misconfigured batch jobs or unexpected retry loops.
- **Right-Size Schemas**: The Performance by Schema Property Count table shows how cost scales with schema size. Remove unnecessary properties to reduce per-enrichment cost.
- **Use Caching Models**: Models that support prompt caching (such as Anthropic's Claude models) reduce costs for repeated enrichments with the same schema; the Token Usage cards show cached token savings.