Dashboard

The Analytics Dashboard provides real-time visibility into your AI agent performance, costs, and usage patterns. It organizes metrics into six sections: Overview, Cost & Usage, Latency, Tool Usage, Agent/Workflow Usage, and Prompt Analytics.

Time Range Selection

Select a time range from the dropdown:

  • 1 hour, 3 hours, 6 hours, 9 hours, 12 hours
  • 1 day, 2 days, 3 days, 7 days, 14 days, 30 days
  • Custom date range

Free plan users can access up to 7 days of data. 30-day analytics requires a Pro plan.

Overview Section

Trace Metrics Card

Displays aggregate statistics for all traces in the selected period:

| Metric | Description |
| --- | --- |
| Total Traces | Number of agent/workflow executions |
| Success Rate | Percentage of traces that completed without errors |
| Error Count | Number of failed traces |
| Avg Duration | Average execution time across all traces |
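
These four figures can be derived directly from raw trace records. A minimal sketch, illustrative only (the `Trace` type and its fields are assumptions, not this product's data model):

```python
from dataclasses import dataclass

@dataclass
class Trace:
    status: str         # "success" or "error" (assumed values)
    duration_ms: float  # end-to-end execution time

def trace_metrics(traces: list[Trace]) -> dict:
    """Aggregate the card's four metrics from raw trace records."""
    total = len(traces)
    errors = sum(1 for t in traces if t.status == "error")
    return {
        "total_traces": total,
        "success_rate": 100 * (total - errors) / total if total else 0.0,
        "error_count": errors,
        "avg_duration_ms": sum(t.duration_ms for t in traces) / total if total else 0.0,
    }
```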

LLM Call Metrics Card

Summarizes LLM API call statistics:

| Metric | Description |
| --- | --- |
| Total Calls | Number of LLM API requests |
| Success Rate | Percentage of successful LLM calls |
| Failed Calls | Number of LLM calls that returned errors |
| Avg Latency | Average response time for LLM calls |

Trace Counts Chart

A time-series chart showing trace volume over time. Data points are color-coded by status:

  • Green: Successful traces
  • Red: Failed traces
  • Yellow: In-progress traces

Click on a data point to filter traces by that time period.

LLM Call Rates Chart

Shows LLM call volume over time, broken down by:

  • Success vs failure
  • Model provider (OpenAI, Anthropic, etc.)

Cost & Usage Section

LLM Cost Metrics Card

Displays cost breakdown for LLM usage:

| Metric | Description |
| --- | --- |
| Total Cost | Sum of all LLM API costs |
| Cost by Model | Breakdown by model (GPT-4, Claude, etc.) |
| Cost Trend | Comparison with previous period |
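
As background on how such cost figures are typically derived: per-call cost is prompt tokens times the model's input rate plus completion tokens times its output rate. A sketch with hypothetical per-million-token prices (real rates vary by provider and model; check current pricing):

```python
# Hypothetical USD prices per 1M tokens -- NOT real provider rates.
PRICES = {
    "gpt-4": {"prompt": 30.0, "completion": 60.0},
    "claude": {"prompt": 15.0, "completion": 75.0},
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Cost of one LLM call: input and output tokens billed at separate rates."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1_000_000
```

Summing `call_cost` over all calls in the period yields Total Cost; grouping the sum by `model` yields the Cost by Model breakdown.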

User Cost Metrics Card

Shows cost attribution by user:

| Metric | Description |
| --- | --- |
| Top Users | Users with highest LLM costs |
| Cost per User | Average cost per active user |
| User Count | Number of unique users |

Trace Cost Chart

A time-series chart of cost per trace execution. It helps identify:

  • Cost spikes
  • Expensive workflows
  • Cost trends over time

Token Usage Chart

Displays token consumption over time:

  • Prompt tokens: Input tokens sent to LLM
  • Completion tokens: Output tokens received from LLM
  • Total tokens: Combined usage
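
The three series relate simply: total tokens = prompt tokens + completion tokens per time bucket. A sketch of how hourly buckets for such a chart could be aggregated (illustrative only; the record shape is an assumption):

```python
from collections import defaultdict
from datetime import datetime

def token_series(calls: list[tuple[datetime, int, int]]) -> dict:
    """Bucket (timestamp, prompt_tokens, completion_tokens) records by hour."""
    buckets = defaultdict(lambda: {"prompt": 0, "completion": 0, "total": 0})
    for ts, prompt, completion in calls:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        b = buckets[hour]
        b["prompt"] += prompt
        b["completion"] += completion
        b["total"] += prompt + completion  # total is always the sum of the two
    return dict(buckets)
```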

Latency Section

Trace Latency Chart

Shows end-to-end execution time for traces:

  • P50 (median)
  • P90 (90th percentile)
  • P99 (99th percentile)
  • Average
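
As an illustration of why the chart reports percentiles alongside the average, here is a minimal nearest-rank percentile sketch (not the dashboard's actual estimator, which may interpolate differently):

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: smallest value covering at least p% of samples."""
    s = sorted(samples)
    k = max(0, math.ceil(p / 100 * len(s)) - 1)
    return s[k]

# One slow outlier dominates P99 while barely moving the median.
latencies = [88, 95, 99, 120, 130, 140, 150, 170, 310, 2050]
p50 = percentile(latencies, 50)   # 130
p90 = percentile(latencies, 90)   # 310
p99 = percentile(latencies, 99)   # 2050
```

With this sample the average is 335.2, well above the median (130) but far below P99 (2050): the average hides tail latency that P99 exposes, which is why both are charted.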

LLM Latency Chart

Displays LLM response times:

  • By model
  • By provider
  • Percentile distribution

Tool Usage Section

Tool Analytics Charts

Visualizes tool execution patterns:

| Chart | Description |
| --- | --- |
| Tool Call Distribution | Pie chart of tool usage by name |
| Tool Latency | Average execution time per tool |
| Tool Success Rate | Success/failure ratio per tool |
| Tool Usage Over Time | Time-series of tool calls |
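
As an illustration of what a per-tool success-rate chart aggregates, a minimal sketch over (tool name, succeeded) records (illustrative only, not the dashboard's internal implementation):

```python
from collections import defaultdict

def tool_success_rates(calls: list[tuple[str, bool]]) -> dict[str, float]:
    """Map each tool name to its success percentage."""
    ok = defaultdict(int)
    total = defaultdict(int)
    for name, succeeded in calls:
        total[name] += 1
        if succeeded:
            ok[name] += 1
    return {name: 100 * ok[name] / total[name] for name in total}
```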

Agent/Workflow Usage Section

Agent Analytics Charts

Shows agent and workflow execution metrics:

| Chart | Description |
| --- | --- |
| Agent Distribution | Breakdown of runs by agent name |
| Agent Success Rate | Success/failure ratio per agent |
| Agent Latency | Average execution time per agent |
| Workflow Steps | Average steps per workflow execution |

Prompt Analytics Section

Summary Cards

| Card | Description |
| --- | --- |
| Total Prompts | Number of active prompts in the project |
| Total Usage | All-time prompt execution count |
| Success Rate | Average success rate across all prompts |
| Total Cost | Cumulative cost for prompt executions |

Top Prompts

Lists the most frequently used prompts with:

  • Prompt name and type (chat/completion)
  • Current version
  • Usage count

Click a prompt to navigate to its detail page.

Data Limitations

Free plan users see the first 100 records of the current month. A "Limited view" badge appears when data is truncated. Upgrade to Pro for full historical data access.
