LLM Usage & Costs

VoltOps automatically tracks and displays LLM usage statistics, including prompt tokens, completion tokens, and total cost, across all your AI interactions. Monitor your spending, optimize token usage, and analyze cost patterns in real time.

Automatic Pricing Calculation

When you include the model name in your metadata under modelParameters, VoltOps automatically calculates pricing for all supported LLM providers. This gives you instant cost visibility without manual configuration.

Automatic model detection

  • VoltAgent Framework: Automatically captures model information and calculates costs
  • Vercel AI SDK: Built-in integration provides seamless cost tracking
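For the Vercel AI SDK, cost data flows through the SDK's built-in telemetry. A minimal sketch in TypeScript, assuming the VoltOps exporter is already registered in your OpenTelemetry setup (the prompt and metadata values are illustrative):

import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

// Enabling telemetry on the call lets the registered exporter
// capture the model name and token usage for cost calculation.
const { text, usage } = await generateText({
  model: openai("gpt-4"),
  prompt: "Summarize this support ticket for the on-call engineer.",
  experimental_telemetry: {
    isEnabled: true,
    metadata: { agentId: "support-agent" },
  },
});

console.log(text, usage); // usage carries the token counts, e.g. prompt/completion tokens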

Manual model specification

  • JavaScript/TypeScript SDK: Include the model name alongside your usage information (a fuller end-to-end sketch follows this list):

const agent = await trace.addAgent({
  name: "Support Agent",
  metadata: {
    modelParameters: {
      model: "gpt-4",
    },
    role: "customer-support",
    department: "customer-success",
  },
});
  • Python SDK: Include the model name alongside your usage information:

agent = await trace.add_agent(
    name="Support Agent",
    metadata={
        "modelParameters": {
            "model": "gpt-4",
        },
        "role": "customer-support",
        "department": "customer-success",
    },
)
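Once the agent is registered with its model name, report token counts when the step completes; VoltOps combines those counts with the model's pricing to produce the cost shown in the dashboard. A minimal end-to-end sketch in TypeScript, continuing the example above and assuming the SDK's success() call accepts a usage object (the token counts are illustrative):

const agent = await trace.addAgent({
  name: "Support Agent",
  metadata: {
    modelParameters: { model: "gpt-4" },
  },
});

// Assumed shape: success() takes the agent's output plus the token
// usage for the step. VoltOps pairs these counts with "gpt-4"
// pricing to compute the displayed cost.
await agent.success({
  output: { response: "Your ticket has been resolved." },
  usage: {
    promptTokens: 150,
    completionTokens: 85,
    totalTokens: 235,
  },
});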

Usage Statistics Display

VoltOps provides detailed token usage breakdowns in your dashboard.

This gives you instant visibility into:

  • Prompt tokens: Tokens in the input text sent to the LLM
  • Completion tokens: Tokens in the response generated by the LLM
  • Total tokens: Combined usage for accurate cost calculation
  • Cost breakdown: Real-time pricing based on your model usage
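
Under the hood the cost arithmetic is straightforward: prompt and completion tokens are priced separately, then summed. A sketch of the calculation in TypeScript, with illustrative per-1K-token rates rather than any provider's actual pricing:

// Illustrative per-1K-token rates; real rates come from the
// provider's pricing table for the detected model.
const PROMPT_RATE = 0.03 / 1000;     // $ per prompt token
const COMPLETION_RATE = 0.06 / 1000; // $ per completion token

function estimateCost(promptTokens: number, completionTokens: number): number {
  return promptTokens * PROMPT_RATE + completionTokens * COMPLETION_RATE;
}

// 150 prompt + 85 completion tokens => $0.0045 + $0.0051 = $0.0096
console.log(estimateCost(150, 85).toFixed(4));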
