Models

Explore 80+ providers and 2193+ models using VoltAgent's built-in model registry. Use provider/model strings for fast routing, or pass an ai-sdk LanguageModel when you need provider-specific control.

The registry is generated from models.dev and bundled with VoltAgent. At runtime, VoltAgent checks the environment variables required by your chosen provider and reports exactly which one is missing.

Highlights

  • Zero-import model strings - Use provider/model IDs without adding provider packages.
  • Registry-backed env mapping - VoltAgent knows which env vars each provider expects.
  • Type-aware model IDs - ModelRouterModelId adds autocomplete and validation.
  • Runtime routing - Pick models dynamically per request or tenant.
  • Bring your own LanguageModel - Drop in ai-sdk providers for advanced options.

Quick start (model strings)

import { Agent } from "@voltagent/core";

const agent = new Agent({
  name: "openai-summary",
  instructions: "Summarize the update in 2 bullets.",
  model: "openai/gpt-4.1-mini",
});
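
With the provider's API key set (here, OPENAI_API_KEY), you can call the agent directly. A minimal sketch: the prompt is illustrative, and reading result.text assumes the ai-sdk-style result shape.

// Assumes OPENAI_API_KEY is set so the registry can resolve the provider.
const result = await agent.generateText(
  "We added a model registry and fixed retry backoff in this release."
);

console.log(result.text);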

Provider directory

Browse provider pages in the left navigation or visit the Providers directory for the full list.

For full model inventories per provider, see each provider page or explore models.dev.

Type-safe model IDs

Use ModelRouterModelId to get IDE autocomplete for model strings:

import type { ModelRouterModelId } from "@voltagent/core";

const modelId: ModelRouterModelId = "openai/gpt-4.1-mini";
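
A small sketch of how the type pays off in practice: a hypothetical buildAgent helper that only accepts registry model ids, so call sites get autocomplete and invalid strings fail at compile time.

import { Agent } from "@voltagent/core";
import type { ModelRouterModelId } from "@voltagent/core";

// Hypothetical helper: restricts callers to valid provider/model strings.
function buildAgent(model: ModelRouterModelId) {
  return new Agent({
    name: `agent-${model.replace("/", "-")}`,
    instructions: "Answer concisely.",
    model,
  });
}

const fastAgent = buildAgent("openai/gpt-4.1-mini");
// buildAgent("openai/not-a-model"); // type error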

Split workloads across models

Assign cheaper models to throughput-heavy steps and stronger models to critical analysis:

import { Agent } from "@voltagent/core";

const ingestAgent = new Agent({
  name: "ingest-agent",
  instructions: "Extract entities and key facts from raw notes.",
  model: "google/gemini-2.0-flash",
});

const reviewAgent = new Agent({
  name: "review-agent",
  instructions: "Review the summary and flag risks or gaps.",
  model: "anthropic/claude-3-5-sonnet",
});
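
A minimal sketch of chaining the two agents. The pipeline shape and the assumption that results expose a text field are illustrative, not a prescribed VoltAgent pattern.

const rawNotes = "Latency spiked after the cache change; rollback planned for Friday.";

// Cheap, fast model handles the high-volume extraction step.
const extraction = await ingestAgent.generateText(rawNotes);

// Stronger model reviews the extracted facts for risks and gaps.
const review = await reviewAgent.generateText(extraction.text);

console.log(review.text);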

Runtime model selection

Pick a model based on request context:

const agent = new Agent({
  name: "runtime-router",
  model: ({ context }) => {
    const tier = (context.get("tier") as string) || "fast";
    return tier === "fast" ? "openai/gpt-4.1-mini" : "anthropic/claude-3-5-sonnet";
  },
});
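
A hedged usage sketch: the call below assumes generateText accepts a per-request context map (the same context the model function reads); check the Agent API reference for the exact option shape.

// Assumption: context is passed per call as a Map-like option.
const reply = await agent.generateText("Summarize today's incidents.", {
  context: new Map([["tier", "premium"]]),
});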

Provider options

Pass provider-specific options per request when you need them:

const analyst = new Agent({
  name: "analyst",
  instructions: "Explain tradeoffs clearly and concisely.",
  model: "openai/o3-mini",
});

const response = await analyst.generateText("Compare JWTs vs cookies for auth.", {
  providerOptions: {
    openai: { reasoningEffort: "high" },
  },
});

Custom headers

If you need custom headers, pass an ai-sdk LanguageModel directly:

import { Agent } from "@voltagent/core";
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

const customProvider = createOpenAICompatible({
  name: "openai",
  baseURL: "https://api.openai.com/v1",
  apiKey: process.env.OPENAI_API_KEY,
  headers: {
    "X-Client-Source": "voltagent-docs",
  },
});

const agent = new Agent({
  name: "custom-agent",
  model: customProvider("gpt-4o-mini"),
});

Note

VoltAgent does not include fallback chains yet. Implement retries or failover in your app if needed.
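
A minimal failover sketch under that constraint; the two agents and the try/catch policy are illustrative app code, not a VoltAgent API.

import { Agent } from "@voltagent/core";

const primaryAgent = new Agent({ name: "primary", model: "openai/gpt-4.1-mini" });
const fallbackAgent = new Agent({ name: "fallback", model: "anthropic/claude-3-5-sonnet" });

// Illustrative failover: try the primary model, fall back on any error.
async function generateWithFallback(prompt: string) {
  try {
    return await primaryAgent.generateText(prompt);
  } catch (error) {
    console.warn("Primary model failed, retrying with fallback:", error);
    return await fallbackAgent.generateText(prompt);
  }
}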

Use ai-sdk directly

You can use ai-sdk provider modules anywhere VoltAgent expects a LanguageModel:

import { mistral } from "@ai-sdk/mistral";
import { Agent } from "@voltagent/core";

const agent = new Agent({
  name: "mistral-agent",
  model: mistral("mistral-small-latest"),
});
