# Llama

Use `llama/<model>` with VoltAgent's model router.
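To make the `provider/model` convention concrete, here is a minimal sketch of how such a prefixed model string could be split into a provider key and a model id. The `splitModelId` helper is hypothetical and illustrative only, not VoltAgent's actual router implementation:

```typescript
// Hypothetical sketch (not VoltAgent's actual code): split a
// "provider/model" string such as "llama/<model>" into its two parts.
function splitModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) {
    throw new Error(`expected "provider/model", got "${id}"`);
  }
  // Everything before the first "/" selects the provider; the rest is
  // passed through unchanged as the provider-specific model id.
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(splitModelId("llama/llama-3.3-70b-instruct"));
```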
## Quick start
```typescript
import { Agent } from "@voltagent/core";

const agent = new Agent({
  name: "llama-agent",
  instructions: "You are a helpful assistant",
  model: "llama/cerebras-llama-4-maverick-17b-128e-instruct",
});
```
## Environment variables

- `LLAMA_API_KEY`
## Provider package

`@ai-sdk/openai-compatible`

This provider uses the OpenAI-compatible adapter.
## Default base URL

`https://api.llama.com/compat/v1/`

You can override the base URL by setting the `LLAMA_BASE_URL` environment variable.
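A minimal sketch of that override logic. The `resolveBaseUrl` helper below is hypothetical, not part of VoltAgent's API; it simply shows the "env var wins, otherwise default" behavior described above:

```typescript
// Documented default base URL for the Llama OpenAI-compatible endpoint.
const DEFAULT_LLAMA_BASE_URL = "https://api.llama.com/compat/v1/";

// Hypothetical helper, for illustration only: prefer LLAMA_BASE_URL from
// the environment when set, otherwise fall back to the default.
function resolveBaseUrl(env: { LLAMA_BASE_URL?: string }): string {
  return env.LLAMA_BASE_URL ?? DEFAULT_LLAMA_BASE_URL;
}

console.log(resolveBaseUrl({}));
console.log(resolveBaseUrl({ LLAMA_BASE_URL: "https://example.com/v1/" }));
```

Passing the environment object in explicitly (rather than reading `process.env` inside the function) keeps the sketch portable and easy to test.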
## Models
- `cerebras-llama-4-maverick-17b-128e-instruct`
- `cerebras-llama-4-scout-17b-16e-instruct`
- `groq-llama-4-maverick-17b-128e-instruct`
- `llama-3.3-70b-instruct`
- `llama-3.3-8b-instruct`
- `llama-4-maverick-17b-128e-instruct-fp8`
- `llama-4-scout-17b-16e-instruct-fp8`