# Prompt Management
## Overview
VoltAgent provides a three-tier prompt management system that scales from simple prototypes to enterprise-grade production deployments. Choose the approach that best fits your current needs and easily migrate as your requirements grow.
## The Three Approaches
| Approach | Best For | Setup Time | Team Size | Flexibility |
| --- | --- | --- | --- | --- |
| Static Instructions | Prototypes, simple tools | 0 minutes | 1-2 developers | Low |
| Dynamic Instructions | Context-aware apps | 5 minutes | 2-5 developers | High |
| VoltOps Management | Production teams | 15 minutes | 3+ team members | Enterprise |
## 1. Basic Static Instructions
Basic agent setup:

```ts
import { Agent } from "@voltagent/core";
import { VercelAIProvider } from "@voltagent/vercel-ai";
import { openai } from "@ai-sdk/openai";

const supportAgent = new Agent({
  name: "SupportAgent",
  llm: new VercelAIProvider(),
  model: openai("gpt-4o-mini"),
  instructions:
    "You are a customer support agent. Help users with their questions politely and efficiently.",
});
```
### What it is
Simple string-based instructions that remain constant throughout your agent's lifecycle. This is the most straightforward approach where you hardcode your agent's behavior directly in your application code.
Real-world example: A documentation chatbot that always behaves the same way regardless of user, time, or context.
### When to use
✅ **Perfect for:**
- MVP/Prototyping: Getting your agent working quickly without infrastructure overhead
- Simple task-specific agents: Email summarizers, code formatters, static content generators
- Educational projects: Learning VoltAgent basics without complexity
- Single-purpose tools: Agents that perform one specific task consistently
❌ **Avoid when:**
- You need different behavior for different users
- Your agent needs to adapt based on time, location, or context
- Multiple team members need to edit prompts
- You're planning to A/B test different approaches
- Your application serves multiple customer segments
### Pros & Cons
| Pros | Cons |
| --- | --- |
| Simple and straightforward | No runtime flexibility |
| No external dependencies | No version control |
| Perfect for getting started | Hard to update in production |
| Immediate deployment | No analytics or monitoring |
## 2. Dynamic Instructions with UserContext
Function-based instructions with `userTier`:

```ts
const supportAgent = new Agent({
  name: "SupportAgent",
  llm: new VercelAIProvider(),
  model: openai("gpt-4o-mini"),
  instructions: async ({ userContext }) => {
    const userTier = userContext.get("userTier") || "basic";
    if (userTier === "premium") {
      return "You are a premium customer support agent. Provide detailed explanations, offer multiple solutions, and prioritize this customer's requests. Be thorough and professional.";
    } else {
      return "You are a customer support agent. Provide helpful but concise answers to user questions. Be friendly and efficient.";
    }
  },
});
```
Using the agent with `userContext`:

```ts
// Premium user gets different treatment
const premiumContext = new Map();
premiumContext.set("userTier", "premium");

const premiumResponse = await supportAgent.generateText("I need help with my order", {
  userContext: premiumContext,
});

// Basic user gets standard support
const basicContext = new Map();
basicContext.set("userTier", "basic");

const basicResponse = await supportAgent.generateText("I need help with my order", {
  userContext: basicContext,
});
```
### What it is
Function-based instructions that generate prompts dynamically based on runtime context, user data, and application state. Your agent's behavior adapts in real-time without external dependencies.
Real-world examples:
- E-commerce support agent that behaves differently for VIP vs. regular customers
- Educational tutor that adjusts complexity based on student level
- Multi-tenant SaaS agent that uses different brand voices per customer
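The multi-tenant case can be sketched by keying the instructions function off a tenant identifier stored in `userContext`. Below is a minimal sketch; the `tenantId` key, the tenant names, and the `brandVoices` map are hypothetical placeholders (not part of the VoltAgent API), and the builder is written as a standalone function so the selection logic is easy to unit-test before wiring it in as an agent's `instructions` callback:

```typescript
// Hypothetical per-tenant brand voices; a real app would likely load
// these from a database or tenant configuration service.
const brandVoices: Record<string, string> = {
  acme: "Use Acme's playful, casual tone and keep answers upbeat.",
  globex: "Use Globex's formal, compliance-minded tone.",
};

// Builds the instructions string from the tenant id in userContext,
// falling back to a neutral voice for unknown tenants.
function tenantInstructions(userContext: Map<string, string>): string {
  const tenantId = userContext.get("tenantId") ?? "";
  const voice = brandVoices[tenantId] ?? "Use a neutral, professional tone.";
  return `You are a customer support agent. ${voice}`;
}
```

Because the function only reads from the `Map`, the same builder works unchanged whether the context is populated per-request by your web framework or set once per tenant session.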
### When to use
✅ **Perfect for:**
- User-specific experiences: Different prompt behavior per user tier, role, or preferences
- Context-dependent logic: Time-sensitive responses, location-based customization
- Multi-tenant applications: Different behavior per customer/organization
- A/B testing setup: Conditional prompt logic for experimentation
- Privacy-conscious applications: No external prompt management needed
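The A/B-testing setup mentioned above can use the same pattern: the conditional prompt logic lives in the instructions function, and your app decides which variant each user sees. A minimal sketch, assuming a hypothetical `promptVariant` key in `userContext` and a simple hash-based 50/50 split (neither is provided by VoltAgent itself):

```typescript
// Hypothetical experiment variants; your experiment framework would
// normally own these names and the assignment logic.
type Variant = "control" | "concise";

// The conditional prompt logic, written as a standalone function so it
// can be tested directly; pass it as the agent's `instructions` callback.
function instructionsFor(userContext: Map<string, string>): string {
  const variant = (userContext.get("promptVariant") ?? "control") as Variant;
  return variant === "concise"
    ? "Answer in at most two sentences. Be direct."
    : "Answer thoroughly, with step-by-step explanations.";
}

// Deterministic bucket assignment: hash the user id so the same user
// always lands in the same variant across sessions.
function bucketFor(userId: string): Variant {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return hash % 2 === 0 ? "control" : "concise";
}
```

Your request handler would call `bucketFor(userId)`, put the result into the `userContext` map under `promptVariant`, and then log the variant alongside each response so you can compare outcomes per bucket.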