# Ollama

Run open-source models locally with Ollama. No API keys needed.
## Prerequisites

- Install Ollama: [ollama.ai](https://ollama.ai)
- Pull a model: `ollama pull llama3.2`
## Installation

```bash
npm install ollama-ai-provider-v2
```
## Quick Setup

```typescript
import { Agent, VoltAgent, createTool } from "@voltagent/core";
import { honoServer } from "@voltagent/server-hono";
import { createOllama } from "ollama-ai-provider-v2";
import { z } from "zod";

// Point the provider at your local Ollama server (note the /api suffix)
const ollama = createOllama({
  baseURL: process.env.OLLAMA_HOST ?? "http://localhost:11434/api",
});

// A simple tool the agent can call; the response is stubbed for the demo
const weatherTool = createTool({
  name: "get_weather",
  description: "Get weather for a location",
  parameters: z.object({
    location: z.string(),
  }),
  execute: async ({ location }) => {
    return { location, temperature: 22, condition: "sunny" };
  },
});

// The agent runs entirely against the local model
const agent = new Agent({
  name: "Local Agent",
  instructions: "A helpful local assistant",
  model: ollama("llama3.2:latest"),
  tools: [weatherTool],
});

// Expose the agent through the built-in Hono server
new VoltAgent({
  agents: { agent },
  server: honoServer({ port: 3141 }),
});
```
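You can also call the agent directly from code instead of going through the server. A minimal sketch, continuing from the Quick Setup snippet above and assuming the agent's `generateText` method from `@voltagent/core` and the `llama3.2:latest` model pulled earlier:

```typescript
// Continues from the Quick Setup example: `agent` is the Agent defined above.
// The model may decide to call the get_weather tool before answering.
const result = await agent.generateText("What's the weather in Berlin?");
console.log(result.text);
```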
## Environment Variables

```bash
OLLAMA_HOST=http://localhost:11434/api
```
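If Ollama runs on a different machine, point `OLLAMA_HOST` at that host instead. The address below is only an illustrative example; keep the `/api` suffix, since the provider's base URL in this guide includes it.

```bash
# Example only: Ollama running on another machine on your network
OLLAMA_HOST=http://192.168.1.50:11434/api
```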
## Popular Models

| Model | Size | Use Case |
| --- | --- | --- |
| llama3.2:latest | 3B | General purpose, fast |
| llama3.2:7b | 7B | Better quality |
| codellama:latest | 7B | Code generation |
| mistral:latest | 7B | Fast, good quality |
| mixtral:latest | 47B | High quality, slower |
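To switch models, pass a different tag to the provider when defining an agent. A short sketch reusing the `ollama` instance from the Quick Setup; the agent name and instructions are illustrative, and the model must already be pulled locally:

```typescript
// Same provider, different model tag (pull it first with `ollama pull codellama`)
const codeAgent = new Agent({
  name: "Code Agent",
  instructions: "A local coding assistant",
  model: ollama("codellama:latest"),
});
```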
## Pull More Models

```bash
ollama pull mistral
ollama pull codellama
ollama pull mixtral
```
## Full Example

See the complete example: with-ollama on GitHub.