NovitaAI

Use novita-ai/<model> with VoltAgent's model router.

Quick start

import { Agent } from "@voltagent/core";

// The model router resolves "novita-ai/<model>" ids; the API key is read from NOVITA_API_KEY.
const agent = new Agent({
  name: "novita-ai-agent",
  instructions: "You are a helpful assistant",
  model: "novita-ai/baichuan/baichuan-m2-32b",
});
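
Once the agent is created, it can be used like any other VoltAgent agent. A minimal sketch, assuming the standard generateText helper on Agent:

const result = await agent.generateText("Say hello in one sentence.");
console.log(result.text);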

Environment variables

  • NOVITA_API_KEY
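
The key is read from the environment at runtime. If you want to fail fast when it is missing, a small hypothetical startup check (plain Node-style process.env access) looks like this:

if (!process.env.NOVITA_API_KEY) {
  throw new Error("NOVITA_API_KEY is not set");
}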

Provider package

@ai-sdk/openai-compatible

This provider uses the OpenAI-compatible adapter.

Default base URL

https://api.novita.ai/openai

You can override the base URL by setting NOVITA_AI_BASE_URL.
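
If you prefer to configure the adapter yourself (for example to hard-code a different base URL), a minimal sketch using @ai-sdk/openai-compatible is shown below. The apiKey option and the callable provider shorthand are assumptions about the adapter's API, and it assumes Agent also accepts an ai-sdk model instance directly in place of the router string:

import { Agent } from "@voltagent/core";
import { createOpenAICompatible } from "@ai-sdk/openai-compatible";

// Build a Novita AI provider; fall back to the default base URL when no override is set.
const novita = createOpenAICompatible({
  name: "novita-ai",
  baseURL: process.env.NOVITA_AI_BASE_URL ?? "https://api.novita.ai/openai",
  apiKey: process.env.NOVITA_API_KEY,
});

const agent = new Agent({
  name: "novita-ai-agent",
  instructions: "You are a helpful assistant",
  model: novita("deepseek/deepseek-v3.1"),
});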

Provider docs

Models

The following 77 models are available:
  • baichuan/baichuan-m2-32b
  • baidu/ernie-4.5-21B-a3b
  • baidu/ernie-4.5-21B-a3b-thinking
  • baidu/ernie-4.5-300b-a47b-paddle
  • baidu/ernie-4.5-vl-28b-a3b
  • baidu/ernie-4.5-vl-28b-a3b-thinking
  • baidu/ernie-4.5-vl-424b-a47b
  • deepseek/deepseek-ocr
  • deepseek/deepseek-prover-v2-671b
  • deepseek/deepseek-r1-0528
  • deepseek/deepseek-r1-0528-qwen3-8b
  • deepseek/deepseek-r1-distill-llama-70b
  • deepseek/deepseek-r1-turbo
  • deepseek/deepseek-v3-0324
  • deepseek/deepseek-v3-turbo
  • deepseek/deepseek-v3.1
  • deepseek/deepseek-v3.1-terminus
  • deepseek/deepseek-v3.2
  • deepseek/deepseek-v3.2-exp
  • google/gemma-3-27b-it
  • gryphe/mythomax-l2-13b
  • kwaipilot/kat-coder
  • kwaipilot/kat-coder-pro
  • meta-llama/llama-3-70b-instruct
  • meta-llama/llama-3-8b-instruct
  • meta-llama/llama-3.1-8b-instruct
  • meta-llama/llama-3.3-70b-instruct
  • meta-llama/llama-4-maverick-17b-128e-instruct-fp8
  • meta-llama/llama-4-scout-17b-16e-instruct
  • microsoft/wizardlm-2-8x22b
  • minimax/minimax-m2
  • minimax/minimax-m2.1
  • minimaxai/minimax-m1-80k
  • mistralai/mistral-nemo
  • moonshotai/kimi-k2-0905
  • moonshotai/kimi-k2-instruct
  • moonshotai/kimi-k2-thinking
  • nousresearch/hermes-2-pro-llama-3-8b
  • openai/gpt-oss-120b
  • openai/gpt-oss-20b
  • paddlepaddle/paddleocr-vl
  • qwen/qwen-2.5-72b-instruct
  • qwen/qwen-mt-plus
  • qwen/qwen2.5-7b-instruct
  • qwen/qwen2.5-vl-72b-instruct
  • qwen/qwen3-235b-a22b-fp8
  • qwen/qwen3-235b-a22b-instruct-2507
  • qwen/qwen3-235b-a22b-thinking-2507
  • qwen/qwen3-30b-a3b-fp8
  • qwen/qwen3-32b-fp8
  • qwen/qwen3-4b-fp8
  • qwen/qwen3-8b-fp8
  • qwen/qwen3-coder-30b-a3b-instruct
  • qwen/qwen3-coder-480b-a35b-instruct
  • qwen/qwen3-max
  • qwen/qwen3-next-80b-a3b-instruct
  • qwen/qwen3-next-80b-a3b-thinking
  • qwen/qwen3-omni-30b-a3b-instruct
  • qwen/qwen3-omni-30b-a3b-thinking
  • qwen/qwen3-vl-235b-a22b-instruct
  • qwen/qwen3-vl-235b-a22b-thinking
  • qwen/qwen3-vl-30b-a3b-instruct
  • qwen/qwen3-vl-30b-a3b-thinking
  • qwen/qwen3-vl-8b-instruct
  • sao10k/L3-8B-Stheno-v3.2
  • sao10k/l3-70b-euryale-v2.1
  • sao10k/l3-8b-lunaris
  • sao10k/l31-70b-euryale-v2.2
  • skywork/r1v4-lite
  • xiaomimimo/mimo-v2-flash
  • zai-org/autoglm-phone-9b-multilingual
  • zai-org/glm-4.5
  • zai-org/glm-4.5-air
  • zai-org/glm-4.5v
  • zai-org/glm-4.6
  • zai-org/glm-4.6v
  • zai-org/glm-4.7
