Models & providers

Veyra routes each task to the model best suited for it. You can pin a single model, blend several, or override the routing entirely at the workspace, project, or run level.
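The three scopes layer on top of each other: run-level settings take precedence over project-level, which take precedence over workspace-level. Veyra's actual configuration format is not shown in this doc; the sketch below is a hypothetical illustration of that precedence, with all names invented for the example.

```python
# Hypothetical illustration of override scopes (not Veyra's real config API).
# Later scopes take precedence: run > project > workspace.
workspace = {"code_edit": "gemini-3-pro"}  # workspace default
project = {"code_edit": "gpt-5"}           # project pin overrides workspace
run = {}                                   # no run-level override

def resolve(task_type, *scopes):
    # Walk scopes from most to least specific; first hit wins.
    for scope in reversed(scopes):
        if task_type in scope:
            return scope[task_type]
    return None

model = resolve("code_edit", workspace, project, run)  # "gpt-5"
```

Here the project pin wins because no run-level override is set; clearing the project entry would fall back to the workspace default.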

Supported providers

| Provider | Models | Best for |
| --- | --- | --- |
| OpenAI | GPT-5, GPT-5 mini, GPT-5 nano, GPT-5.2 | Reasoning, deep refactors, complex code |
| Google | Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 Pro/Flash | Long context, multimodal, fast iteration |
| Anthropic | Claude (BYOK) | Long-form review, careful editing |
| Self-hosted | Codex 5.3, custom OSS endpoints | Air-gapped or compliance-bound workloads |

Routing logic

Routing is driven by a per-task scorer. The default policy ranks providers by capability fit, then latency, then cost. You can override any axis.
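The ranking described above is a lexicographic sort: capability fit decides first, with latency and then cost breaking ties. Below is a minimal sketch of that policy; the class, field names, and numbers are all illustrative assumptions, not Veyra's actual scorer.

```python
# Hypothetical sketch of the default routing policy: rank candidates by
# capability fit (higher first), then latency (lower first), then cost
# (lower first). Names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    capability_fit: float   # task-specific score, 0.0 to 1.0
    latency_ms: float
    cost_per_1k_tokens: float

def rank(candidates):
    # Negate capability_fit so ascending sort puts the best fit first;
    # latency and cost break ties in that order.
    return sorted(
        candidates,
        key=lambda c: (-c.capability_fit, c.latency_ms, c.cost_per_1k_tokens),
    )

candidates = [
    Candidate("gpt-5", 0.95, 1200, 12.0),
    Candidate("gemini-3-pro", 0.95, 600, 8.0),
    Candidate("gemini-3-flash", 0.70, 150, 1.0),
]
best = rank(candidates)[0]  # ties on fit with gpt-5, wins on latency
```

Overriding an axis amounts to reordering or dropping a component of the sort key, e.g. putting cost ahead of latency for batch workloads.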

| Task type | Default provider | Why |
| --- | --- | --- |
| Plan / decompose | GPT-5 | Strong long-horizon reasoning |
| Code edit | Gemini 3 Pro | Best diff fidelity at low latency |
| Type / lint fix | Gemini 3 Flash | Fast, cheap, deterministic |
| Review / critique | GPT-5 | High-precision diff analysis |
| Self-heal | Gemini 3 Flash | Latency dominates correctness here |
BYOK

Bring your own API keys for direct billing with each provider. Configure them under Settings → Workspace → Providers.