Models & providers
Veyra routes each task to the optimal model. You can pin, blend, or override at the workspace, project, or run level.
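The three override levels resolve like layered configuration: run-level settings win over project-level, which win over workspace defaults. A minimal sketch of that precedence, with illustrative keys and model names (the real config surface may differ):

```python
# Layered override resolution: later layers win. Keys and model
# names here are illustrative, not Veyra's actual config schema.

def resolve(workspace, project=None, run=None):
    resolved = dict(workspace)
    for layer in (project or {}, run or {}):
        # Only non-None values override the layer below.
        resolved.update({k: v for k, v in layer.items() if v is not None})
    return resolved

workspace = {"plan": "gpt-5", "code_edit": "gemini-3-pro"}
project = {"code_edit": "gemini-3-flash"}
run = {"plan": "gpt-5-mini"}

print(resolve(workspace, project, run))
# {'plan': 'gpt-5-mini', 'code_edit': 'gemini-3-flash'}
```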
Supported providers
| Provider | Models | Best for |
|---|---|---|
| OpenAI | GPT-5, GPT-5 mini, GPT-5 nano, GPT-5.2 | Reasoning, deep refactor, complex code |
| Google | Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 Pro/Flash | Long context, multimodal, fast iteration |
| Anthropic | Claude (BYOK) | Long-form review, careful editing |
| Self-hosted | Codex 5.3, custom OSS endpoints | Air-gapped or compliance-bound workloads |
Routing logic
Routing is driven by a per-task scorer. The default policy ranks providers by capability fit, then latency, then cost. You can override any axis.
| Task type | Default provider | Why |
|---|---|---|
| Plan / decompose | GPT-5 | Strong long-horizon reasoning |
| Code edit | Gemini 3 Pro | Best diff fidelity at low latency |
| Type / lint fix | Gemini 3 Flash | Fast, cheap, deterministic |
| Review / critique | GPT-5 | High-precision diff analysis |
| Self-heal | Gemini 3 Flash | Latency dominates correctness here |
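The default ranking — capability fit first, then latency, then cost — can be sketched as a single lexicographic sort. The candidate entries and scores below are illustrative assumptions, not Veyra's actual values:

```python
# Sketch of the default routing policy: rank candidates by capability
# fit (higher is better), then latency, then cost (lower is better).
# All numbers are made up for illustration.

from typing import NamedTuple

class Candidate(NamedTuple):
    model: str
    capability_fit: float  # 0..1, task-specific score
    latency_ms: float
    cost_per_1k: float

def rank(candidates):
    # Negate capability_fit so one ascending sort covers all three axes.
    return sorted(candidates,
                  key=lambda c: (-c.capability_fit, c.latency_ms, c.cost_per_1k))

candidates = [
    Candidate("gpt-5", 0.95, 1200, 0.010),
    Candidate("gemini-3-pro", 0.95, 600, 0.004),
    Candidate("gemini-3-flash", 0.80, 250, 0.001),
]

best = rank(candidates)[0]
# gemini-3-pro: ties gpt-5 on fit, wins on latency
```

Overriding an axis amounts to changing the sort key, e.g. dropping cost from the tuple or weighting latency more heavily for self-heal tasks.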
BYOK
Bring your own API keys to bill usage directly through your provider accounts. Configure keys under Settings → Workspace → Providers.

