# Providers Overview

Multi supports 35+ AI model providers out of the box. You can use any combination of providers by creating profiles in the settings.
## Connected Providers

These providers use your existing authenticated account - no API key needed:
| Provider | Description | Cost |
|---|---|---|
| Claude Code | Uses your existing Claude account credentials | Included with Claude subscription |
| Gemini CLI | Uses your existing Gemini CLI credentials | Free |
| Copilot | Uses your existing GitHub Copilot credentials | Included with Copilot subscription |
## First-Party API Providers

Direct API access to major model providers:
| Provider | Key Models | Docs |
|---|---|---|
| Anthropic | Claude Sonnet 4, Claude Opus 4 | Setup → |
| OpenAI | GPT-4.1, o3, o4-mini | Setup → |
| Google Gemini | Gemini 2.5 Pro, Gemini 2.5 Flash | Setup → |
| xAI | Grok | Setup → |
| DeepSeek | DeepSeek V3, DeepSeek R1 | Setup → |
| Mistral | Mistral Large, Codestral | Setup → |
## Router & Gateway Providers

Access many models through a single API key:
| Provider | Description | Docs |
|---|---|---|
| OpenRouter | 100+ models from all major providers | Setup → |
| Requesty | AI gateway with smart routing | Setup → |
| LiteLLM | Unified API for 100+ LLMs | Setup → |
| Vercel AI Gateway | Vercel’s AI gateway service | Setup → |
## Cloud Providers

Enterprise-grade model hosting:
| Provider | Description | Docs |
|---|---|---|
| Amazon Bedrock | AWS-hosted models | Setup → |
| GCP Vertex AI | Google Cloud-hosted models | Setup → |
## Local & Self-Hosted Providers

Run models on your own hardware:
| Provider | Description | Docs |
|---|---|---|
| Ollama | Run open-source models locally | Setup → |
| LM Studio | Desktop app for local models | Setup → |
## Inference Providers

Specialized inference platforms:
| Provider | Description |
|---|---|
| Groq | Ultra-fast inference |
| Cerebras | High-performance AI inference |
| SambaNova | Enterprise AI platform |
| Fireworks AI | Fast inference for open models |
| DeepInfra | Serverless AI inference |
| Chutes AI | AI inference platform |
| Featherless AI | Lightweight AI inference |
| Baseten | ML model deployment |
| Hugging Face | Open-source model hub |
| IO Intelligence | AI inference service |
## Other Providers

| Provider | Description |
|---|---|
| OpenAI Compatible | Any OpenAI-compatible API endpoint |
| OpenAI ChatGPT Plus/Pro | ChatGPT subscription-based access |
| Moonshot | Moonshot AI models |
| Doubao | ByteDance AI models |
| MiniMax | MiniMax AI models |
| Qwen Code | Alibaba’s Qwen coding models |
| Unbound | Unbound AI platform |
| Z AI | Z AI models |
| Roo Code Cloud | Roo Code’s cloud service |
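The "OpenAI Compatible" entry above covers any endpoint that speaks the OpenAI chat completions wire format. As a rough sketch of what that means, the snippet below builds the request an OpenAI-compatible server expects; the base URL and model name are placeholders, not values Multi ships with:

```python
import json

# Any OpenAI-compatible provider exposes a /v1/chat/completions endpoint
# that accepts a JSON body shaped like this. The base URL and model name
# are illustrative placeholders -- substitute your provider's values.
BASE_URL = "http://localhost:11434/v1"  # e.g. a local Ollama server
ENDPOINT = f"{BASE_URL}/chat/completions"

payload = {
    "model": "llama3",  # whichever model the endpoint serves
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
}

# The body is plain JSON; if the endpoint requires an API key, it goes
# in an "Authorization: Bearer <key>" HTTP header.
body = json.dumps(payload)
print(ENDPOINT)
```

Because so many servers and gateways implement this same shape, one "OpenAI Compatible" profile type can cover providers that are not listed individually.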
## Setting Up a Provider

All providers are configured through profiles in Multi’s settings:
1. Open the Multi panel → click Settings (gear icon)
2. Click Add Profile
3. Select your provider from the dropdown
4. Enter any required credentials (API key, base URL, etc.)
5. Choose a model
6. Save the profile
You can create multiple profiles for different providers and switch between them at any time.
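Multi's on-disk profile format is internal, but conceptually each profile from the steps above bundles a handful of fields like these (all names and values below are illustrative, not Multi's actual schema):

```json
{
  "profileName": "my-openrouter",
  "provider": "openrouter",
  "apiKey": "<your API key>",
  "baseUrl": "https://openrouter.ai/api/v1",
  "model": "<model id from the dropdown>"
}
```

Connected providers such as Claude Code or Gemini CLI would omit the API key, since they reuse your existing account credentials.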