# More Providers

Multi supports many additional providers beyond the major ones. All providers are configured the same way: create a profile in settings, select the provider, add credentials, and choose a model.

## Cloud & Enterprise

### Amazon Bedrock

Access AWS-hosted models, including Claude, Llama, and Mistral, through your AWS account.

| Option | Description |
|---|---|
| Access Key ID | Your AWS access key |
| Secret Access Key | Your AWS secret key |
| Region | AWS region (e.g., us-east-1) |
| Model | Select from available Bedrock models |
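
Bedrock's runtime API is region-scoped, which is why the Region field matters: requests are signed against a per-region endpoint, following AWS's documented `bedrock-runtime.<region>.amazonaws.com` pattern. A minimal sketch of that relationship (the helper name is illustrative, not part of Multi):

```python
def bedrock_runtime_endpoint(region: str) -> str:
    # The profile's Region field selects the per-region Bedrock runtime
    # endpoint; the access key pair is then used to sign requests
    # against it (SigV4).
    return f"https://bedrock-runtime.{region}.amazonaws.com"

print(bedrock_runtime_endpoint("us-east-1"))
# https://bedrock-runtime.us-east-1.amazonaws.com
```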

### GCP Vertex AI

Access Google Cloud-hosted models through your GCP project.

| Option | Description |
|---|---|
| Project ID | Your GCP project ID |
| Region | GCP region (e.g., us-central1) |
| Model | Select from available Vertex models |

## Routers & Gateways

### Requesty

AI gateway with smart routing and cost optimization.

| Option | Description |
|---|---|
| API Key | Your Requesty API key |
| Model ID | The model identifier |

### LiteLLM

Unified API proxy for 100+ LLMs. Useful for teams that want a single endpoint.

| Option | Description |
|---|---|
| API Key | Your LiteLLM API key |
| Base URL | Your LiteLLM proxy URL |
| Model ID | The model identifier |
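
The appeal of a single proxy endpoint is that only the Base URL and key vary per deployment; the request keeps the OpenAI chat-completions shape regardless of which upstream model LiteLLM routes to. A sketch of building (but not sending) such a request; the URL, key, and model name below are placeholders:

```python
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str, prompt: str) -> urllib.request.Request:
    # Build an OpenAI-style chat-completions request against a proxy.
    # With a LiteLLM proxy, only base_url changes per deployment; the
    # request body is identical for every upstream provider.
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("http://localhost:4000/v1", "sk-placeholder", "some-model", "hello")
print(req.full_url)  # http://localhost:4000/v1/chat/completions
```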

### Vercel AI Gateway

Vercel’s AI gateway service for edge-optimized inference.

| Option | Description |
|---|---|
| API Key | Your Vercel AI Gateway key |
| Model ID | The model identifier |

## Local & Self-Hosted

### LM Studio

Desktop application for running local models with a built-in OpenAI-compatible server.

1. Download LM Studio
2. Download a model and start the local server
3. In Multi, select LM Studio and enter the model ID

| Option | Description |
|---|---|
| Base URL | LM Studio server URL (default: http://localhost:1234) |
| Model ID | The loaded model name |
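
The steps above can be sanity-checked from outside Multi: LM Studio's local server exposes OpenAI-style routes, and its `/v1/models` route reports the loaded model's `id`, which is the value to enter as Model ID. A small sketch (the actual request is left commented out since it needs a running server):

```python
BASE_URL = "http://localhost:1234/v1"  # LM Studio's default server URL

def models_url(base_url: str = BASE_URL) -> str:
    # /models lists whatever model the server has loaded; the "id"
    # field of each entry is the Model ID Multi asks for.
    return base_url.rstrip("/") + "/models"

# With the server running, e.g.:
#   urllib.request.urlopen(models_url()).read()
print(models_url())  # http://localhost:1234/v1/models
```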

## Inference Providers

### Groq

Ultra-fast inference powered by custom LPU hardware.

| Option | Description |
|---|---|
| API Key | Your Groq API key |
| Model | Select from available models |

### Cerebras

High-performance AI inference platform.

| Option | Description |
|---|---|
| API Key | Your Cerebras API key |
| Model | Select from available models |

### SambaNova

Enterprise AI platform with fast inference.

| Option | Description |
|---|---|
| API Key | Your SambaNova API key |
| Model | Select from available models |

### Fireworks AI

Fast inference for open-source models.

| Option | Description |
|---|---|
| API Key | Your Fireworks API key |
| Model | Select from available models |

### DeepInfra

Serverless AI inference platform.

| Option | Description |
|---|---|
| API Key | Your DeepInfra API key |
| Model | Select from available models |

### Chutes AI

AI inference platform.

| Option | Description |
|---|---|
| API Key | Your Chutes API key |
| Model | Select from available models |

### Featherless AI

Lightweight AI inference service.

| Option | Description |
|---|---|
| API Key | Your Featherless API key |
| Model | Select from available models |

### Baseten

ML model deployment and inference platform.

| Option | Description |
|---|---|
| API Key | Your Baseten API key |
| Model | Select from available models |

### Hugging Face

Access models hosted on Hugging Face’s inference API.

| Option | Description |
|---|---|
| API Key | Your Hugging Face API token |
| Model | Select from available models |

### IO Intelligence

AI inference service.

| Option | Description |
|---|---|
| API Key | Your IO Intelligence API key |
| Model | Select from available models |

## Other Providers

### OpenAI Compatible

Connect to any API endpoint that implements the OpenAI chat completions format.

| Option | Description |
|---|---|
| API Key | API key (if required) |
| Base URL | The API endpoint URL |
| Model ID | The model identifier |

### OpenAI ChatGPT Plus/Pro

Access OpenAI models through your ChatGPT Plus or Pro subscription.

| Option | Description |
|---|---|
| Model | Select from available models |

### Moonshot

Moonshot AI models.

| Option | Description |
|---|---|
| API Key | Your Moonshot API key |
| Model | Select from available models |

### Doubao

ByteDance’s AI models.

| Option | Description |
|---|---|
| API Key | Your Doubao API key |
| Model | Select from available models |

### MiniMax

MiniMax AI models.

| Option | Description |
|---|---|
| API Key | Your MiniMax API key |
| Model | Select from available models |

### Qwen Code

Alibaba’s Qwen coding models.

| Option | Description |
|---|---|
| API Key | Your Qwen API key |
| Model | Select from available models |

### Unbound

Unbound AI platform.

| Option | Description |
|---|---|
| API Key | Your Unbound API key |
| Model | Select from available models |

### Z AI

Z AI models.

| Option | Description |
|---|---|
| API Key | Your Z AI API key |
| Model | Select from available models |

### Roo Code Cloud

Roo Code’s cloud inference service.

| Option | Description |
|---|---|
| API Key | Your Roo Code API key |
| Model | Select from available models |

## General Setup

All providers follow the same setup pattern:

1. Open the Multi panel → Settings (gear icon)
2. Click Add Profile
3. Select the provider from the dropdown
4. Enter required credentials
5. Choose a model
6. Save

You can create multiple profiles for different providers and switch between them at any time.
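
The profile pattern above can be pictured roughly as follows. This is purely an illustrative sketch: Multi's real profile format is internal and not shown here, and all names and model IDs are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Profile:
    # Illustrative only: field names are assumptions, not Multi's schema.
    provider: str
    model: str

profiles = {
    "cloud": Profile(provider="bedrock", model="some-bedrock-model-id"),
    "local": Profile(provider="lmstudio", model="some-local-model-id"),
}

def switch(name: str) -> Profile:
    # Switching profiles just changes which provider/model is active;
    # credentials stay stored with each profile.
    return profiles[name]

print(switch("local").provider)  # lmstudio
```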