# More Providers

Multi supports many additional providers beyond the major ones. All providers are configured the same way: create a profile in Settings, select the provider, add credentials, and choose a model.

## Amazon Bedrock

Access AWS-hosted models including Claude, Llama, and Mistral through your AWS account.

| Option | Description |
| --- | --- |
| Access Key ID | Your AWS access key |
| Secret Access Key | Your AWS secret key |
| Region | AWS region (e.g., `us-east-1`) |
| Model | Select from available Bedrock models |
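Under the hood, Bedrock chat requests follow the Converse API shape. As an illustration only (the model ID below is an example; check your AWS account for the models actually enabled):

```python
def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build a request body in the shape of Bedrock's Converse API.

    Illustrative sketch only -- in practice an AWS SDK client sends this
    using the Access Key ID, Secret Access Key, and Region configured above.
    """
    return {
        "modelId": model_id,
        # Converse messages carry a list of content blocks, not a bare string.
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
    }

req = build_converse_request("anthropic.claude-3-5-sonnet-20240620-v1:0", "Hello")
```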

## Google Vertex AI

Access Google Cloud-hosted models through your GCP project.

| Option | Description |
| --- | --- |
| Project ID | Your GCP project ID |
| Region | GCP region |
| Model | Select from available Vertex models |

## Requesty

AI gateway with smart routing and cost optimization.

| Option | Description |
| --- | --- |
| API Key | Your Requesty API key |
| Model ID | The model identifier |

## LiteLLM

Unified API proxy for 100+ LLMs. Useful for teams that want a single endpoint.

| Option | Description |
| --- | --- |
| API Key | Your LiteLLM API key |
| Base URL | Your LiteLLM proxy URL |
| Model ID | The model identifier |

## Vercel AI Gateway

Vercel’s AI gateway service for edge-optimized inference.

| Option | Description |
| --- | --- |
| API Key | Your Vercel AI Gateway key |
| Model ID | The model identifier |

## LM Studio

Desktop application for running local models with a built-in OpenAI-compatible server.

  1. Download LM Studio
  2. Download a model and start the local server
  3. In Multi, select LM Studio and enter the model ID

| Option | Description |
| --- | --- |
| Base URL | LM Studio server URL (default: `http://localhost:1234`) |
| Model ID | The loaded model name |
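Because LM Studio speaks the OpenAI chat completions protocol, you can sanity-check the local server outside Multi. A minimal standard-library sketch; the model ID is a placeholder for whichever model you have loaded:

```python
import json
import urllib.request

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default server URL

def build_chat_request(base_url: str, model_id: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completions request."""
    body = {"model": model_id, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(BASE_URL, "llama-3.2-3b-instruct", "Say hello")
# Send with urllib.request.urlopen(req) once the local server is running.
```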

## Groq

Ultra-fast inference powered by custom LPU hardware.

| Option | Description |
| --- | --- |
| API Key | Your Groq API key |
| Model | Select from available models |


## Cerebras

High-performance AI inference platform.

| Option | Description |
| --- | --- |
| API Key | Your Cerebras API key |
| Model | Select from available models |

## SambaNova

Enterprise AI platform with fast inference.

| Option | Description |
| --- | --- |
| API Key | Your SambaNova API key |
| Model | Select from available models |

## Fireworks

Fast inference for open-source models.

| Option | Description |
| --- | --- |
| API Key | Your Fireworks API key |
| Model | Select from available models |


## DeepInfra

Serverless AI inference platform.

| Option | Description |
| --- | --- |
| API Key | Your DeepInfra API key |
| Model | Select from available models |

## Chutes

AI inference platform.

| Option | Description |
| --- | --- |
| API Key | Your Chutes API key |
| Model | Select from available models |

## Featherless

Lightweight AI inference service.

| Option | Description |
| --- | --- |
| API Key | Your Featherless API key |
| Model | Select from available models |

## Baseten

ML model deployment and inference platform.

| Option | Description |
| --- | --- |
| API Key | Your Baseten API key |
| Model | Select from available models |

## Hugging Face

Access models hosted on Hugging Face’s inference API.

| Option | Description |
| --- | --- |
| API Key | Your Hugging Face API token |
| Model | Select from available models |


## IO Intelligence

AI inference service.

| Option | Description |
| --- | --- |
| API Key | Your IO Intelligence API key |
| Model | Select from available models |

## OpenAI Compatible

Connect to any API endpoint that implements the OpenAI chat completions format.

| Option | Description |
| --- | --- |
| API Key | API key (if required) |
| Base URL | The API endpoint URL |
| Model ID | The model identifier |
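Any server that accepts a `POST {base_url}/chat/completions` request and answers in the standard response shape should work with this profile. A trimmed, illustrative example of that response shape (field values are made up):

```python
import json

# A minimal chat completions response as an OpenAI-compatible server
# would return it; real responses carry more fields (id, usage, etc.).
sample_response = json.loads("""
{
  "model": "my-model",
  "choices": [
    {
      "index": 0,
      "message": {"role": "assistant", "content": "Hi there!"},
      "finish_reason": "stop"
    }
  ]
}
""")

# The assistant's reply lives at choices[0].message.content.
reply = sample_response["choices"][0]["message"]["content"]
```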

## OpenAI (ChatGPT)

Access OpenAI models through your ChatGPT Plus or Pro subscription.

| Option | Description |
| --- | --- |
| Model | Select from available models |

## Moonshot

Moonshot AI models.

| Option | Description |
| --- | --- |
| API Key | Your Moonshot API key |
| Model | Select from available models |

## Doubao

ByteDance’s AI models.

| Option | Description |
| --- | --- |
| API Key | Your Doubao API key |
| Model | Select from available models |

## MiniMax

MiniMax AI models.

| Option | Description |
| --- | --- |
| API Key | Your MiniMax API key |
| Model | Select from available models |

## Qwen

Alibaba’s Qwen coding models.

| Option | Description |
| --- | --- |
| API Key | Your Qwen API key |
| Model | Select from available models |

## Unbound

Unbound AI platform.

| Option | Description |
| --- | --- |
| API Key | Your Unbound API key |
| Model | Select from available models |

## Z AI

Z AI models.

| Option | Description |
| --- | --- |
| API Key | Your Z AI API key |
| Model | Select from available models |

## Roo Code Cloud

Roo Code’s cloud inference service.

| Option | Description |
| --- | --- |
| API Key | Your Roo Code API key |
| Model | Select from available models |

## Setup

All providers follow the same setup pattern:

  1. Open the Multi panel → Settings (gear icon)
  2. Click Add Profile
  3. Select the provider from the dropdown
  4. Enter required credentials
  5. Choose a model
  6. Save

You can create multiple profiles for different providers and switch between them at any time.
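Conceptually, a set of profiles is just a named map of provider, credentials, and model, with one profile active at a time. As an illustration only (these field names are not Multi’s actual configuration schema, and the values are placeholders):

```python
# Hypothetical illustration of the profile pattern described above;
# field names and values are NOT Multi's real settings format.
profiles = {
    "groq-fast": {"provider": "groq", "api_key": "YOUR_API_KEY", "model": "llama-3.3-70b"},
    "local": {"provider": "lmstudio", "base_url": "http://localhost:1234", "model_id": "qwen2.5-7b"},
}

def switch_profile(name: str) -> dict:
    """Return the settings for the named profile."""
    return profiles[name]

active = switch_profile("local")
```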