
Ollama

Ollama lets you run open-source models locally on your own hardware. No API key needed, no data leaves your machine.

  1. Install Ollama - Download from ollama.com
  2. Pull a model - Run `ollama pull` to download a model:

```shell
ollama pull llama3.1
```

  3. Start Ollama - Make sure the Ollama server is running (it starts automatically on install)
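The steps above can be sanity-checked from a terminal. A minimal sketch, assuming Ollama's default address (Ollama honors the `OLLAMA_HOST` environment variable if set); the commented commands require a running server:

```shell
# Resolve the server address: OLLAMA_HOST if set, else Ollama's default port.
BASE_URL="${OLLAMA_HOST:-http://localhost:11434}"
echo "Ollama server expected at: $BASE_URL"
# With the server running, these would confirm it is up and list pulled models:
#   curl -s "$BASE_URL/api/version"   # server version as JSON
#   ollama list                       # models available locally
```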
  1. Open the Multi panel → Settings (gear icon)
  2. Click Add Profile
  3. Select Ollama as the provider
  4. Enter the model name (e.g., llama3.1)
  5. Save

The default base URL is http://localhost:11434. Change it if your Ollama server runs on a different address.
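For reference, the profile's base URL and model ID map directly onto requests to Ollama's HTTP API. A hedged sketch of a minimal non-streaming generation request (the prompt text is illustrative; sending it requires a running server):

```shell
# Default address; override via OLLAMA_HOST if your server runs elsewhere.
BASE_URL="${OLLAMA_HOST:-http://localhost:11434}"
MODEL="llama3.1"
# Build a minimal non-streaming request body for the /api/generate endpoint.
PAYLOAD=$(printf '{"model": "%s", "prompt": "Why is the sky blue?", "stream": false}' "$MODEL")
echo "$PAYLOAD"
# With the server running, send it like so:
#   curl -s "$BASE_URL/api/generate" -d "$PAYLOAD"
```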

| Option | Description |
| --- | --- |
| Base URL | Ollama server URL (default: `http://localhost:11434`) |
| Model ID | The model name (e.g., `llama3.1`) |
  • GPU recommended - Models run much faster with a GPU
  • RAM requirements - 7B models need roughly 8GB of RAM; 70B models need roughly 40GB
  • No internet needed - Once a model is downloaded, everything runs locally
  • Privacy - No data leaves your machine

Free - Ollama is open-source and runs on your own hardware.