Ollama
Run and evaluate open-source models locally. Complete privacy, zero API costs.
Popular Models
Llama 3.1
Meta's flagship open model. Available in 8B, 70B, and 405B sizes.
Open Source · 128K Context
Mistral / Mixtral
Efficient models from Mistral AI. Mixtral uses a mixture-of-experts (MoE) architecture.
Efficient · Fast
Qwen 2.5
Alibaba's multilingual model. Strong coding and reasoning capabilities.
Multilingual · Coding
DeepSeek Coder
Specialized for code generation. Competitive with commercial models.
Code-Focused · Specialized
Best For
- Complete privacy — All processing happens on your machine, nothing leaves your network
- Zero API costs — Run unlimited prompts after initial setup
- Offline capable — Works without internet after models are downloaded
- Experimentation — Try dozens of models to find what works for your use case
Using Ollama with Evvl
First, install Ollama from ollama.com and pull your desired models.
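A minimal setup might look like the following, assuming Ollama is installed and its local server is running on the default port (11434); the model tag shown is just one example:

```shell
# Pull a model after installing Ollama from ollama.com
ollama pull llama3.1:8b

# Verify the local server is up and list the models it has available
curl http://localhost:11434/api/tags
```

Once `ollama pull` finishes, the model is stored locally and can be used offline.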
Desktop App: Evvl automatically detects a running Ollama instance. No API key is needed; just make sure Ollama is running.
Web App: Because of browser CORS restrictions, Ollama isn't available in the web app. Use the desktop app for local models.
Compare local vs cloud models
See how Llama 3.1 70B stacks up against GPT-4o and Claude on your specific tasks.
Download Desktop App