LM Studio Evals
Run open source models locally with LM Studio and compare them against cloud models in Evvl. Test any Hugging Face model against GPT, Claude, and Gemini with your own prompts.
Why Evaluate LM Studio Models?
LM Studio makes it easy to download and run models from Hugging Face with GPU acceleration and a visual interface. But with thousands of models available, picking the right one for your task takes testing.
Evvl connects to LM Studio's local server and lets you compare its models against cloud APIs in the same session. That answers two questions at once: which local model is best for your task, and whether it's good enough to replace a paid cloud API.
Common LM Studio Evaluations
These comparisons help you choose the right local model and decide when local is good enough.
GGUF Quantization Levels
LM Studio runs GGUF models at different quantization levels (Q4, Q5, Q8). Lower bit widths use less RAM but reduce output quality. Test the same model at several levels to find the best tradeoff for your hardware.
Good for: Fitting models to your available RAM/VRAM
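As a rough sketch of what this comparison looks like in code, the snippet below sends one prompt to two quantized variants of the same model through LM Studio's OpenAI-compatible endpoint. It assumes the local server is running on its default port (1234) and that both variants are loaded; the model ids are hypothetical placeholders, so substitute the ids LM Studio shows you.

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; the default address is
# http://localhost:1234 and the API key can be any placeholder string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

PROMPT = "Summarize the plot of Hamlet in three sentences."

# Hypothetical model ids: substitute the quantized variants you actually loaded.
for model in ["llama-3.1-8b-instruct-q4_k_m", "llama-3.1-8b-instruct-q8_0"]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"--- {model} ---")
    print(resp.choices[0].message.content)
```

If the Q4 output holds up against Q8 on your own prompts, you can keep the RAM savings with a clear conscience.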
Local vs Cloud for Your Task
Run the same prompt through a local LM Studio model and a cloud model like GPT-4o-mini. If the local model produces equivalent results, you save money and keep your data private.
Good for: Cost/privacy optimization
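Here's a minimal sketch of that local-vs-cloud check using the same client library for both sides. It assumes LM Studio's default port (1234), an OPENAI_API_KEY in your environment for the cloud call, and a hypothetical local model id.

```python
from openai import OpenAI

PROMPT = "Extract the total amount from: 'Invoice #204, total due $1,284.50.'"

# Local client: points at LM Studio's server (default port 1234 assumed).
local = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
# Cloud client: the real OpenAI API (reads OPENAI_API_KEY from the environment).
cloud = OpenAI()

for label, client, model in [
    ("local", local, "qwen2.5-7b-instruct"),  # hypothetical local model id
    ("cloud", cloud, "gpt-4o-mini"),
]:
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"[{label} / {model}] {resp.choices[0].message.content}")
```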
Comparing Model Families
Llama, Mistral, Qwen, Phi, Gemma: each family has different strengths. Load several in LM Studio and compare them in Evvl to find the best fit for your use case.
Good for: Choosing between open source model families
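A sketch of the same idea in code, assuming LM Studio's default port: the server also exposes the OpenAI-compatible /v1/models endpoint, so loading one model per family turns a simple loop over the loaded models into a family comparison.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

QUESTION = "Explain recursion to a ten-year-old in two sentences."

# /v1/models lists every model the server currently exposes.
for model in client.models.list():
    resp = client.chat.completions.create(
        model=model.id,
        messages=[{"role": "user", "content": QUESTION}],
    )
    print(f"--- {model.id} ---")
    print(resp.choices[0].message.content)
```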
Evvl vs LM Studio's Built-in Chat
LM Studio includes a chat interface for testing models. Here's how it compares to using Evvl.
| Feature | LM Studio Chat | Evvl |
|---|---|---|
| Supported models | Local models only | Local + OpenAI, Anthropic, Google, OpenRouter |
| Side-by-side comparison | No (one model at a time) | Yes (multiple models simultaneously) |
| Cloud model comparison | No | Yes |
| Model management | Yes (download, load, configure) | No (connects to LM Studio's server) |
| Best for | Chatting with and configuring a single local model | Comparing local models against each other and cloud APIs |
Use LM Studio to manage and run local models. Use Evvl to compare them against each other and against cloud alternatives.
How to Evaluate LM Studio Models with Evvl
1. Install LM Studio and download models
   Get it from lmstudio.ai. Browse and download models from Hugging Face with one click.
2. Start LM Studio's local server
   Enable the local server in LM Studio's settings. This exposes an OpenAI-compatible API that Evvl connects to; a quick smoke test after these steps confirms it's running.
3. Open Evvl Desktop and select models
   Pick your LM Studio models alongside cloud models from OpenAI, Anthropic, or Google for a direct comparison. (LM Studio requires the desktop app due to browser CORS restrictions.)
4. Compare results side by side
   See local and cloud model responses at once. Decide if the local model is good enough or if you need a cloud API.
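Before opening Evvl, you can verify step 2 with a quick smoke test against the server. This sketch assumes the default address (http://localhost:1234) and the `requests` package; it just lists the models the server exposes.

```python
import requests

# GET /v1/models is part of the OpenAI-compatible surface LM Studio serves.
resp = requests.get("http://localhost:1234/v1/models", timeout=5)
resp.raise_for_status()

# Print the ids of the models the server is currently exposing.
for model in resp.json()["data"]:
    print(model["id"])
```

If this prints your loaded models, Evvl Desktop will be able to reach the server too.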
Frequently Asked Questions
Can I use LM Studio with Evvl's web app?
No. LM Studio's local server can't be reached from a browser due to CORS restrictions. You'll need the Evvl desktop app to connect to LM Studio.
What's the difference between LM Studio and Ollama?
Both run local models. LM Studio has a GUI for browsing and downloading Hugging Face models, with built-in chat and configuration. Ollama is command-line focused and lighter weight. Both work with Evvl's desktop app. Try whichever fits your workflow.
Can I compare LM Studio models with cloud models?
Yes. In the Evvl desktop app, you can compare LM Studio models against GPT, Claude, Gemini, and any other supported provider in the same evaluation.
Compare local vs cloud models
See how Hugging Face models stack up against GPT-4.1 and Claude on your tasks.
Download Desktop App