What's My AI
Choose the right local runtime after you get a result.


What to use after WhatsMy.AI says your machine is ready.

The result page recommends a runtime because setup style matters: some people want the fastest path to a first run, others want a GUI, and some care most about raw local tuning.

Ollama

Use Ollama when you want the quickest path from benchmark result to a real local run. It is the best default starting point for most people.
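A minimal sketch of that quickest path, assuming Ollama is already installed; the model name `llama3.2` is purely an illustrative pick, not a recommendation from the result page:

```shell
# Sketch only: assumes Ollama is installed, and llama3.2 is just an example model.
if command -v ollama >/dev/null 2>&1; then
  ollama pull llama3.2                              # download the model weights once
  ollama run llama3.2 "Say hello in five words."    # one-off prompt from the terminal
else
  echo "ollama not installed; install it from https://ollama.com first"
fi
```

Running `ollama run llama3.2` with no prompt drops you into an interactive chat session instead of a one-off answer.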

LM Studio

Use LM Studio when you want a graphical model browser and a lower-friction way to test prompts without living in the terminal.

llama.cpp

Use llama.cpp when low-level tuning, performance tradeoffs, and a flexible local stack matter more to you than convenience.
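As a sketch of the kind of knobs that tuning involves, here is a hypothetical `llama-cli` invocation (the binary comes from building the llama.cpp repo; the model path and every flag value below are placeholders you would adjust for your hardware):

```shell
# Sketch only: llama-cli must be built from the llama.cpp repo first,
# and ./models/model.gguf is a placeholder path.
if command -v llama-cli >/dev/null 2>&1; then
  # -m model path, -c context window size, -t CPU threads,
  # -ngl layers offloaded to the GPU, -n max tokens to generate
  llama-cli -m ./models/model.gguf -c 4096 -t 8 -ngl 32 -n 256 \
    -p "Explain quantization in one paragraph."
else
  echo "llama-cli not found; build llama.cpp first"
fi
```

Flags like `-t` and `-ngl` are exactly the kind of control the other runtimes hide: you decide how much of the model sits on the GPU versus the CPU.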