A polished, privacy‑respecting alternative to cloud‑bound AI apps. Jan runs entirely on your machine, with a genuinely friendly UI and support for a wide range of open models.
Highlights
🔒 Local‑first: Your data stays on your device; no API calls unless you configure them.
💸 Zero API bills: Run models locally or plug in your own providers (see the API sketch after this list).
🌐 Real‑time web search: Perplexity‑style retrieval, but open‑source.
🧩 Model flexibility: Runs Jan's own Jan‑V1‑4B, plus GGUF and MLX models from Hugging Face, and more.
🖥️ Native app: Clean, friendly interface; less “engineer tool”, more “app you actually use daily”.
⚡ Fast inference: Competitive token speeds across MLX and llama.cpp backends.
🔌 MCP support: Extensible via the Model Context Protocol (see the MCP sketch after this list).
🧪 DeepResearch‑style workflows: Users are already running multi‑step reasoning locally.
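To make the “plug in your own providers” point concrete: Jan can expose a local, OpenAI‑compatible server, so existing client code can point at it instead of a cloud endpoint. The sketch below assumes that server is enabled; the port, base URL, and model id are assumptions based on common defaults, so check your own Jan settings before running it.

```python
# Minimal sketch: chat with a locally loaded model through Jan's
# OpenAI-compatible server. Assumptions: the server is enabled in Jan,
# listens on localhost:1337 (check your settings), and a model with the
# id below is loaded. No cloud API key is involved.
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:1337/v1",  # assumed local endpoint
    api_key="unused",                      # placeholder; nothing leaves the machine
)

resp = client.chat.completions.create(
    model="jan-v1-4b",  # hypothetical id; use whichever model you loaded in Jan
    messages=[{"role": "user", "content": "Explain local-first AI in one sentence."}],
)
print(resp.choices[0].message.content)
```

Because the endpoint speaks the OpenAI wire format, the same snippet targets a cloud provider if you change `base_url` and `api_key`, which is exactly the provider flexibility described above.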
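On the MCP point: the Model Context Protocol is an open standard for wiring tools (file access, search, databases) into AI clients. Jan's own MCP configuration lives in its settings, so the snippet below is not Jan‑specific; it is a minimal sketch using the official `mcp` Python SDK to launch a reference filesystem server and list its tools, just to show what an MCP integration looks like. The directory path is a placeholder.

```python
# Minimal MCP client sketch using the official Python SDK (`pip install mcp`).
# It spawns the reference filesystem server over stdio and lists its tools.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Reference server published by the MCP project; the path is a placeholder.
server = StdioServerParameters(
    command="npx",
    args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()          # MCP handshake
            tools = await session.list_tools()  # discover exposed tools
            print([tool.name for tool in tools.tools])

asyncio.run(main())
```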
Why it matters
If you want a local AI client that feels like a real product rather than a toolkit, Jan is one of the most polished options right now.
