Self-Hosted AI Chat UI Comparison 2026: Open WebUI, LobeChat, LibreChat, and Alternatives
If you are looking for a self-hosted alternative to ChatGPT, you have more options in 2026 than ever before. The open-source community has built several mature chat interfaces, each with different strengths. But “self-hosted AI chat UI” is also becoming an increasingly broad category --- some tools are pure chat interfaces for LLM APIs, while others are evolving into agent management platforms.
This comparison covers the five most relevant options: Open WebUI, LobeChat, Chatbot UI, LibreChat, and Amurg. We will be honest about where each one excels and where it falls short.
What Are You Actually Looking For?
Before comparing tools, it is worth clarifying the two main use cases that bring developers to this search:
Use case 1: Chat interface for LLM APIs. You want a clean UI to talk to OpenAI, Anthropic, Ollama, or other LLM providers. You want conversation history, model switching, maybe some prompt templates. This is the classic “self-hosted ChatGPT” use case.
Use case 2: Agent control interface. You want to manage AI coding agents --- tools like Claude Code, GitHub Copilot CLI, or Gemini CLI that run as persistent processes, execute commands, and modify files. This is a different interaction model than chat.
Most of the tools in this comparison were built for use case 1. If that is what you need, you have excellent options. If you need use case 2, the landscape is thinner.
The Comparison
Open WebUI
Best for: Ollama users who want a polished local LLM interface
Open WebUI (formerly Ollama WebUI) is the most popular self-hosted chat interface, and for good reason. It has the most polished UI in the category, excellent Ollama integration, and a large community.
Strengths:
- Native Ollama integration with automatic model discovery
- Clean, responsive interface that feels close to ChatGPT
- Built-in RAG pipeline for document upload and retrieval
- Image generation integration (Stable Diffusion, DALL-E)
- User management and role-based access
- Active development with frequent releases
- Large community (90k+ GitHub stars)
- Supports OpenAI-compatible API endpoints
Weaknesses:
- Ollama-first design means other providers feel bolted on
- No native support for CLI-based agents (Claude Code, Copilot CLI)
- RAG pipeline is basic compared to purpose-built RAG tools
- Can be resource-heavy for a chat interface
- Plugin ecosystem is limited compared to LobeChat
Best suited for: Developers running local models through Ollama who want a comprehensive chat interface. If your primary workflow is “talk to local LLMs,” Open WebUI is the most mature choice.
LobeChat
Best for: Users who want extensive customization and a plugin ecosystem
LobeChat takes a more modular approach, with a plugin system that lets you extend functionality significantly.
Strengths:
- Rich plugin ecosystem (web browsing, code execution, image generation)
- Support for many LLM providers out of the box (OpenAI, Anthropic, Google, Azure, local)
- Text-to-speech and speech-to-text built in
- Highly customizable UI with themes and layouts
- Agent marketplace with pre-configured personas
- Good mobile responsiveness
- Active development and growing community
Weaknesses:
- Plugin quality varies --- some are poorly maintained
- Configuration complexity increases with plugins
- No native CLI agent management
- The agent marketplace “agents” are prompt templates, not autonomous agents
- Can feel overwhelming with all options enabled
Best suited for: Power users who want a Swiss Army knife chat interface and are willing to spend time configuring it. If you want to combine chat with web browsing, code execution, and image generation in one interface, LobeChat offers the most flexibility.
Chatbot UI
Best for: Developers who want a minimal, clean ChatGPT clone
Chatbot UI by McKay Wrigley was one of the first open-source ChatGPT alternatives and remains popular for its simplicity.
Strengths:
- Clean, minimal interface very close to ChatGPT
- Fast to deploy (Supabase + Vercel or Docker)
- Good conversation management (folders, search)
- Prompt template library
- Low resource usage
Weaknesses:
- Primarily OpenAI-focused; other providers require more setup
- Limited features compared to Open WebUI and LobeChat
- No plugin system
- Development pace has slowed
- No RAG capabilities
- No agent management
Best suited for: Developers who want something that looks and feels like ChatGPT but points to their own API keys. If minimalism is your priority and you primarily use OpenAI models, Chatbot UI delivers a clean experience without bloat.
LibreChat
Best for: Teams that need multi-provider support with conversation branching
LibreChat differentiates itself with strong multi-provider support and a conversation branching feature that lets you explore multiple response paths from any point in a conversation.
Strengths:
- Excellent multi-provider support (OpenAI, Anthropic, Google, Azure, Ollama, custom)
- Conversation branching and forking
- Plugin system compatible with ChatGPT plugins
- Code interpreter sandbox
- User management with token usage tracking
- Good balance of features and usability
- Active community and development
Weaknesses:
- More complex deployment than Chatbot UI
- UI can feel busy with branching enabled
- No native CLI agent support
- Token tracking is approximate, not exact
- Some features require additional infrastructure (Meilisearch for search)
Best suited for: Teams that use multiple LLM providers and want conversation branching. If you frequently compare outputs from different models or want to explore “what if I had said this instead” paths, LibreChat’s branching is unique in this category.
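To make the branching idea concrete, here is a small conceptual sketch of a conversation stored as a tree rather than a flat list. This is not LibreChat's actual data model --- the class and function names are invented for illustration --- but it shows why branching is structurally different from plain conversation history: forking from any message just means adding a second child.

```python
# Conceptual sketch only: a conversation as a tree of messages.
# Forking = adding a second child to the same node.
from dataclasses import dataclass, field


@dataclass
class Message:
    role: str                       # "user" or "assistant"
    text: str
    children: list = field(default_factory=list)

    def reply(self, role, text):
        """Append a child message and return it. Calling reply() twice
        on the same node creates two branches from that point."""
        child = Message(role, text)
        self.children.append(child)
        return child


def path_from_root(root, node):
    """Collect the message texts from the root down to `node` --
    this is the 'active branch' a branching UI would render."""
    def walk(cur, acc):
        acc = acc + [cur.text]
        if cur is node:
            return acc
        for child in cur.children:
            found = walk(child, acc)
            if found:
                return found
        return None
    return walk(root, [])


root = Message("user", "Explain WebSockets")
answer = root.reply("assistant", "WebSockets are a full-duplex protocol...")

# Fork the conversation from the same assistant reply:
branch_a = answer.reply("user", "Compare them to SSE")
branch_b = answer.reply("user", "Show me a Python example")

print(len(answer.children))              # two branches from one point
print(path_from_root(root, branch_b))    # only branch_b's path is shown
```

A flat history cannot represent `branch_a` and `branch_b` at once; a tree holds both, and the UI simply renders one root-to-leaf path at a time.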
Feature Comparison Table
| Feature | Open WebUI | LobeChat | Chatbot UI | LibreChat | Amurg |
|---|---|---|---|---|---|
| Primary focus | Ollama + chat | Extensible chat | Minimal chat | Multi-provider chat | Agent control plane |
| LLM provider support | Ollama-first, OpenAI-compat | Many native | OpenAI-first | Many native | Via agent adapters |
| CLI agent management | No | No | No | No | Yes (9 adapters) |
| Session persistence | Conversation history | Conversation history | Conversation history | Conversation history | Process-level sessions |
| Mobile interface | Responsive web | Responsive web | Responsive web | Responsive web | Mobile-first design |
| Voice input | Limited | Yes (TTS/STT) | No | No | Yes (built-in) |
| RAG / document upload | Yes | Via plugins | No | Via plugins | No |
| Plugin ecosystem | Limited | Extensive | None | ChatGPT-compatible | No |
| User management / RBAC | Yes | Basic | No | Yes | Yes |
| Audit logging | Basic | No | No | Token tracking | Full command audit |
| Self-hosted | Yes | Yes | Yes | Yes | Yes |
| Ollama integration | Excellent | Good | No | Good | No |
| Deployment complexity | Medium | Medium | Low | Medium | Low-Medium |
| GitHub stars (approx) | 90k+ | 50k+ | 28k+ | 20k+ | New |
Where Amurg Fits (And Where It Does Not)
Amurg approaches this space from a fundamentally different angle. It is not a chat UI for LLM APIs. It is an agent control plane --- a management layer for CLI-based AI coding agents.
What Amurg does that the others do not:
- Manages persistent agent processes. Claude Code, Copilot CLI, Gemini CLI, and Codex CLI are long-running processes that execute commands, modify files, and maintain state. Amurg manages their lifecycle rather than just sending messages to an API.
- Agent adapters. Nine built-in adapters for different coding agents. You control Claude Code and Gemini CLI from the same interface.
- Outbound-only WebSocket architecture. Your development machine initiates the connection outward. No inbound ports, no firewall changes, no public IP needed.
- Session persistence across disconnects. Close your browser, lose your network connection, switch devices --- the agent session continues. Reconnect and pick up where you left off. This is process-level persistence, not just conversation history.
- Mobile-first with voice input. Designed for controlling agents from a phone, including voice commands. Typing `git rebase -i HEAD~5` on a phone keyboard is miserable; speaking it is not.
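The outbound-only idea can be illustrated with a toy TCP example. Everything here is made up for the demo --- Amurg actually uses WebSockets, and the "relay" below is just a stand-in for a publicly reachable control plane --- but the connectivity property is the same: the development machine never opens a listening socket, so no inbound firewall rule or public IP is needed.

```python
# Toy sketch of an outbound-only connection using plain TCP sockets.
# The dev machine only dials OUT; commands flow back over that same link.
import socket
import threading

def relay_server(ready, results):
    """Stand-in for the publicly reachable control plane."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # ephemeral port for the demo
    srv.listen(1)
    results.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()              # dev machine connects TO us
    conn.sendall(b"run: git status\n")  # commands ride the dialed link
    results.append(conn.recv(1024))     # ...and so do the replies
    conn.close()
    srv.close()

ready = threading.Event()
results = []
t = threading.Thread(target=relay_server, args=(ready, results))
t.start()
ready.wait()

# The dev machine's side: one outbound connection, no listener at all.
dev = socket.create_connection(("127.0.0.1", results[0]))
command = dev.recv(1024)                # receives a command from the relay
dev.sendall(b"clean working tree\n")    # sends the result back out
dev.close()
t.join()

print(command.decode().strip())         # -> run: git status
```

Because the session rides a single outbound connection, the same pattern works from behind NAT, a corporate firewall, or a coffee-shop network.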
What Amurg does NOT do that the others do:
- No direct LLM API chat. If you want to send a prompt to GPT-4 or Claude and get a response, use Open WebUI or LobeChat. Amurg talks to agent CLIs, not LLM APIs directly.
- No RAG pipeline. No document upload, no retrieval-augmented generation. That is not what it is for.
- No plugin ecosystem. No image generation, no web browsing plugins, no code interpreter sandbox.
- No Ollama integration. If you run local models through Ollama, Open WebUI is the right tool.
This is not a limitation disguised as a feature. These are genuinely different tools for different problems. If you want to chat with an LLM, use a chat UI. If you want to manage AI coding agents that are running in your codebase, use a control plane.
Decision Framework
Choose Open WebUI if:
- You run local models through Ollama
- You want the most polished self-hosted chat experience
- You need built-in RAG for document Q&A
- Community size and longevity matter to you
Choose LobeChat if:
- You want maximum extensibility through plugins
- You use multiple LLM providers and want native support for all of them
- You value TTS/STT and multimedia capabilities
- You enjoy configuring and customizing your tools
Choose Chatbot UI if:
- You primarily use OpenAI models
- You want the simplest possible deployment
- A minimal, clean interface is your priority
- You do not need plugins, RAG, or advanced features
Choose LibreChat if:
- You work with multiple LLM providers on a team
- Conversation branching and model comparison matter to you
- You want ChatGPT-compatible plugins
- Token usage tracking is important
Choose Amurg if:
- You use multiple CLI-based AI coding agents (Claude Code, Copilot CLI, Gemini CLI, Codex)
- You need to access and control agents from your phone or a remote machine
- Session persistence across disconnects is critical for your workflow
- You want RBAC and audit logging for agent actions
- You are self-hosting by default, not as an afterthought
The Converging Landscape
The line between chat UIs and agent control planes is blurring. Open WebUI is adding more agentic features. LobeChat’s plugins bring it closer to tool use. And agent control planes like Amurg may eventually add direct LLM chat as a convenience feature.
But in 2026, the core architectures are still fundamentally different. Chat UIs are built around request-response: you send a message, you get a reply. Agent control planes are built around process management: you start a session, the agent works continuously, and you observe, steer, and approve.
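The difference between the two models can be sketched in a few lines. The "agent" below is a trivial stand-in subprocess, not a real coding agent --- the point is the lifecycle, not the payload: one model makes a fresh short-lived call per message, the other keeps one process alive and streams messages to it.

```python
# Request-response vs. process management, side by side.
import subprocess
import sys

# A stand-in "agent": echoes each stdin line back with an ack.
AGENT = [sys.executable, "-u", "-c",
         "import sys\n"
         "for line in sys.stdin:\n"
         "    print('ack:', line.strip())\n"]

# Model 1: request-response (chat UI style).
# Each exchange is a fresh, short-lived invocation; nothing persists.
out = subprocess.run(AGENT, input="hello\n", capture_output=True, text=True)

# Model 2: process management (control plane style).
# One persistent process; we stream input, observe output, and the
# process stays alive between messages.
proc = subprocess.Popen(AGENT, stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE, text=True)
proc.stdin.write("first task\n")
proc.stdin.flush()
line1 = proc.stdout.readline()
proc.stdin.write("second task\n")       # same process, still running
proc.stdin.flush()
line2 = proc.stdout.readline()
proc.stdin.close()
proc.wait()

print(out.stdout.strip())               # -> ack: hello
print(line2.strip())                    # -> ack: second task
```

A chat UI only ever needs Model 1. A control plane needs Model 2, plus everything that comes with it: reconnecting to a running process, auditing what it did, and deciding who is allowed to steer it.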
If you are reading this, you probably know which problem you have. Pick the tool that solves it honestly, not the one that claims to solve everything.
Setting Up Your Choice
All five tools are self-hosted and open source. Deployment typically involves Docker:
```bash
# Open WebUI
docker run -d -p 3000:8080 ghcr.io/open-webui/open-webui:main

# LobeChat
docker run -d -p 3210:3210 lobehub/lobe-chat

# Chatbot UI (requires Supabase)
docker compose up -d

# LibreChat
docker compose up -d

# Amurg
docker compose up -d
```
Each project’s documentation covers more detailed setup, including environment variables, database configuration, and production deployment. If you are evaluating multiple options, spinning up each one in Docker takes minutes, and there is no substitute for hands-on experience with the actual interface.
Whatever you choose, self-hosting your AI tools means your conversations, your code, and your data stay on infrastructure you control. That is the one thing every option on this list gets right.