[How To] Install Open-WebUI on Linux: Local AI Interface Guide
Installing Open-WebUI on Linux is one of the best ways to regain control over your AI interactions while keeping a ChatGPT-like experience local. Open-WebUI (formerly known as Ollama WebUI) provides a sleek, feature-rich frontend for managing Large Language Models (LLMs) served by Ollama. By hosting your own AI interface, you ensure data privacy, eliminate subscription fees, and gain the flexibility of open-source models like DeepSeek, Mistral, and Llama 3. In this guide, we walk through setting up this powerful interface on your Linux system so you have a robust environment for local AI work.
Table of Contents
- Prerequisites for Open-WebUI
- Step 1: Installing Ollama (The Backend)
- Step 2: Installing Docker on Ubuntu
- Method 1: Install Open-WebUI on Linux via Docker
- Method 2: Install Open-WebUI on Linux via Python
- Step 3: Accessing the Web Interface
- Basic Configuration and Model Selection
- Best Practices for Local AI
- Conclusion
Prerequisites for Open-WebUI
Before proceeding with the installation, ensure your system meets the following requirements:
- A Linux distribution (Ubuntu 24.04 LTS is recommended).
- Sufficient hardware resources (at least 8GB RAM, though 16GB+ is ideal for larger models).
- A modern GPU (NVIDIA preferred) for hardware acceleration, although CPU-only mode is supported.
- Root or sudo access to the terminal.
Step 1: Installing Ollama (The Backend)
Open-WebUI is only a frontend; it needs a backend to actually run the models. Ollama is the de facto standard for local LLM inference on Linux. If you haven't already, install and configure Ollama before moving forward; the official documentation is available on the Ollama website.
Run the following command to install Ollama automatically:
lc-root@ubuntu:~$ curl -fsSL https://ollama.com/install.sh | sh
Once installed, verify that the service is running:
lc-root@ubuntu:~$ systemctl status ollama
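Beyond systemctl, you can confirm that Ollama's API is actually answering requests. The sketch below probes the default port 11434 and the /api/version endpoint, and degrades to a clear message when the service is down:

```shell
# Probe Ollama's REST API on its default port (11434).
# Prints "ollama: reachable" when the service answers, otherwise a hint.
check_ollama() {
  if curl -fsS --max-time 2 http://localhost:11434/api/version >/dev/null 2>&1; then
    echo "ollama: reachable"
  else
    echo "ollama: not reachable (is the ollama service running?)"
  fi
}
check_ollama
```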
Step 2: Installing Docker on Ubuntu
The recommended way to install Open-WebUI on Linux is through Docker. It ensures all dependencies are managed within a container, preventing conflicts with your host system. First, update your local package index and install the necessary certificates:
lc-root@ubuntu:~$ sudo apt update
lc-root@ubuntu:~$ sudo apt install -y ca-certificates curl
Next, add Docker’s official GPG key and repository. Furthermore, you can refer to the official Docker installation guide for more details.
lc-root@ubuntu:~$ sudo install -m 0755 -d /etc/apt/keyrings
lc-root@ubuntu:~$ sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
lc-root@ubuntu:~$ sudo chmod a+r /etc/apt/keyrings/docker.asc
lc-root@ubuntu:~$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Finally, install the Docker Engine:
lc-root@ubuntu:~$ sudo apt update
lc-root@ubuntu:~$ sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
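Before moving on, it is worth confirming the engine actually works. This defensive sketch checks both the CLI and the daemon (`docker info` only succeeds when the daemon is reachable):

```shell
# Post-install sanity check: is the docker CLI present, and is the daemon up?
check_docker() {
  if command -v docker >/dev/null 2>&1; then
    echo "docker CLI: $(docker --version)"
    if docker info >/dev/null 2>&1; then
      echo "docker daemon: running"
    else
      echo "docker daemon: not reachable (try: sudo systemctl start docker, or run with sudo)"
    fi
  else
    echo "docker CLI: not found"
  fi
}
check_docker
```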
Method 1: Install Open-WebUI on Linux via Docker
With Docker ready, you can deploy the Open-WebUI container. This command assumes Ollama is running on the same host. It maps the internal port 8080 to your host’s port 8080 and creates a persistent volume for your data. You can find the image on the official Open-WebUI GitHub repository.
lc-root@ubuntu:~$ sudo docker run -d -p 8080:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main
The --add-host=host.docker.internal:host-gateway flag is crucial as it allows the container to communicate with the Ollama service running on the host machine.
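If Ollama runs on a different machine or port, you can instead point the container at it explicitly with the OLLAMA_BASE_URL environment variable, which Open-WebUI reads at startup. The address below is an example placeholder; substitute your Ollama host:

```shell
# Run Open-WebUI against an Ollama instance on another machine.
# 192.168.1.50 is an example address; replace it with your Ollama host.
sudo docker run -d -p 8080:8080 \
  -e OLLAMA_BASE_URL=http://192.168.1.50:11434 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```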
Method 2: Install Open-WebUI on Linux via Python
If you prefer not to use Docker, you can install Open-WebUI directly with Python's package manager. This suits users who want to build their own AI environments without containerization. Note that Open-WebUI requires Python 3.11 or higher.
First, create a virtual environment to keep your system clean:
lc-root@ubuntu:~$ mkdir openwebui && cd openwebui
lc-root@ubuntu:~/openwebui$ python3 -m venv venv
lc-root@ubuntu:~/openwebui$ source venv/bin/activate
Now, update pip and install Open-WebUI:
lc-root@ubuntu:~/openwebui$ pip install --upgrade pip
lc-root@ubuntu:~/openwebui$ pip install open-webui
Once the installation completes, start the server:
lc-root@ubuntu:~/openwebui$ open-webui serve
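Running the server in a foreground terminal stops when you log out. A minimal systemd unit (a sketch; the user name and paths assume the virtual environment created above and must be adjusted to your setup) keeps it running and starts it on boot:

```shell
# Create a systemd service for the pip-installed Open-WebUI.
# Adjust User= and the /home/youruser paths to match your system.
sudo tee /etc/systemd/system/open-webui.service > /dev/null <<'EOF'
[Unit]
Description=Open-WebUI
After=network-online.target ollama.service

[Service]
User=youruser
WorkingDirectory=/home/youruser/openwebui
ExecStart=/home/youruser/openwebui/venv/bin/open-webui serve
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now open-webui
```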
Step 3: Accessing the Web Interface
After the installation is complete, you can access the web interface through your browser. Navigate to the following address:
http://localhost:8080
The first time you access the site, you will be prompted to create an administrator account. This account is stored entirely on your machine, and no data is sent to external servers. Once logged in, you can start interacting with your local models.
Basic Configuration and Model Selection
To start chatting, you need to pull a model. If you already have models in Ollama, Open-WebUI will detect them automatically. For instance, if you want to run DeepSeek-R1 locally, you can pull it directly through the Open-WebUI settings or via the terminal:
lc-root@ubuntu:~$ ollama pull deepseek-r1:8b
In the Open-WebUI interface, select the model from the dropdown menu at the top of the chat window. You can now prompt the model just as you would with ChatGPT, with the added benefit of complete privacy.
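You can also exercise a model straight from the terminal through Ollama's REST API (the POST /api/generate endpoint). This sketch falls back to an error message when nothing is listening on the default port:

```shell
# Ask a local model for a one-off completion via Ollama's REST API.
# Prints a fallback error JSON if Ollama is not reachable on localhost:11434.
ask_model() {
  local model="$1" prompt="$2"
  curl -fsS --max-time 60 http://localhost:11434/api/generate \
    -d "{\"model\": \"$model\", \"prompt\": \"$prompt\", \"stream\": false}" \
    2>/dev/null \
    || echo '{"error": "ollama not reachable on localhost:11434"}'
}
ask_model "deepseek-r1:8b" "Say hello in one sentence."
```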
Best Practices for Local AI
To ensure a smooth experience with your local AI setup, consider the following best practices:
- Keep Images Updated: If using Docker, regularly pull the latest image with docker pull ghcr.io/open-webui/open-webui:main to receive security updates and new features.
- Resource Management: Monitor your system's RAM usage. If you notice significant slowdowns, try smaller quantized versions of models (e.g., 4-bit quantization).
- Data Backups: Regularly back up the open-webui Docker volume or your local data directory to preserve your chat history and settings.
- Secure Access: If you plan to access the interface from another machine on your network, use a firewall or a reverse proxy like Nginx with SSL.
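The backup advice can be scripted. The sketch below assumes the open-webui volume name used earlier in this guide; it archives the volume through a throwaway Alpine container and prints a hint when Docker is unavailable:

```shell
# Archive the open-webui Docker volume into a timestamped tarball.
# Run as root or as a user in the docker group on a real system.
backup_openwebui() {
  local dest="${1:-$HOME/openwebui-backups}"   # destination directory (example default)
  mkdir -p "$dest"
  local stamp
  stamp=$(date +%Y%m%d-%H%M%S)
  if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker run --rm \
      -v open-webui:/data:ro \
      -v "$dest":/backup \
      alpine tar czf "/backup/open-webui-$stamp.tar.gz" -C /data .
    echo "backup written: $dest/open-webui-$stamp.tar.gz"
  else
    echo "docker unavailable; no backup taken"
  fi
}
backup_openwebui
```

Scheduling this function from cron (or a systemd timer) gives you regular snapshots of your chat history and settings.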
Conclusion
By choosing to install Open-WebUI on Linux, you have built a private, high-performance alternative to cloud-based AI services. Whether you used the streamlined Docker approach or the direct Python installation, you now have a sophisticated platform for exploring the world of LLMs. As the open-source AI community continues to evolve, your local setup will serve as a powerful tool for everything from coding assistance to creative writing, all while keeping your data under your own control.