Open WebUI offers a robust, feature-packed, and intuitive self-hosted interface that operates entirely offline. It supports LLM runners such as Ollama as well as OpenAI-compatible APIs, and it is open source under the MIT license. It is easy to download and install, and it has excellent documentation. I chose to install it on both my Linux computer and my M2 MacBook Air. The software is written in Svelte, Python, and TypeScript and has a community of over two hundred thirty contributors working on it.
The documentation states that one of its key features is effortless setup, and it was indeed easy to install; I chose to use Docker. It boasts a number of other great features, including OpenAI API integration, full Markdown and LaTeX support, and a model builder to easily create custom Ollama models within the application. Be sure to check the documentation for all the nuances of this amazing software.
I decided to install Open WebUI with bundled Ollama support for CPU only, since my Linux computer does not have a GPU. This container image combines Open WebUI and Ollama for an effortless setup. I used the following Docker command, copied from the GitHub repository.
$ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
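To make sure everything started correctly (my own quick check, not part of the project's instructions), you can confirm the container is running and follow its startup output with standard Docker commands:

$ docker ps --filter name=open-webui
$ docker logs -f open-webui

The first command should list the open-webui container as "Up," and the log output shows Open WebUI and the bundled Ollama server starting.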
On the MacBook I chose a slightly different install, opting to use the existing Ollama installation because I wanted to conserve space on the smaller host drive. I used the following command, also taken from the GitHub repository.
% docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
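Because this variant relies on the Ollama server already running on the host, it is worth confirming that Ollama is listening on its default port, 11434, before starting the container. This is my own sanity check rather than a step from the repository:

% curl http://localhost:11434

If Ollama is up, the request returns a short "Ollama is running" message.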
Once the install was complete, I pointed my browser to:
http://localhost:3000/auth
I was presented with a login page asking for an email address and password.
On first login, use the ‘Sign up’ link and provide your name, email address, and a password; subsequent logins require only the email address and password. After logging in for the first time, you are shown a “What’s New” notice describing recent changes to the project and software.
After pressing “Okay, Let’s Go!” I was presented with this display.
Now I am ready to start using Open WebUI.
The first thing you will want to do is ‘Select a Model’ at the top left of the display. You can search for models that are available from the Ollama project.
On an initial install you will need to download a model from Ollama.com. I entered the name of the model I wanted in the search window and pressed ‘Enter,’ and the software downloaded the model to my computer.
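Models can also be pulled from the command line rather than through the search window. On the MacBook, where Ollama runs on the host, the standard ollama pull command works directly; for the bundled Linux container, the same command can be run inside the container with docker exec (assuming the ollama binary is on the container’s PATH, as it should be in the bundled image). Phi3.5 here is simply the model I chose as an example:

% ollama pull phi3.5
$ docker exec -it open-webui ollama pull phi3.5

Either way, the model appears in the ‘Select a Model’ list once the download completes.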
Now that the model is downloaded and verified, I am ready to begin using Open WebUI with my locally hosted Phi3.5 model. Other models can be downloaded and installed just as easily. Be sure to consult the excellent getting-started guide, and have fun with this feature-rich interface. The project also provides tutorials to assist new users. In short, Open WebUI’s intuitive interface makes it easy to take full advantage of everything the software offers.