Open WebUI: A nuanced approach to locally hosted Ollama

Open WebUI offers a robust, feature-packed, and intuitive self-hosted interface that operates seamlessly offline. It supports a variety of large language model runners, including Ollama and OpenAI-compatible APIs. Open WebUI is open source under the MIT license, it is easy to download and install, and it has excellent documentation. I chose to install it on both my Linux computer and my M2 MacBook Air. The software is written in Svelte, Python, and TypeScript and has a community of more than 230 developers working on it.

The documentation states that one of its key features is effortless setup, and it was indeed easy to install. I chose to use the Docker image. It boasts a number of other great features, including OpenAI API integration, full Markdown and LaTeX support, and a model builder for creating custom Ollama models within the application. Be sure to check the documentation for all the nuances of this amazing software.
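The model builder corresponds roughly to Ollama’s own Modelfile format, so the same kind of custom model can also be created from a terminal. Here is a minimal sketch, with a made-up model name and system prompt:

$ cat > Modelfile <<'EOF'
FROM phi3
SYSTEM """You are a writing coach who scores essays against a rubric."""
PARAMETER temperature 0.7
EOF
$ ollama create writing-coach -f Modelfile
$ ollama run writing-coach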

I decided to install Open WebUI with bundled Ollama support for CPU only, since my Linux computer does not have a GPU. This container image unites Open WebUI and Ollama for an effortless setup. I used the following Docker command, copied from the GitHub repository.

$ docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
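Once the container starts, it is worth confirming that it is running and watching the startup logs before opening the browser:

$ docker ps --filter name=open-webui   # confirm the container is up
$ docker logs -f open-webui            # follow the startup logs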

On the MacBook I chose a slightly different install, opting to use the existing Ollama installation because I wanted to conserve space on the smaller host drive. I used the following command, taken from the GitHub repository.

% docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
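Because this container relies on the Ollama instance already installed on the host, it is worth confirming that Ollama is answering on its default port (11434, unless you have changed it) before logging in:

% curl http://localhost:11434/api/tags

The command returns a JSON list of the models Ollama already has available.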

Once the install was complete, I pointed my browser to:

http://localhost:3000/auth

I was presented with a login page and asked to supply my email and a password.

Screen picture by Don Watkins, CC BY-SA 4.0

For the first login, use the ‘Sign up’ link and provide your name, email, and a password. Subsequent logins require only the email and password. After logging in for the first time, you are presented with a “What’s New” summary of the project and software.

After pressing “Okay, Let’s Go!” I am presented with this display.

Screen picture by Don Watkins, CC BY-SA 4.0

Now I am ready to start using Open WebUI.

The first thing you want to do is ‘Select a Model’ at the top left of the display. You can search for models that are available from the Ollama project.

Screen picture by Don Watkins, CC BY-SA 4.0

On an initial install, you will need to download a model from Ollama.com. I entered the model name I wanted in the search window and pressed ‘Enter,’ and the software downloaded the model to my computer.

Screen picture by Don Watkins, CC BY-SA 4.0
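With the bundled image, a model can also be pulled from the command line rather than the search window, assuming the ollama CLI inside the container is on the path; a sketch using the container name from the install command above:

$ docker exec -it open-webui ollama pull phi3.5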

Now that the model is downloaded and verified, I am ready to begin using Open WebUI with my locally hosted Phi3.5 model. Other models can be downloaded and installed just as easily. Be sure to consult the excellent getting started guide and have fun using this feature-rich interface; the project also has tutorials to assist new users. Its intuitively designed interface makes it easy to take advantage of everything Open WebUI has to offer.

Evaluating writing using open source artificial intelligence

In today’s digital age, writers seek tools that enhance their craft and provide real-time feedback and assistance. Enter Ollama, an open source framework for running large language models locally. Coupled with the Phi3 model, it promises to help refine your writing through evaluations that go well beyond grammar checking. This article explores how Ollama, paired with Phi3’s language comprehension, can change how writers assess and improve their work. So buckle up as we embark on an exploration of your writing potential using these AI tools!

I have been experimenting with Ollama and various models designed to work with it. In this article, I use Ollama and Phi3 to demonstrate how open source tools can evaluate writing. Both Ollama and Phi3 are open source and MIT licensed, and they work well together. You need to have Ollama installed on your computer. My daily driver is an Intel NUC 11 with an i7 processor running Linux Mint Cinnamon. Though this is a powerful computer, running a large language model takes more than a great CPU; a powerful GPU would make the process quicker.

First, I needed to make sure that Ollama was installed. If you are on Linux, you do that by executing the following code in a terminal.

$ curl -fsSL https://ollama.com/install.sh | sh
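On Linux Mint the script sets Ollama up as a systemd service, so you can confirm it is installed and running before pulling any models:

$ ollama --version          # confirm the client and server are installed
$ systemctl status ollama   # confirm the background service is running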

The installation is very straightforward. Next, you will need to download the Phi3 model. Enter the following command into your terminal.

$ ollama pull phi3

This pulls the Phi3 model down to your computer. You can check to make sure it is there by entering the following code.

$ ollama list

You should see a list of all the models loaded on your computer.

NAME              ID              SIZE    MODIFIED
phi3:latest       4f2222927938    2.2 GB  56 minutes ago
cooking:latest    59c8496c62aa    4.7 GB  7 days ago
llama3.1:latest   42182419e950    4.7 GB  7 days ago

Now I am ready to begin. I wanted Ollama and the Phi3 model to develop a rubric to evaluate my writing sample. I needed to enter two commands. First, I got the LLM running, which opens a REPL (Read-Eval-Print Loop) prompt very similar to a Python REPL.

$ ollama run phi3:latest
>>> create a rubric to score college level writing.  

The model quickly created a rubric to score my writing, some of which I have copied and pasted here:

“College Level Writing Rubric: Essay (20 points)

I. Introduction and Thesis Statement (4/20 = 2 points):

A student’s introduction must be clear, concise, engaging, and
well-structured. A strong thesis statement should identify the topic of
discussion clearly while presenting an argument or perspective that sets
up a logical progression throughout the essay. Students will lose one
point for each instance where these two components fail to establish
context effectively (e.g., unclear writing style, unconvincing tone).

1 Point – Clear and engaging introduction with well-structured paragraphs setting the stage of discussion or argumentation


2 Points – Introduction is structured clearly but fails in some areas such as clarity of language use or lacking an enticing hook to captivate readers’ interest. (Lose one point)


1 Point – Thesis statement presented, but lacks precision and clear articulation about the topic discussed; does not establish a logical thread for essay progression

2 Points – Clearly stated thesis that presents argument or perspective while setting up an effective structure to follow throughout. (Lose one point)

0 PointS – Introduction is unclear, disjointed language use fails in
providing context and lacks persuasive tone necessary; no clear indication
of what the essay will discuss

The rubric, over seven hundred words long, was generated in a few minutes. All text output from Ollama is in Markdown. The rubric renders much more quickly on an Apple Silicon MacBook with an M2 processor. Though exceptionally powerful, the NUC 11 with the i7 lacks a GPU, so all of the work falls on the CPU.
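Because the output is Markdown, it is convenient to run the same prompt non-interactively and redirect it to a file for later use; ollama run accepts the prompt as an argument, so a sketch (the file name is my own) looks like this:

$ ollama run phi3:latest "Create a rubric to score college level writing." > rubric.md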

Now that the rubric has been created, I entered the following command in the Ollama REPL to evaluate an article I wrote for Opensource.com nearly ten years ago.

>>> Use the rubric to evaluate this article https://opensource.com/education/15/12/my-open-source-story-don-watkins

The process took a few minutes and provided an in-depth review of my article. The evaluation was very detailed and over nine hundred words long. I have pasted the first part of the review here.

“2 Points – Introduction engages with opening hook; however, lacks clarity in setting the stage and doesn’t align well with Watkins’ thesis statement (lose up to three points) 1 Point – Thesis statement present but vague or imprecise about what readers should expect throughout this article. Lacks clear alignment between I and II components of essay-like structure; no explicit roadmap provided for reader follow along (lose two maximum points); fails in captivating the audience right from introduction


0 PointS – Introduction lacks coherence, disjointed language use provides little context or interest to readers about open source contributions. No engaging hook presented nor clear alignment between I and II components of essay-like structure; does not provide explicit roadmap for reader follow along (lose one point)…”
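A variation on this workflow is to keep the rubric and the article as local files and hand both to the model with shell command substitution; a sketch with hypothetical file names:

$ ollama run phi3:latest "Use this rubric: $(cat rubric.md) to evaluate this article: $(cat my-open-source-story.md)"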

Using large language models to assess writing could offer the nuanced feedback writers need to improve their work. Are there potential issues? Will artificial intelligence models replace copywriters? What other implications might they have for how we write and rewrite?

Where politicians fear to tread…

A nation weeps, a mother cries,
Tiny hands clutch at crimson skies.
“Just keep them safe,” they whisper low,
But hollow words, the truth don’t show.

In halls of power, shadows dance,
While children tremble, afraid to glance.
Gutless men, in ivory towers,
Bow to pressure, silencing powers.

They cater to a lobby’s plea,
For profits made and safety fee.
Blind eye they turn on innocent lives,
Clandestine deals beneath the tides.

The echo of their promises rings hollow,
A symphony of fear in every poll.
While school halls become battlegrounds rife,
And innocence fades into the knife.

How can they stand and claim “common sense,”
When lives are lost in this gun-fueled trance?
Their cowardice a bitter pill to swallow,
As innocence bleeds on the playground’s hollow.

They preach of freedom, but at what cost?
While children pay for their moral frost.
Rise from slumber, break the chains of fear,
Stand for justice, let the truth be clear!

Poem generated by the Gemma2 large language model from the writing prompt: “Write a poem about the plight of gutless politicians craven to the gun lobby while children’s lives are endangered in America’s classrooms.”

Harnessing the Power of Local AI: Exploring Ollama and Llama 3.1 for Advanced Language Modeling

Just yesterday, I came across Ollama, an open source project for running large language models, including Meta’s Llama models, locally. I had been helping a friend learn more about locally hosted large language models, and my internet search led me to Ollama. I installed it quickly on my Linux Mint desktop using the command supplied on the project’s website.

$ curl -fsSL https://ollama.com/install.sh | sh

It didn’t take long to install the software, and then I wanted to explore how it worked. I decided to download and install the recommended Llama3.1 model. Following the instructions, I entered the following command in the terminal:

$ ollama run llama3.1

The model began to download and install, which took a couple of minutes; this will vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB of RAM and a robust internet connection. Once the model was downloaded, I was presented with a prompt to ‘talk’ with the LLM.

>>> What is the history of St. Bonaventure University? 

“St. Bonaventure University, a private Franciscan university located in Allegany County, New York, has a rich and storied history that spans over
160 years.”

It’s not bad, but it’s not entirely accurate either. The results were rendered in Markdown, which is a real plus. I also knew that having a GPU would render the results much more quickly, so I wanted to install Ollama on my M2 MacBook Air.
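Before moving to the Mac, it is worth noting that Ollama also listens on a local REST API (port 11434 by default), so the same question can be asked from a script; a minimal sketch using curl:

$ curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1",
  "prompt": "What is the history of St. Bonaventure University?",
  "stream": false
}'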

On the MacBook, I followed the much easier directions: download Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program is installed in the MacBook’s Applications folder. When the program is launched, it directs me to the Terminal app, where I can enter the same commands I had entered on my Linux computer.

I ran the same commands on the Mac to install Llama3.1, then asked again for the history of St. Bonaventure University in Olean, New York. The results appeared much more quickly, but they were even less accurate; so far off, in fact, that they were comical.

“The institution was founded in 1858 by five Jesuit priests from the Buffalo Diocese as St. Bonaventure College. The college was established on a 200-acre farm in Deerpark, New York, with the goal of providing a Catholic education to young men.”

I downloaded and installed the Gemma 2 model from Google to see if it was faster and more accurate. The commands are the same whether using the Linux or Mac terminal.

$ ollama run gemma2:2b

Gemma 2:2b is the smallest Gemma 2 model, at 1.6 GB, and it downloaded and installed quickly. The results were similarly inaccurate, but they rendered much more quickly on both the Mac and the Linux computer. Llama3.1 was only mildly accurate for information from 2021 and earlier, and when I asked Gemma2 who was running for President of the United States, its information was only current as of October 26, 2023.
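Trying several models does add up on disk, so it is worth knowing that models can be listed and removed with the same CLI on either platform:

$ ollama list            # show the models currently installed
$ ollama rm gemma2:2b    # remove a model you no longer need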

Despite the results, I am encouraged that it is possible to run large language models on PCs.