Seamlessly Convert Your Markdown Docs to Office Formats Using Pandoc

I have been experimenting a lot with Ollama and other artificial intelligence tools, and the answers to my prompts are always rendered in Markdown. I have MarkText on my Linux computer and MacDown on my MacBook Air, so I can easily copy and paste the output into either of those editors and save it as a Markdown file. However, when I want to share those files with colleagues who are unfamiliar with Markdown, I need a way to convert them into a format that’s easily accessible for them. My Markdown editors can only export Markdown files as HTML or PDF.

That problem is easily solved with Pandoc, a great tool that anyone can install on Linux, MacOS, or Windows to convert Markdown into any number of different formats. Install Pandoc on Linux with one of the following commands:

$ sudo dnf install pandoc
or 
$ sudo apt install pandoc

On MacOS, use Homebrew to install Pandoc by opening a terminal and entering the following command:

$ brew install pandoc

You can install Pandoc on Microsoft Windows using Chocolatey with the following command:

choco install pandoc

Once the application is installed, it works very well from the command line. The project maintains great documentation. I needed to convert a Markdown document to .docx so my Windows-using colleagues could easily read the output from the LLM I had been using. Using the documentation and the program’s man page on my Linux system, I entered the following command:

$ pandoc -o ProposedLease.docx -f markdown -t docx LeaseReview.md

The conversion was flawless and occurred in a second or two; your experience may vary based on CPU, RAM, and the length of the document being converted. The Markdown file of our lease review was converted to “ProposedLease.docx”, which I could easily share with my colleagues who were using Microsoft Word.
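
Pandoc also works well in scripts. If you accumulate a folder full of Markdown files, a short Python sketch like the one below can convert them all at once. This is only a sketch, and it assumes that pandoc is on your PATH and that the Markdown files sit in the current directory.

import subprocess
from pathlib import Path

# Convert every Markdown file in the current directory to .docx
for md_file in Path('.').glob('*.md'):
    docx_file = md_file.with_suffix('.docx')
    subprocess.run(
        ['pandoc', '-f', 'markdown', '-t', 'docx', '-o', str(docx_file), str(md_file)],
        check=True,  # raise an error if any conversion fails
    )
    print(f'Converted {md_file} to {docx_file}')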

If you are uncomfortable installing software on your computer, or you are an unprivileged user in a corporate or educational setting, you can use Pandoc online. Pandoc is open source with a GPL v2 license. For more information about Pandoc, be sure to consult the project’s website and examine all of the options that the program provides.

Evaluating writing using open source artificial intelligence

In today’s digital age, writers seek tools that enhance their craft and provide real-time feedback and assistance. Enter Ollama, an open-source tool that makes it easy to run large language models locally. Coupled with the Phi3 model, this powerful duo promises real benefits in refining your writing style through sophisticated evaluations that go beyond grammar checking. This article will explore how Ollama, paired with Phi3’s language comprehension, can transform the writer’s journey toward excellence. So buckle up as we embark on an exploration of your writing potential using these AI tools!

I have been experimenting with Ollama and various models designed to work with it. In this article, I wanted to use Ollama and Phi3 to demonstrate the power of open source to revolutionize the evaluation of writing. Both Ollama and Phi3 are open source, both have MIT licenses, and they work well together. You need to have Ollama installed on your computer. My daily driver is an Intel NUC 11 with an i7 processor running Linux Mint Cinnamon. Though this is a powerful computer, running a large language model takes more than a great CPU; a powerful GPU would make the process quicker.

First, I needed to make sure that Ollama was installed. If you are on Linux, you do that by executing the following code in a terminal.

$ curl -fsSL https://ollama.com/install.sh | sh

The installation is very straightforward. Next, download the Phi3 model by entering the following command into your terminal.

$ ollama pull phi3

This pulls the Phi3 model down to your computer. You can check that it is there by entering the following command.

$ ollama list

You should see a list of all the models loaded on your computer.

NAME            	ID          	SIZE  	MODIFIED       
phi3:latest     	4f2222927938	2.2 GB	56 minutes ago	
cooking:latest  	59c8496c62aa	4.7 GB	7 days ago    	
llama3.1:latest 	42182419e950	4.7 GB	7 days ago    	

Now I am ready to begin. I wanted Ollama and the Phi3 model to develop a rubric to evaluate my writing sample. I needed to enter two commands: first, one to get the LLM running, which opens a REPL (Read-Eval-Print Loop) prompt very similar to a Python REPL.

$ ollama run phi3:latest
>>> create a rubric to score college level writing.

The model quickly created a rubric to score my writing, some of which I have copied and pasted here:

“College Level Writing Rubric: Essay (20 points)

I. Introduction and Thesis Statement (4/20 = 2 points):

A student’s introduction must be clear, concise, engaging, and
well-structured. A strong thesis statement should identify the topic of
discussion clearly while presenting an argument or perspective that sets
up a logical progression throughout the essay. Students will lose one
point for each instance where these two components fail to establish
context effectively (e.g., unclear writing style, unconvincing tone).

1 Point – Clear and engaging introduction with well-structured paragraphs setting the stage of discussion or argumentation


2 Points – Introduction is structured clearly but fails in some areas such as clarity of language use or lacking an enticing hook to captivate readers’ interest. (Lose one point)


1 Point – Thesis statement presented, but lacks precision and clear articulation about the topic discussed; does not establish a logical thread for essay progression

2 Points – Clearly stated thesis that presents argument or perspective while setting up an effective structure to follow throughout. (Lose one point)

0 PointS – Introduction is unclear, disjointed language use fails in
providing context and lacks persuasive tone necessary; no clear indication
of what the essay will discuss…”

The rubric, over seven hundred words long, was generated in a few minutes. All text output from Ollama is in Markdown. The rubric would render much more quickly on an Apple Silicon MacBook with an M2 processor. Though exceptionally powerful, the NUC 11 with the i7 lacks a GPU, which would ease the strain on the CPU.

Now that the rubric has been created, I entered the following command in the Ollama REPL to evaluate an article I wrote for Opensource.com nearly ten years ago.

>>>Use the rubric to evaluate this article https://opensource.com/education/15/12/my-open-source-story-don-watkins

The process took a few minutes and provided an in-depth review of my article. The evaluation was very detailed and over nine hundred words long. I have pasted the first part of the review here.

“2 Points – Introduction engages with opening hook; however, lacks clarity in setting the stage and doesn’t align well with Watkins’ thesis statement (lose up to three points) 1 Point – Thesis statement present but vague or imprecise about what readers should expect throughout this article. Lacks clear alignment between I and II components of essay-like structure; no explicit roadmap provided for reader follow along (lose two maximum points); fails in captivating the audience right from introduction


0 PointS – Introduction lacks coherence, disjointed language use provides little context or interest to readers about open source contributions. No engaging hook presented nor clear alignment between I and II components of essay-like structure; does not provide explicit roadmap for reader follow along (lose one point)…”
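
Both steps can also be scripted rather than typed at the REPL. Here is a minimal sketch using the chat interface of the ollama Python module (an assumption: you have installed the module with ‘pip install ollama’). Keeping the rubric in the conversation history lets the follow-up request refer back to it:

import ollama

# First turn: ask the model to write the rubric
messages = [{'role': 'user', 'content': 'Create a rubric to score college level writing.'}]
reply = ollama.chat(model='phi3', messages=messages)
print(reply['message']['content'])  # the generated rubric

# Second turn: keep the rubric in context and ask for an evaluation
messages.append({'role': 'assistant', 'content': reply['message']['content']})
messages.append({'role': 'user', 'content': 'Use the rubric to evaluate this essay: ...'})  # paste the essay text in place of the ellipsis
reply = ollama.chat(model='phi3', messages=messages)
print(reply['message']['content'])  # the evaluation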

Using large language models to assess writing could offer the nuanced feedback writers need to enhance their work. Are there potential issues? Will artificial intelligence models replace copywriters? What other implications might they have for how we write and revise?

Using Ollama to write an evaluation rubric

In my previous article, I discussed using Ollama and the Llama3.1 model to assess my writing. Since I’m new to teaching writing, I was curious to see if Ollama could generate an evaluation rubric for grading student writing and then use that rubric to evaluate a sample of student writing.

I am going to assume that you have already installed Ollama on your computer and downloaded the Llama3.1 model. If not, be sure to follow the directions provided in the previous article. Start Ollama and run the model:

$ ollama run llama3.1:latest

>>>Develop an assessment rubric for the evaluation of undergraduate student writing

Here is the assessment rubric that was created by Ollama and Llama3.1:

Content (40 points)

  • Thesis Statement: Does the thesis statement clearly and concisely present the main argument or claim? (5 points)
    • 4: The thesis statement is clear, concise, and well-supported by evidence.
    • 3: The thesis statement is mostly clear and concise, but may lack some supporting evidence.
    • 2: The thesis statement is somewhat unclear or lacking in concision, but has some supporting evidence.
    • 1: The thesis statement is unclear, vague, or lacks supporting evidence.
  • Argumentation: Are the arguments presented well-supported by evidence? (10 points)
    • 9-10: Strong arguments are presented with ample and relevant evidence to support them.
    • 7-8: Good arguments are presented, but may lack some relevant evidence.
    • 5-6: Fair arguments are presented, but may be lacking in supporting evidence.
    • 0-4: Poor or absent argumentation.
  • Analysis: Is the analysis of the topic thorough and insightful? (10 points)
    • 9-10: The analysis is thorough, well-supported, and offers new insights into the topic.
    • 7-8: Good analysis is presented, but may lack some depth or insight.
    • 5-6: Fair analysis is presented, but may be lacking in detail or insight.
    • 0-4: Poor or absent analysis.
  • Counterarguments: Are counterarguments addressed and refuted effectively? (5 points)
    • 4: Effective counterarguments are presented and refuted with strong evidence.
    • 3: Some counterarguments are acknowledged, but not fully refuted.
    • 2: Few counterarguments are mentioned or refuted poorly.
    • 1: No counterarguments are addressed.

Organization (20 points)

  • Introduction: Is the introduction engaging and effective in setting up the rest of the paper? (5 points)
    • 4: The introduction is clear, concise, and effectively sets up the rest of the paper.
    • 3: The introduction is mostly clear and concise, but may lack some engagement or clarity.
    • 2: The introduction is somewhat unclear or lacking in concision.
    • 1: The introduction is weak or absent.
  • Body Paragraphs: Are the body paragraphs well-organized and logically connected? (10 points)
    • 9-10: Strong organization and logical connections between paragraphs are evident.
    • 7-8: Good organization and some logical connections, but may be lacking in coherence.
    • 5-6: Fair organization and some attempts at connecting paragraphs.
    • 0-4: Poor or absent organization.
  • Conclusion: Is the conclusion effective in summarizing the main points and reiterating the thesis? (5 points)
    • 4: The conclusion is clear, concise, and effectively summarizes the main points.
    • 3: The conclusion is mostly clear and concise, but may lack some summary or repetition of the thesis.
    • 2: The conclusion is somewhat unclear or lacking in concision.
    • 1: The conclusion is weak or absent.

Style (20 points)

  • Writing Style: Is the writing style clear, concise, and engaging? (10 points)
    • 9-10: Strong writing style is evident, with clear, concise language and engaging tone.
    • 7-8: Good writing style is present, but may lack some clarity or concision.
    • 5-6: Fair writing style is present, but may be lacking in clarity or engagement.
    • 0-4: Poor or absent writing style.
  • Mechanics: Are grammar, spelling, and punctuation accurate? (5 points)
    • 4: Strong attention to detail and accuracy in mechanics.
    • 3: Good attention to detail and some accuracy, but may be lacking in others.
    • 2: Fair attention to detail, with some errors in mechanics.
    • 1: Poor or absent attention to detail.

Mechanics (20 points)

  • Grammar: Is grammar accurate and effective? (5 points)
    • 4: Strong grammar is present throughout the paper.
    • 3: Good grammar is mostly present, but may be lacking in some areas.
    • 2: Fair grammar is present, with some errors.
    • 1: Poor or absent grammar.
  • Spelling: Is spelling accurate and effective? (5 points)
    • 4: Strong spelling is evident throughout the paper.
    • 3: Good spelling is mostly present, but may be lacking in some areas.
    • 2: Fair spelling is present, with some errors.
    • 1: Poor or absent spelling.
  • Punctuation: Is punctuation accurate and effective? (5 points)
    • 4: Strong punctuation is evident throughout the paper.
    • 3: Good punctuation is mostly present, but may be lacking in some areas.
    • 2: Fair punctuation is present, with some errors.
    • 1: Poor or absent punctuation.

Total: The total score will range from 0 to 100 points.

I copied and pasted the text from the article I wrote in 2022 about 12 Essential Linux Commands for Beginners and asked Ollama to evaluate that article using the rubric it had just created. Here is the result:

Content

  • The article provides a comprehensive list of 12 essential Linux commands, along with explanations, examples, and usage tips.
  • The topics covered are relevant to both beginners and experienced users.
  • The tone is conversational and easy to understand.

Organization

  • The article is well-structured, with each command introduced as a separate section.
  • The sections are concise and to the point, making it easy for readers to quickly grasp the key concepts.
  • The use of headings and subheadings helps to break up the content and make it more readable.

Writing Style

  • The writing style is clear, concise, and engaging.
  • The author uses everyday language, avoiding technical jargon whenever possible.
  • The text includes humorous anecdotes and personal experiences, making it more relatable and entertaining.

Structure and Pacing

  • The article flows smoothly from one command to the next, with each section building on the previous one.
  • The pacing is well-balanced, with sufficient information provided for readers to understand the concepts without feeling overwhelmed.
  • The use of examples and exercises helps to reinforce learning and keep readers engaged.

Technical Quality

  • The technical accuracy of the article appears to be high, with correct usage examples and syntax.
  • The author provides sufficient context and explanations for each command, making it easy for readers to understand their functionality.
  • The inclusion of links to external resources (e.g., trashy and trash-cli) adds value and helps readers explore further.

Originality and Depth

  • While the article covers familiar topics, it offers a fresh perspective and new insights into each command.
  • The author provides additional context and explanations that go beyond mere listing of commands.
  • The inclusion of personal anecdotes and experiences adds depth and relatability to the content.

Conclusion and Future Work

  • The article concludes with an open-ended question, encouraging readers to share their favorite Linux commands.
  • This approach fosters engagement and encourages further discussion, which can be beneficial for building a community around Linux and its usage.

Overall, I would give this article a score of 4.5 out of 5 in content, organization, writing style, structure and pacing, technical quality, originality and depth, and conclusion and future work. The only area where it could improve is by providing more advanced or specialized topics for experienced users.
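
To repeat the experiment with your own writing, a short script can stitch a saved rubric and an article into a single prompt. Here is a minimal sketch, assuming you have saved them to the hypothetical local files rubric.md and article.md and installed the Python module with ‘pip install ollama’:

import ollama

# Hypothetical local files: the saved rubric and the article to evaluate
rubric = open('rubric.md', encoding='utf-8').read()
article = open('article.md', encoding='utf-8').read()

response = ollama.generate(
    model='llama3.1',
    prompt=f'Use the following rubric:\n\n{rubric}\n\nto evaluate this article:\n\n{article}',
)
print(response['response'])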

What are your thoughts on using open source LLMs to evaluate student writing? What are the ethical considerations? Have you used a large language model to evaluate student writing?

Using Ollama to evaluate what you write

Last year, a colleague shared with me how he used ChatGPT to evaluate content generated by his students at a local university. I was intrigued by the idea but also concerned about the ethical issues surrounding using publicly available LLMs to read student writing without their consent. This led me to look for locally hosted LLMs that could aid in grading and evaluating student work without putting their work in a publicly accessible large language model.

I am not qualified to evaluate writing, but I do use Grammarly to help me with my writing. Are there open-source tools in addition to proprietary products like Grammarly that could aid me in the writing process, and could they be successfully used by university staff and students to improve writing outcomes?

My experience with Ollama led me to consider using it to evaluate my writing and suggest to others that it could be used to assess student writing. If you haven’t already done so, you will need to install Ollama. The commands below will install it on Linux, which is my daily driver. There are detailed instructions on the Ollama website to help you download and install it on MacOS and Windows.

curl -fsSL https://ollama.com/install.sh | sh

I decided to use the Llama3.1 model for this project. I issued the following series of commands in the terminal to launch Ollama, download the Llama3.1 model and then run that model.

$ ollama pull llama3.1 

$ ollama run llama3.1

>>>Evaluate the following writing sample:

I copied and pasted an article I had written into the Ollama REPL (Read-Eval-Print Loop) and pressed Enter. The fourteen-hundred-word article, written in 2016, was about a lecture our Franciscan Fraternity had attended on the similarities between our faith and Islam. The evaluation took several minutes on my powerful Linux desktop, which has a quad-core 11th Gen Intel Core i7-1165G7 with 64 gigabytes of RAM but lacks a GPU, something that matters a great deal when using Ollama and these models. Nonetheless, it completed the task and provided a three-hundred-fifty-five-word writing analysis. A sample of the analysis is provided below.

“This writing sample appears to be a reflective essay or report on a lecture by Fr. Michael Calabria about Islam from a Catholic perspective. Here’s an evaluation:

Strengths:

Clear structure: The text is well-organized, with a logical flow of ideas and a clear conclusion.

Engaging narrative: The writer effectively conveys the content of the lecture, making it easy to follow and understand.”

The analysis provided a list of weaknesses and suggestions for improvement.

Weaknesses:

Lack of a clear thesis statement: The writer’s purpose or main takeaway from the lecture is not explicitly stated, leaving the reader to infer the significance of the event.

Lengthy and detailed report: While the narrative is engaging, the text feels overly long and detailed, which may make it challenging for some readers.

Suggestions for improvement:

Edit for clarity and consistency: Review the text for minor errors in formatting, punctuation, and grammar to ensure a polished final product.

Consider adding more analysis or reflection: While the narrative is engaging, some readers may appreciate more analysis or reflection on the event’s significance or what it means for the writer’s understanding of Islam.

This writing sample demonstrates solid narrative skills and engagement with the subject matter. With some editing and refinement, it could become an even more substantial piece of writing.

Have you used Ollama or other open-source artificial intelligence to evaluate your writing? My next article will be about using Ollama and Llama3.1 to construct a rubric to evaluate student writing, along with the evaluation of an article I wrote for Opensource.com, 12 Essential Linux Commands for Beginners. All output from Ollama is provided in Markdown format. What implications does this have for your writing and for the evaluation of student writing?

Working with my AI chef

Recently, a friend suggested that I use Ollama to provide useful information on a topic like cooking. I used my experience customizing a Modelfile and the latest iteration of Llama3.1 to accomplish that goal. I will assume that you already have Ollama installed on your computer; if not, here’s a quick way to install it on your Linux computer. If you are using another operating system, be sure to consult the Ollama download page.

curl -fsSL https://ollama.com/install.sh | sh

Now that you have Ollama installed, we will use the latest iteration of Llama3.1. Download it to your computer:

ollama pull llama3.1

Using the nano editor, I created a Modelfile named ‘cooking’ specifically for this exercise.

FROM llama3.1:latest

# set the temperature to 1 (higher is more creative, lower is more coherent)
PARAMETER temperature 1

# Set the system prompt
SYSTEM """

You are a master chef. You provide detailed gluten free recipes on request.
"""

Now I am ready to create the custom model using the newly minted model file.

$ ollama create cooking -f ./cooking

Now I have a custom model that will function as an artificially intelligent chef. I run it with the following command:

$ ollama run cooking

I gave my intelligent assistant the following prompt:

>>>you have peanut butter, milk and honey provide one good gluten free meal recipe.

What a delightful combination! Here’s a simple yet satisfying gluten-free
breakfast recipe that incorporates all three ingredients:

Gluten-Free Peanut Butter Pancakes with Honey and Milk

Servings: 4-6 people

Ingredients:

  • 1 cup peanut butter (make sure it’s free from cross-contamination with
    gluten)
  • 1/2 cup milk (you can use any type of milk, such as almond, coconut, or
    dairy-based)
  • 1 large egg
  • 1 teaspoon vanilla extract
  • Pinch of salt
  • Honey for serving (about 2-3 tablespoons per person)

Instructions:

  1. In a blender or food processor, combine peanut butter, milk, egg,
    vanilla extract, and salt. Blend until smooth.
  2. The batter should be thick but still pourable. If it’s too thick, you
    can add a little more milk. If it’s too thin, you can add a bit more
    peanut butter.
  3. Heat a non-stick skillet or griddle over medium heat. Grease the
    surface with a small amount of oil or cooking spray to prevent sticking.
  4. Using a 1/4 cup measuring cup, scoop the batter onto the skillet or
    griddle.
  5. Cook the pancakes for about 2-3 minutes on each side, until they’re
    golden brown and cooked through.
  6. Serve the peanut butter pancakes hot with a drizzle of honey. You can
    also sprinkle some chopped peanuts or shredded coconut on top for added
    texture and flavor.

Tips and Variations:

  • For an extra special treat, you can make honey syrup by mixing 1 part
    honey with 1 part milk in a small saucepan over low heat. Whisk until the
    mixture is smooth and warm.
  • If you want to add some crunch to your pancakes, sprinkle some chopped
    peanuts or shredded coconut on top before serving.
  • Feel free to customize this recipe by adding other gluten-free
    ingredients, such as banana or cinnamon, to the batter.

Enjoy your delicious gluten-free peanut butter pancakes with honey!

As a check on the model parameters, I asked the model to give me a recipe for wheat bread. It responded with the following caveat.

“I’d be happy to provide you with a traditional wheat bread recipe!
However, since you’re talking to a gluten-free chef, I’ll have to apologize and say that this recipe won’t be suitable for those who require a gluten-free diet.”
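
The custom model is also available to the ollama Python module, just like the stock models. Here is a minimal sketch, assuming the ‘cooking’ model created above and the Python module installed with ‘pip install ollama’:

import ollama

# Query the custom 'cooking' model built from the Modelfile above
response = ollama.generate(
    model='cooking',
    prompt='You have peanut butter, milk and honey. Provide one good gluten free meal recipe.',
)
print(response['response'])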

Ollama is open source with an MIT license. Llama3.1 has a community license.

Using Python to talk with an Ollama model

Continuing my exploration of using a locally hosted Ollama on my Linux desktop computer, I have been doing a lot of reading and research. Today, while I was having lunch with a university professor, he asked me some questions I didn’t have an immediate answer to. So, I went back to my research to find the answers.

My computer is a Linux desktop with an 11th-generation Intel Core i7-1165G7 processor and 64 gigabytes of RAM. Until today, I have been interacting with Ollama and several models, including Gemma, Codegemma, Phi-3, and Llama3.1, from the command line. Running the Ollama command-line client and interacting with LLMs locally at the Ollama REPL is useful, but I wanted to learn how to use Ollama in applications, and today I made a good start.

Python is my preferred language, and I use VSCodium as my editor. First, I needed to set up a virtual Python environment. I have a ‘Coding’ directory on my computer, but I wanted to set up a separate environment for this project.

$ python3 -m venv ollama

Next, I activated the virtual environment:

$ source ollama/bin/activate

Then, I needed to install the ‘ollama’ module for Python.

$ pip install ollama

Once the module was installed, I opened VSCodium and got to work. I used the ‘ollama list’ command to make sure that ‘codegemma’ was installed, then tailored a code snippet I found online to generate some Python code to draw a circle.

import ollama

# Ask the locally installed codegemma model to generate a program
response = ollama.generate(model='codegemma', prompt='Write a Python program to draw a circle spiral in three colors')
print(response['response'])

The model query took some time to complete. Despite having a powerful computer, the lack of a GPU significantly impacted performance, even on such a minor task. The resulting code looked good.

import turtle

# Set up the turtle
t = turtle.Turtle()
t.speed(0)

# Set up the colors
colors = ['red', 'green', 'blue']

# Set up the circle spiral parameters
radius = 10
angle = 90
iterations = 100

# Draw the circle spiral
for i in range(iterations):
    t.pencolor(colors[i % 3])
    t.circle(radius)
    t.right(angle)
    radius += 1

# Hide the turtle
t.hideturtle()

# Keep the window open
turtle.done()
Screen picture by Don Watkins, CC BY-SA 4.0
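
Because a machine without a GPU can take a while to produce a full response, you may prefer to stream the output as it is generated instead of waiting for the complete answer. A sketch, assuming the module’s streaming option:

import ollama

# Stream the response a chunk at a time instead of waiting for the whole thing
stream = ollama.generate(model='codegemma',
                         prompt='Write a Python program to draw a circle spiral in three colors',
                         stream=True)
for chunk in stream:
    print(chunk['response'], end='', flush=True)
print()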

Contemporary AI Ethics – Can We Safeguard Innovation Without Falling into Repetitive Cycles?

Years ago, I watched a TED talk by Larry Lessig about laws that stifle creativity. He made several excellent points in his speech, and it got me thinking about whether we are reaching a critical point in laws regulating the use of generative AI. Recently, I listened to a podcast where the host claimed that there is no truly open-source AI and that, eventually, an incestuous situation could develop from web scraping to train large language models (LLMs): these models create content, then new content is generated from the content they created, potentially resulting in a twenty-first-century Tower of Babel.

Do we need to build on the ideas presented in Larry’s influential talk to adapt to the current reality? Will large language models and other forms of artificial intelligence lower the quality of our culture and intelligence, or will they enhance culture and creativity as we’ve seen in the seventeen years since his talk?

Working with Ollama

Now that I’m working with Ollama, I needed to figure out where the models live on my storage medium and how much space they occupy. This is especially important on the MacBook Air, which has a 256 GB drive that is already half full. My Linux computer, on the other hand, has a terabyte NVMe drive, so I’m not as worried about storage there. However, I still like to track where everything is and how much storage it uses. I wanted to compile a list of the downloaded models and learn how to delete them.
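
Ollama keeps its models in a ‘.ollama/models’ directory. On MacOS that directory lives in your home folder; the Linux install script sets up a system service that typically stores models under /usr/share/ollama/.ollama/models instead. A short Python sketch, assuming the home-directory location, totals the space they occupy:

from pathlib import Path

# Assumes the user-level default location; a Linux service install
# may keep models under /usr/share/ollama/.ollama/models instead.
models = Path.home() / '.ollama' / 'models'
total = sum(f.stat().st_size for f in models.rglob('*') if f.is_file())
print(f'{models}: {total / 1e9:.1f} GB')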

After Ollama is installed, you can get a list of the available commands by entering the following command in the terminal.

$ ollama help

Usage:
  ollama [flags]
  ollama [command]

Available Commands:
  serve       Start ollama
  create      Create a model from a Modelfile
  show        Show information for a model
  run         Run a model
  pull        Pull a model from a registry
  push        Push a model to a registry
  list        List models
  ps          List running models
  cp          Copy a model
  rm          Remove a model
  help        Help about any command

Flags:
  -h, --help      help for ollama
  -v, --version   Show version information


Check the version of ollama you are using with:

$ ollama -v

ollama version is 0.3.9


Downloading models is easy using ‘ollama pull’:

$ ollama pull gemma2

This downloads the model to your local computer. You can also trigger a download by running a model directly:

$ ollama run phi3

This command pulls the ‘phi3’ model down to your computer and then runs it. After I had tried a few models, I wanted to get a sense of how much space they were taking up on my system, so I used the ‘list’ (or ‘ls’) subcommand of the ‘ollama’ command.

$ ollama list
NAME            	ID          	SIZE  	MODIFIED     
llama3.1:latest 	f66fc8dc39ea	4.7 GB	2 hours ago 	
codegemma:latest	0c96700aaada	5.0 GB	43 hours ago	
phi3:latest     	4f2222927938	2.2 GB	47 hours ago	
gemma2:latest   	ff02c3702f32	5.4 GB	3 days ago  	
phi3:medium     	cf611a26b048	7.9 GB	3 days ago  	
gemma2:2b       	8ccf136fdd52	1.6 GB	3 days ago  

Now I can see how much storage the models are using. It’s easy to delete a model. I decided to delete the oldest one first.

$ ollama rm gemma2
deleted 'gemma2'

Just to be sure, I ran ‘ollama ls’; sure enough, the model was gone.

NAME            	ID          	SIZE  	MODIFIED     
llama3.1:latest 	f66fc8dc39ea	4.7 GB	2 hours ago 	
codegemma:latest	0c96700aaada	5.0 GB	44 hours ago	
phi3:latest     	4f2222927938	2.2 GB	47 hours ago	
phi3:medium 

You can check on how many models are running on your machine with the following command:

$ ollama ps
NAME       	ID          	SIZE  	PROCESSOR	UNTIL              
phi3:latest	4f2222927938	6.0 GB	100% CPU 	4 minutes from now	

The ‘show’ command displays information about a model:

$ ollama show phi3
  Model                                 
  	arch            	phi3  	                
  	parameters      	3.8B  	                
  	quantization    	Q4_0  	                
  	context length  	131072	                
  	embedding length	3072  	                
  	                                      
  Parameters                            
  	stop	"<|end|>"      	                   
  	stop	"<|user|>"     	                   
  	stop	"<|assistant|>"	                   
  	                                      
  License                               
  	Microsoft.                          	  
  	Copyright (c) Microsoft Corporation.

Stop a model from running by pressing Ctrl+C. Exit the model by entering Ctrl+D. In conclusion, Ollama has revolutionized my understanding of and interaction with AI.

Breaking Free from the Cloud: Exploring the Benefits of Local, Open-Source AI with Ollama

Everywhere you look, someone is talking or writing about artificial intelligence. I have been keenly interested in the topic since my graduate school days in the 1990s. I have used ChatGPT, Microsoft Copilot, Claude, Stable Diffusion, and other AI software to experiment with how this technology works and satisfy my innate curiosity. Recently, I discovered Ollama, an open-source tool that runs large language models, including Meta’s Llama models, locally on Linux, MacOS, and Microsoft Windows. There is a great deal of concern that while using LLMs in the cloud, your data is being scraped and reused by one of the major technology companies. Ollama is open source and has an MIT license, and since it runs locally, there is no danger that your work could end up in someone else’s LLM.

The Ollama website proclaims, “Get up and running with Large Language Models.” That invitation was all I needed to get started. Open a terminal on Linux and enter the following to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

The project lists all the models that you can use, and I chose the first one on the list, Llama3.1. Installation is easy, and it did not take long to download the Llama3.1 model. I followed the instructions and, in the terminal, entered the following command:

$ ollama run llama3.1

The model began to download, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was prompted to ‘talk’ with the LLM. I decided to ask a question about the history of my alma mater, St. Bonaventure University. I entered the following commands:

$ ollama run llama3.1
>>>What is the history of St. Bonaventure University?

The results were good but somewhat inaccurate. “St. Bonaventure University is a private Franciscan university located in Olean, New York. The institution was founded by the Diocese of Buffalo and has a rich history dating back to 1856.” St. Bonaventure is located near Olean, New York, and it is in the Diocese of Buffalo, but it was founded in 1858. I asked the model to name some famous St. Bonaventure alumni, and the inaccuracies there were comical: Bob Lanier was a famous alumnus, but Danny Ainge was not.

The results are rendered in Markdown, which is a real plus. I also knew that having a GPU would render the results much more quickly, so I wanted to install Ollama on my M2 MacBook Air, which I soon did. The directions there were much easier: download Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program is installed in the MacBook’s Applications folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.

Unsurprisingly, Ollama uses a great deal of processing power, which is lessened if you run it on a computer with a GPU. My Intel NUC 11 is a very powerful desktop computer with a quad-core 11th Gen Intel Core i7-1165G7, 64 gigabytes of RAM, and a robust internet connection for downloading additional models. I posed similar questions to the Llama3.1 model, first on the Intel NUC running Linux and then on the M2 MacBook Air running MacOS. You can see the CPU utilization below on my Linux desktop. It’s pegged, and the output from the model is slow, at an approximate rate of 50 words per minute. Contrast that with the M2 MacBook Air, which has a GPU: its CPU utilization was approximately 6.9%, and it produced words faster than I could read.

Screen picture by Don Watkins, CC BY-SA 4.0

While Ollama and Llama3.1 might not excel at history recall, they do very well when asked to create Python code. I entered a prompt asking for Python code to draw a circle, without specifying how to accomplish the task. It rendered the code shown below. I had to install the ‘pygame’ module, which was not on my system:

$ sudo apt install python3-pygame
# Python Game Development

import pygame
from pygame.locals import *

# Initialize the pygame modules
pygame.init()

# Create a 640x480 size screen surface
screen = pygame.display.set_mode((640, 480))

# Define some colors for easy reference
WHITE = (255, 255, 255)
RED = (255, 0, 0)

while True:
    # Handle events
    for event in pygame.event.get():
        if event.type == QUIT or (event.type == KEYDOWN and event.key == K_ESCAPE):
            pygame.quit()
            quit()

    screen.fill(WHITE)  # Fill the background with white color

    # Drawing a circle on the screen at position (250, 200), radius 100
    pygame.draw.circle(screen, RED, (250, 200), 100)
    
    # Update the full display Surface to the screen
    pygame.display.flip()

I copied the code into VSCodium and ran it. You can see the results below.

Screen picture by Don Watkins, CC BY-SA 4.0

As I continue experimenting with Ollama and other open-source LLMs, I’m struck by the significance of this shift toward local, user-controlled AI. No longer are we forced to rely on cloud-based services that may collect our data without our knowledge or consent. With Ollama and similar projects, individuals can harness the power of language models while maintaining complete ownership over their work and personal information. This newfound autonomy is a crucial step forward for AI development and I’m eager to see where it takes us.

Sharing Tech for Good: How I Gifted My Neighbor a Refurbished Linux Mint Laptop

I volunteer with Meals on Wheels once a week, and one of the clients on my route is a man my age. We frequently spend time visiting when I bring his meal to him. Last week, when I stopped by, he asked me if I could help him put his laptop computer back in working order. I told him I’d do him one better and install Linux Mint on his ailing computer, extending its life.

After checking with him about the model of his laptop, I discovered that it was a lightly powered notebook with minimal disk storage. Meanwhile, I had a five-year-old System76 Darter Pro just sitting around gathering dust, so I decided to install Linux Mint Cinnamon 22, my daily driver, on it. Though the Darter Pro is five and a half years old, it still has lots of life in it: an Intel i7 CPU with 16 gigabytes of RAM and a 250 gigabyte NVMe drive, which would make it a fantastic machine for my client.

When I shared my decision with him, he was delighted, and though he trusted my judgment, he knew nothing about Linux Mint and wondered if he could master the learning curve. I told him that Mint Cinnamon has many similarities with the Windows 10 desktop he was used to. I searched YouTube and other sites for an excellent introduction to Linux Mint but could not find the kind of documentation I wanted to give my friend. So I decided to write my own documentation, complete with screenshots of the critical first steps to get started with a Linux Mint laptop.

This is the first screen you see after you start up the laptop. His name is Tom, and I shared his password in the documentation. I knelt next to him as I helped him through these first steps.

Linux Mint Login screen

The next thing he sees is the Welcome to Linux Mint splash screen. I provide a brief explanation and then encourage him to return here later to refer to the documentation and help information.

I direct his attention to the screen’s lower right-hand corner and guide him through connecting to his wireless access point and network.

I explained to him that the fourth item from the left is the ‘network connection’ icon, and he needs to click on it to display the wireless networks that are available to connect with. We quickly found his local connection.

He was unsure of his wireless password, but having helped others who use the same internet service provider, I remembered that the provider uses the client’s mobile phone number as the password. That worked well, and we were connected. I walked him through installing the necessary updates on his new Linux laptop, which also let me demonstrate the security of Linux systems, as each update required his password. Linux Mint comes with Firefox by default, but I also showed him how to add the Google Chrome browser and some other applications using the Software Manager.

Eventually, I showed him how to use most of the essential tools of this new Linux Mint computer, including ‘Nemo’, the Linux Mint file manager.

After almost forty-five minutes of initial setup and instruction, he was ready to dive in and experience Linux Mint firsthand. This morning, I received an email from Tom, and it warmed my heart. He said, “I was on my new computer till 11:00 last night. I’m like a kid with a new toy. I want to thank you for the info you sent on linux.I perused the emails and quickly learned I would never be a Linux expert…Again, a thousand thanks for the laptop. I was in a bind, and you really got my feet from the fire. Sincerely, tom.”

His email made my day and reminded me why I love to share the Linux and open-source journey with others. We can extend the life of hardware and demonstrate the effectiveness of the open-source approach.