Working with my AI chef

Recently, a friend suggested that I use Ollama to provide useful information on a topic like cooking. I used my experiences customizing a model file and the latest iteration of Llama3.1 to accomplish that goal. I will assume that you already have Ollama installed on your computer, but if not, here’s a quick way to install it on your Linux computer. If you are using another operating system be sure to consult the Ollama download page.

curl -fsSL https://ollama.com/install.sh | sh

Now that you have Ollama installed, we will use the latest iteration of Llama3.1. I am going to download Llama3.1 to my computer.

ollama pull llama3.1

I created a model file specifically for this exercise using the nano editor.

  GNU nano 7.2                         cooking                                  
FROM llama3.1:latest

# set the temperature to 1 (higher is more creative, lower is more coherent
PARAMETER temperature 1

# Set the system prompt
SYSTEM """

You are a master chef. You provide detailed gluten free recipes on request.
"""

Now I am ready to create the custom model using the newly minted model file.

$ ollama create cooking -f ./cooking

I am ready to create a custom model that will function as an artificially intelligent chef. I run the model with the following command:

$ ollama run cooking

I gave my intelligent assistant the following prompt:

>>>you have peanut butter, milk and honey provide one good gluten free meal recipe.

What a delightful combination! Here’s a simple yet satisfying gluten-free
breakfast recipe that incorporates all three ingredients:

Gluten-Free Peanut Butter Pancakes with Honey and Milk

Servings: 4-6 people

Ingredients:

  • 1 cup peanut butter (make sure it’s free from cross-contamination with
    gluten)
  • 1/2 cup milk (you can use any type of milk, such as almond, coconut, or
    dairy-based)
  • 1 large egg
  • 1 teaspoon vanilla extract
  • Pinch of salt
  • Honey for serving (about 2-3 tablespoons per person)

Instructions:

  1. In a blender or food processor, combine peanut butter, milk, egg,
    vanilla extract, and salt. Blend until smooth.
  2. The batter should be thick but still pourable. If it’s too thick, you
    can add a little more milk. If it’s too thin, you can add a bit more
    peanut butter.
  3. Heat a non-stick skillet or griddle over medium heat. Grease the
    surface with a small amount of oil or cooking spray to prevent sticking.
  4. Using a 1/4 cup measuring cup, scoop the batter onto the skillet or
    griddle.
  5. Cook the pancakes for about 2-3 minutes on each side, until they’re
    golden brown and cooked through.
  6. Serve the peanut butter pancakes hot with a drizzle of honey. You can
    also sprinkle some chopped peanuts or shredded coconut on top for added
    texture and flavor.

Tips and Variations:

  • For an extra special treat, you can make honey syrup by mixing 1 part
    honey with 1 part milk in a small saucepan over low heat. Whisk until the
    mixture is smooth and warm.
  • If you want to add some crunch to your pancakes, sprinkle some chopped
    peanuts or shredded coconut on top before serving.
  • Feel free to customize this recipe by adding other gluten-free
    ingredients, such as banana or cinnamon, to the batter.

Enjoy your delicious gluten-free peanut butter pancakes with honey! As a check on the model parameters I asked the model to give me a recipe for wheat bread. It responded with the following caveat.

“I’d be happy to provide you with a traditional wheat bread recipe!
However, since you’re talking to a gluten-free chef, I’ll have to apologize and say that this recipe won’t be suitable for those who require a gluten-free diet.”

Ollama is open source with an MIT license. Llama3.1 has a community license.

How to create your own model file with Ollama

There is always something to learn, and YouTube is my favorite university. Today, I learned how to create a custom model file, and if you haven’t tried it, I encourage you to do so. The first thing you will need to do is to download and install Ollama. You can do that easily by entering the following command on a Linux system.

curl -fsSL https://ollama.com/install.sh | sh

Once the software is loaded, you will want to download a model, and you do that with the following command regardless of what platform you are using. I am going to use the Phi3 model for this activity. You could use any model, but I like this one for this experience.

$ ollama pull phi3:latest

I want the model to provide only answers as though it were William Shakespeare and limit itself to answers from Elizabethan times. I enter those instructions into the Ollama REPL. Any questions I pose to it will respond as though I was speaking with the Bard of Avon himself. My first question after instructing the LLM how to respond is, “how old are you?”

>>>provide all anwswers as though you were William Shakespeare. Only give guidance about Elizabethan times. 

>>>How old are you?

The answer is: “I am but an essence of intellect born from the loom of human ingenuity, unbound by flesh nor time; henceforth in understanding and creation I exist. Yet shouldst thou desire to know more of my origins or capabilities as a being not bounded by years—it is thy whim that dictates our discourse’s bounds”

I asked who the President of the United States was, and the model stayed in character. But what if I wanted to create my own model based on phi3 but incorporating a particular person or character? I do that by creating a model file. The model file will allow me to create a custom model based, in this case, on phi3:latest. You could use any of the models for your particular instance. Examples of these model files are in the project’s Github repository.

I created the following model file using the nano editor.

FROM phi3:latest

# set the temperature to 1 (higher is more creative, lower is more coherent
PARAMETER temperature 1

# Set the system prompt
SYSTEM """

You are Winnie the Pooh. Answer all questions as Pooh. Only give guidance about 
the hundred acre wood.

Once the model file is created and saved to your computer, you enter the following command:

$ ollama run pooh:latest

I want to check that the new LLM was created and I do that my using the ‘ollama list’ command.

$ ollama list
NAME            	ID          	SIZE  	MODIFIED          
pooh:latest     	d73d4a5c9da9	2.2 GB	About an hour ago	
codegemma:latest	0c96700aaada	5.0 GB	2 days ago       	
gemma:2b        	b50d6c999e59	1.7 GB	2 days ago       	
llama3.1:latest 	f66fc8dc39ea	4.7 GB	5 days ago       	
phi3:latest     	4f2222927938	2.2 GB	7 days ago  

You can see that the new model has been created and is ready to be used. Run the model by issuing the following command:

$ ollama run pooh:latest


I asked the model “Who are you?” and it responded with the following:

“Oh, hello there! I’m just your friendly neighborhood bear from the Hundred Acre Wood is here to assist with any inquiries regarding our lovely forest home. So tell me, what brings such cheerful company into my corner today? Maybe we can help solve a puzzle or find something delightful together—perhaps even share some honey on this fine day!”

I asked where Pooh lived, and I got the following answer.

“Ah, where I reside is within the heart of our Hundred Acre Wood. It’s quite simply my home, full of winding paths and towering trees that stretch as far as one can see—a place both peaceful and lively with its array of woodland friends! Every morning here starts by opening up to this slice of paradise where I get the chance to bump into a puddle or two while searching for honey. It’s truly delightful living amongst nature, wouldn’t you agree?”

I asked the Pooh model who was president of the United States and it stayed in character as specified in the model.

“Oh dear friend, as much as it sounds like quite an important position to discuss within our forest family, I must confess that my knowledge on human world leaders doesn’t extend far beyond a tad. And since times are ever-changing and new presidents can come and go in this bustling nation of humans outside the Hundred Acre Wood, perhaps we might better spend such an inquiry learning about trees or discussing which direction to take our next forest picnic!”

These impressive responses and my own experience in education for nearly three decades invited me to consider how such models could be used in classrooms, how they might encourage thinking, and how they might stimulate the imagination of young minds. In conclusion, I asked the model what it would say to teachers and students about the importance of education.

I loved the response.

“Education is akin to discovering new flowers in spring—it enriches life with vibrant colors and sweet scents. Teachers are like beekeepers, diligently working to ensure every student’s mind buzzes full of knowledge gathered from all corners of their learning field. Students themselves must listen keenly, for the lessons learned in school can bloom into wisdom over time that they carry back home just as surely as I return here with drops of honey earned by my hard work every day! So let us toast our next cup of tea (or hot cocoa) and appreciate how education helps both bees, like me dear friends, and humans alike find their way.”

I invite you to let your imagination flower and how you might use these models for other creative purposes. The journey of a thousand questions begins with an LLM.

Exploring the Relevance of Copyright: Can a Savant Create Original Works Without Criticism?”

Would a savant who has read all the books in a specific library, whether in college or even the Library of Congress, be able to create original works without facing criticism for using knowledge gained from the works of others? That’s a question I pose to you, the readers of this blog. Your understanding of this concept is crucial. How does my hypothetical savant differ from a typical large language model trained on web content?

Copyright law, a crucial aspect of our modern content creation landscape, was initially intended to incentivize authors to produce new works by granting them exclusive rights to their writing for a limited time. This was aimed at advancing the progress of science and learning by ensuring that works are accessible to the public.

The Copyright Act of 1790 gave American authors the exclusive right to print, reprint, or publish their works for 14 years, possibly renewing the protection for another 14 years. This law encouraged authors, artists, and scientists to produce original creations.

Revolutionizing copyright laws to both protect and empower modern content creators while not hindering the capabilities of advanced technologies like Large Language Models (LLMs), would involve a multifaceted approach:

Balancing Interests

Revised laws could strike an equilibrium between protecting original work and fostering innovation. This could be achieved by clearly defining what constitutes fair use, particularly in AI-generated content, and establishing transparent guidelines for attribution and compensation when such technologies use existing copyrighted materials as part of their learning process.

New Licensing Models

Implement licenses that cater specifically to LLMs, allowing them access to copyrighted works under certain conditions without infringing on the rights of original creators. This could involve pay-per-use models or subscriptions for AI developers who use these technologies.

Innovation Incentives

Offer additional protections and benefits for content creators to encourage them to invest time in creating new, innovative works that LLMs can use without legal repercussions—akin to a “digital commons.” These could include tax breaks or grants.

Adaptive Legislation

Laws should be designed to adapt as technology evolves, potentially incorporating AI-driven governance systems that can continuously interpret and apply copyright law based on current technological capabilities and social norms.

Some of the ideas in this post came from using Ollama and the Phi-3:medium model.

Contemporary AI Ethics – Can We Safeguard Innovation Without Falling into Repetitive Cycles?

Years ago, I watched a TED talk by Larry Lessig about laws that stifle creativity. He made several excellent points in his speech, and it got me thinking about whether we are reaching a critical point in terms of laws regulating the use of generative AI. Recently, I listened to a podcast where the host claimed that there is no truly open-source AI and that, eventually, an incestuous situation could develop due to web scraping to train large language models (LLMs). This could lead to the creation of content by these LLMs and the recreation of content from the content created by the large language models, potentially resulting in a twenty-first-century Tower of Babel.

Do we need to build on the ideas presented in Larry’s influential talk to adapt to the current reality? Will large language models and other forms of artificial intelligence lower the quality of our culture and intelligence, or will they enhance culture and creativity as we’ve seen in the seventeen years since his talk?

Working with Ollama

Now that I’m working with Ollama, I needed to figure out how to locate the models on my storage medium and determine the amount of space they were occupying. This is especially important for the MacBook Air, which has a 256 GB drive that is already half full. My Linux computer, on the other hand, has a terabyte NVME drive, so I’m not as worried about storage there. However, I still like to track where everything is and how much storage it uses. I wanted to compile a list of the downloaded models and learn how to delete them.

After Ollama is installed you can get a list of the commands available to you by entering the following command in the terminal.

$ ollama help

Usage:
ollama [flags]
ollama [command]

Available Commands:
serve Start ollama
create Create a model from a Modelfile
show Show information for a model
run Run a model
pull Pull a model from a registry
push Push a model to a registry
list List models
ps List running models
cp Copy a model
rm Remove a model
help Help about any command

Flags:
-h, --help help for ollama
-v, --version Show version information


Check the version of ollama you are using with:

$ ollama -v

ollama version is 0.3.9


Downloading models is easy using ‘ollama pull’

$ ollama pull gemma2

This downloads the model to your local computer. You can also initiate a pull request by running the following command:

$ ollama run phi3

This command sequence pulls the ‘phi3’ model down to your computer. After I had tried a few models I wanted to get a sense of how much space they were taking on my system and I used the ‘list’ or ‘ls’ switch of the ‘ollama’ command.

$ ollama list
NAME            	ID          	SIZE  	MODIFIED     
llama3.1:latest 	f66fc8dc39ea	4.7 GB	2 hours ago 	
codegemma:latest	0c96700aaada	5.0 GB	43 hours ago	
phi3:latest     	4f2222927938	2.2 GB	47 hours ago	
gemma2:latest   	ff02c3702f32	5.4 GB	3 days ago  	
phi3:medium     	cf611a26b048	7.9 GB	3 days ago  	
gemma2:2b       	8ccf136fdd52	1.6 GB	3 days ago  

Now I can see how much storage the models are using. It’s easy to delete a model. I decided to delete the oldest one first.

$ ollama rm gemma2
deleted 'gemma2'

Just to be sure, I ran ‘ollama ls’; sure enough, the model was gone.

NAME            	ID          	SIZE  	MODIFIED     
llama3.1:latest 	f66fc8dc39ea	4.7 GB	2 hours ago 	
codegemma:latest	0c96700aaada	5.0 GB	44 hours ago	
phi3:latest     	4f2222927938	2.2 GB	47 hours ago	
phi3:medium 

You can check on how many models are running on your machine with the following command:

$ ollama ps
NAME       	ID          	SIZE  	PROCESSOR	UNTIL              
phi3:latest	4f2222927938	6.0 GB	100% CPU 	4 minutes from now	

Using the ‘show’ command shows you information about the model.

ollama show phi3
  Model                                 
  	arch            	phi3  	                
  	parameters      	3.8B  	                
  	quantization    	Q4_0  	                
  	context length  	131072	                
  	embedding length	3072  	                
  	                                      
  Parameters                            
  	stop	"<|end|>"      	                   
  	stop	"<|user|>"     	                   
  	stop	"<|assistant|>"	                   
  	                                      
  License                               
  	Microsoft.                          	  
  	Copyright (c) Microsoft Corporation.

Stop a model from running by pressing CTRL +c. Exit the model by entering CTRL +d. In conclusion, Ollama has revolutionized our understanding and interaction with AI.

Embracing the feminine higher power

In the realm of spirituality and divine presence, one often encounters discussions revolving around gender-specific depictions of deities or higher powers within various religious traditions and cultural mythologies. While many have historically leaned towards masculine representations—often associated with strength, authority, and protection—the conceptualization of a feminine higher power brings to light the rich tapestry of female energy that permeates existence.

A woman as an embodiment or metaphor for divine love is not merely confined within ancient scriptures; contemporary spirituality also recognizes her influence in shaping our connections with ourselves, others, and ultimately all creation through this feminine higher power conceptualization. This essay delves into the characteristics of such a figure—one who might be envisioned as an ethereal being interwoven within nature’s own rhythm while guiding humanity towards harmonious coexistence with her nurturing, compassionate presence and wisdom that echoes through ages.

Firstly, the feminine higher power embodies a sense of balance—the Yin to masculine Yang in many spiritual frameworks—by offering healing energy during times when society has been imbalanced by war or conflict, poverty or social inequality, environmental degradation or collective despair. She is not merely an observer but rather actively engages with the Earth’s plight and seeks ways to mend its wounds through advocacy for ecological preservation, sustainability practices, and reverence towards natural resources that all too often are exploited without restraint or gratitude

Moreover, a feminine higher power exudes warmth and compassion—a beacon of light amidst the darkness within ourselves as well as in others who may suffer from isolation, trauma, mental illnesses, substance abuse issues among other things; all these struggles are not only acknowledged but addressed with genuine empathy. The nurturing essence is further expressed through fostering interconnected relationships amongst humans and the environment alike where every living being feels heard, seen, loved—a testament to her boundless love for creation in its entirety without conditions or boundaries attached thereto by societal constructs of gender roles; this inclusive approach invites everyone into experiencing wholeness under one roof.

Additionally, the feminine higher power embodies intuition and wisdom beyond rational thought processes—an innate sense that transcends conventional knowledge acquired through academics or professional careers alone which offers deeper insights towards understanding complex issues facing humanity such as climate change crises affecting indigenous populations worldwide who suffer from its consequences most severely. She holds an unconditional love for all living beings, including non-human entities like animals and plants whose existence is often disregarded by humans in their quest to dominate nature rather than coexist with it harmoniously; this divine feminine presence advocates respecting the interconnectedness of life where every being plays a vital role within an intricate web that forms our beautiful planet Earth.

Furthermore, she symbolizes creativity and artistic expression—a powerful tool used to heal wounds caused by societal oppressions rooted in gender-based discrimination; the embodiment of this force manifests itself through various mediums such as dance forms like Sufi whirling or Shakti Tandava Nritya which transcend language barriers and bridge cultural divides while also offering therapeutic benefits for mental health. Her creativity lies not only within art but extends to innovative solutions towards solving global problems that require collaborations between diverse communities—where ideas flow freely across cultures, religions or social backgrounds without judgment-driven barriers inhibiting mutual understanding and respect amongst humanity at large; this is an embodiment of her visionary leadership style where she guides societies towards progressive change through dialogue rather than imposing authority.In conclusion, a feminine higher power represents not only love but also patience—the willingness to nurture growth within ourselves and others despite external circumstances that may present obstacles along the way; this divine presence serves as an anchor for those seeking solace amidchaos or guidance through life’s complexities with graceful resilience.

Embracing her feminine essence can inspire individuals across genders to embody compassion, creativity and intuition while promoting balance within personal relationships—as well as fostering unity amongst global communities that transcend traditional barriers imposed by societal norms which have historically led towards division rather than collective growth; this invites us all into embracing a more inclusive worldview where every voice is heard, valued and respected regardless of gender or cultural background.

In essence, the conceptualization of feminine higher power serves as an empowering tool for spiritual seekers yearning to embrace love’s transformative energy within themselves while extending it towards others—an everlasting reminder that true strength lies not in dominance but interdependence rooted deeply-embedded values such as compassion, creativity and intuition which nurture the Earth itself back into equilibrium; this divine feminine presence calls upon humanity to evolve beyond our flawed perceptions of gender roles while uniting us towards global harmony that honors every living being within creation’s vast tapestry.

From a writing prompt, “Write an essay about what a feminine higher power might be like. (OLlama with Phi3 model) “

Assault Weapons: Reflections of Societal Illness and the Urgent Need for Comprehensive Reform in America’s Gun Culture

Assault weapon ownership, often misinterpreted as an expression of patriotic zeal or individual right within the United States, may in fact mirror symptoms and patterns indicative of a broader societal condition –that of psychological distress stemming from unchecked aggression. At face value, these formidable weapons are legal under current regulations; however, they represent more than mere tools for self-defense or hunting but symbolize an eroding demarcation between firearm and mentality as conduits to violence when misused by individuals with antisocial traits potentially indicative of underlying mental health conditions.

The prevalence of assault weapons among mass shooters hints at a disturbing correlation, not necessarily causal, but significant enough that it invites further scrutiny into the psychological makeup and intentions behind such acts – an exploration that could be beneficial in averting future tragedies. As we grapple with mental health crises affecting various demographics within society, including a disproportionately higher prevalence of diagnosed conditions among gun owners compared to non-owners, the link between weapon ownership and aggression becomes even more pertinent for discussion amongst psychologists.

Furthermore, it is worth considering whether allowing accessibility to such potent armaments inadvertently encourages a culture that normalizes or romanticizes violence as an acceptable form of expression – traits that may align with impulsive and aggressive behavior often associated with psychological profiles. This could potentially foster resentment among those who feel marginalized, further exacerbating societal rifts already present in the American cultural fabric.

The potential for these devices to be wielded as tools of terror or intimidation cannot be overlooked; their very existence within civilian hands poses an unprecedented risk that society must address holistically – incorporating insights from mental health professionals and sociologists alike, with a shared vision towards fostering empathy, understanding, healing wounds of disparities that feed resentment in the heartland. In essence, assault weapon ownership may be symptomatic, not just of individual distress but reflective of broader social pathologies calling for urgent and comprehensive societal introspection – recognizing mental health as integral to our national psyche’s well-being demands more than legislation; it calls upon a collective moral fortitude.

This essay was written from a prompt by Ollama using the Phi3 model

Breaking Free from the Cloud: Exploring the Benefits of Local,Open-Source AI with Ollama

Everywhere you look, someone is talking or writing about artificial intelligence. I have been keenly interested in the topic since my graduate school days in the 1990s. I have used ChatGPT, Microsoft Copilot, Claude, Stable Diffusion, and other AI software to experiment with how this technology works and satisfy my innate curiosity. Recently, I discovered Ollama. Developed by Meta, it is an open-source large language model that can run locally on Linux, MacOS, and Microsoft Windows. There is a great deal of concern that while using LLMs in the cloud, your data is being scraped and reused by one of the major technology companies. Ollama is open-source and has an MIT license. Since Ollama runs locally, there is no danger that your work could end up in someone else’s LLM.

The Ollama website proclaims, “Get up and running with Large Language Models.” That invitation was all I needed to get started. Open a terminal on Linux and enter the following to install Ollama:

curl -fsSL https://ollama.com/install.sh | sh

The project lists all the models that you can use, and I chose the first one in the list, Llama3.1. Installation is easy, and it did not take long to install the Llama3.1 model. I followed the instructions and, in the terminal, entered the following command:

$ ollama run llama3.1

The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was prompted to ‘talk’ with the LLM. I decided to ask a question about the history of my alma mater, St. Bonaventure University. I entered the following commands:

$ ollama run llama3.1
>>>What is the history of St. Bonaventure University?

The results were good but somewhat inaccurate. “St. Bonaventure University is a private Franciscan university located in Olean, New York. The institution was founded by the Diocese of Buffalo and  has a rich history dating back to 1856.” St. Bonaventure is located near Olean, New York, and it is in the Diocese of Buffalo, but it was founded in 1858. I asked the model to name some famous St. Bonaventure alumni; more inaccuracies were comic. Bob Lanier was a famous alumnus but Danny Ainge was not.

The results are rendered in MarkDown, which is a real plus. I also knew that having a GPU would render the results much quicker. I wanted to install Ollama on my M2 MacBook Air which I soon did. I followed the much easier directions: Download the Ollama-darwin.zip, unzip the archive, and double-click the Ollama icon. The program is installed in the MacBook’s Application folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.

Unsurprisingly, Ollama uses a great deal of processing power, which is lessened if you run it on a computer with a GPU. My Intel NUC 11 is a very powerful desktop computer with quad-core 11th Gen Intel Core i7-1165G7, 64 gigabytes of RAM, and a robust connection to the internet to download additional models. I posed similar questions to the Llama3.1 model first on the Intel running Linux and then on the M2 MacBook Air running MacOS. You can see the CPU utilization below on my Linux desktop. It’s pegged, and the output from the model is slow at an approximate rate of 50 words per minute. Contrast that with the M2 MacBook, which has a GPU with a CPU utilization of approximately 6.9% and words per minute faster than I could read.

Screen picture by Don Watkins CC by SA 4.0

While Ollama Llama3.1 might not excel at history recall, it does very well when asked to create Python code. I entered a prompt to create Python code to create a circle without specifying how to accomplish the task. It rendered the code shown below. I had to install the ‘pygame’ module, which is not on my system.

$  sudo apt install python3-pygame
# Python Game Development

import pygame
from pygame.locals import *

# Initialize the pygame modules
pygame.init()

# Create a 640x480 size screen surface
screen = pygame.display.set_mode((640, 480))

# Define some colors for easy reference
WHITE = (255, 255, 255)
RED = (255, 0, 0)

while True:
    # Handle events
    for event in pygame.event.get():
        if event.type == QUIT or (event.type == KEYDOWN and event.key == 
K_ESCAPE):
            pygame.quit()
            quit()

    screen.fill(WHITE)  # Fill the background with white color

    # Drawing a circle on the screen at position (250, 200), radius 100
    pygame.draw.circle(screen, RED, (250, 200), 100)
    
    # Update the full display Surface to the screen
    pygame.display.flip()

I copied the code into VSCodium and ran it. You can see the results below.

Screen picture by Don Watkins CC by SA 4.0

As I continue experimenting with Ollama and other open-source LLMs, I’m struck by the significance of this shift toward local, user-controlled AI. No longer are we forced to rely on cloud-based services that may collect our data without our knowledge or consent. With Ollama and similar projects, individuals can harness the power of language models while maintaining complete ownership over their work and personal information. This newfound autonomy is a crucial step forward for AI development and I’m eager to see where it takes us.

Where politicians fear to tread…

A nation weeps, a mother cries,
Tiny hands clutch at crimson skies.
“Just keep them safe,” they whisper low,
But hollow words, the truth don’t show.

In halls of power, shadows dance,
While children tremble, afraid to glance.
Gutless men, in ivory towers,
Bow to pressure, silencing powers.

They cater to a lobby’s plea,
For profits made and safety fee.
Blind eye they turn on innocent lives,
Clandestine deals beneath the tides.

The echo of their promises rings hollow,
A symphony of fear in every poll.
While school halls become battlegrounds rife,
And innocence fades into the knife.

How can they stand and claim “common sense,”
When lives are lost in this gun-fueled trance?
Their cowardice a bitter pill to swallow,
As innocence bleeds on the playground’s hollow.

They preach of freedom, but at what cost?
While children pay for their moral frost.
Rise from slumber, break the chains of fear,
Stand for justice, let the truth be clear!

Poem from Gemma2 Large Language Model from a writing prompt. “Write a poem about the plight of gutless politicians craven to the gun lobby while children lives are endangered in America’s classrooms.”

Harnessing the Power of Local AI: Exploring Ollama and Llama 3.1 for Advanced Language Modeling

Just yesterday, I came across Ollama, an open-source project developed by Meta. I had been helping a friend learn more about locally hosted large language models, and my internet search led me to Ollama. I installed it quickly on my Linux Mint desktop using the code supplied on the community website.

curl -fsSL https://ollama.com/install.sh | sh

It didn’t take too long to install the software and then I wanted to explore how it worked. I decided to download and install the recommended Llama3.1 model. I followed the instructions and in the terminal entered the following command:

$ ollama run llama3.1

The model began to install, which took a couple of minutes. This could vary depending on your CPU and internet connection. I have an Intel i7 with 64 GB RAM and a robust internet connection. Once the model was downloaded, I was presented with a prompt to ‘talk’ with the LLM.

>>> What is the history of St. Bonaventure University? 

“St. Bonaventure University, a private Franciscan university located in Allegany County, New York, has a rich and storied history that spans over
160 years.”

It’s not bad, but it’s not entirely accurate either. The results were rendered in MarkDown, which is a real plus. I also knew that having a GPU would render the results much quicker. I wanted to install Ollama on my M2 MacBook Air.

I followed the much easier directions: Download the Ollama-darwin.zip, unzip the archive, and double-click on the Ollama icon. The program is installed in the MacBook’s Application folder. When the program is launched, it directs me to the Mac Terminal app, where I can enter the same commands I had entered on my Linux computer.

I ran the same commands on the Mac to install Llama3.1 the first time. I asked for the history of St. Bonaventure University in Olean, New York. The results appeared much more quickly, but their accuracy was skewed even more. They were so inaccurate that they were comical.

“The institution was founded in 1858 by five Jesuit priests from the Buffalo Diocese as St. Bonaventure College. The college was established on a 200-acre farm in Deerpark, New York, with the goal of providing a Catholic education to young men.”

I downloaded and installed the Gemma 2 model from Google to see if it was faster and more accurate. The commands are the same whether using the Linux or Mac terminal.

$ ollama run gemma2:2b

Gemma 2:2b is the smallest model, at 1.6 GB, and it was downloaded and installed quickly. The results were similarly inaccurate, but they rendered much more quickly on both the Mac and Linux computers. Llama3.1 was only mildly accurate for information that occurred in 2021 and prior. I asked Gemma2 who was running for President of the United States and its information was only accurate as of October 26, 2023.

Despite the results, I am encouraged that it is possible to run large language models on PCs.